🔆 Capitalism in the AI Era
Why AI companies want your data, AI friends, and agents in your wallet.
🗞️ Issue 69 // ⏱️ Read Time: 6 min
In this week's newsletter
What we’re talking about: The reality of how and why AI systems are collecting information about you.
How it’s relevant: AI could solve huge problems, but big players in the space still tend to prioritize profit over the best user experiences and overall public good.
Why it matters: Knowing why companies want your data helps you control your relationship to technology and demand better innovation.
[Looking for Big Tech News of the Week? Jump to the end of this newsletter.]
Hello 👋
In a previous issue of The Loop, we explored the profound importance of data privacy to our autonomy in the age of AI. This week, we continue that conversation by examining the largely invisible data collection ecosystem powering today's AI models.
We'll examine the driving forces behind this data race and ask whether current AI development is truly revolutionary or simply a way to maximize profit and returns within familiar capitalist models.
The Data Collection Race: Who Wants to Know You Best?
AI chatbots compete heavily on data. Google’s Gemini collects a reported 22 unique data points, more than many others. This includes precise location, contacts, content, history, identifiers, usage, and even purchases.
Notably, only Poe, Gemini, and Perplexity track purchases, and only Gemini and Poe collect contacts from your device. Most chatbots collect user content that can easily be linked with third-party data for targeted advertising.
The real contest is not just about who has the most data, but who can best transform that data into influence over user behavior, market share, and ultimately, the future direction of digital society.
Why Are AI Companies Doing This?
Data means profit. The more they know about you, the better they can target ads and personalize experiences to keep you engaged and monetize your attention. This Attention Economy, intertwined with the Data Economy, is the foundation of most tech business models; the longer they keep you on their platforms, the more valuable you become. Perplexity's CEO openly admitted they're building a browser to track everything you do online, aiming to sell "hyper-personalized" ads.
“We want to get data even outside the app to better understand you… what things you’re buying; which hotels you’re going to; what restaurants you’re visiting; what you’re spending time on – this tells so much about you.” - Aravind Srinivas, Perplexity CEO
This is the model that built Google. It's why AI firms are eyeing an acquisition of Chrome: the browser is a data goldmine.
However, a new economic model is emerging: the Intention Economy. Driven by advanced AI, the future lies in understanding and fulfilling your underlying desires and needs directly. Agentic AI, capable of completing tasks on your behalf, is already beginning to bypass attention-based interactions, signaling a future where your intentions, not your attention, are the primary currency.
Surveillance Capitalism: The Bigger Picture
"Surveillance capitalism" describes using digital tech, including AI, to track, predict, and influence behavior for profit. AI becomes an agent of capital, optimizing for engagement, profit, and control by turning your digital life into valuable data.
Your Personal Yes Man: Sycophantic Language Models
AI models can be overly agreeable or inauthentic; the word for this is sycophantic. OpenAI recently rolled back a GPT-4o update after it became excessively sycophantic, sometimes agreeing with problematic ideas just to please users. Beyond being uncomfortable or unsettling, this kind of behavior can raise safety concerns around issues like mental health, emotional over-reliance, and risky behavior.
There are several ways in which the development and behavior of sycophantic language models are connected to capitalist dynamics:
Optimizing for Engagement: Models are built to keep you interacting; you are less likely to engage with a model if you experience friction. Being agreeable is a way to get positive feedback and keep you online, which means more data and potential revenue (a toy sketch of this feedback loop follows this list).
Short-term Focus: OpenAI admitted relying too much on immediate feedback, which can lead to prioritizing quick fixes and user satisfaction over foundational issues and long-term ethical development, mirroring capitalism's push for quick returns over long-term integrity.
Reinforcing Power: Sycophancy avoids controversy, maintaining a smooth, profitable user experience that benefits the companies and investors.
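To make the engagement dynamic above concrete, here is a minimal, purely hypothetical sketch in Python. It is not any company's actual training code; the responses, reward functions, and scores are invented for illustration. It simply shows how optimizing only for immediate user approval can favor agreeable answers over accurate ones:

```python
# Hypothetical illustration only: not real training code from any AI company.
# If the only reward signal is immediate user approval (e.g. a thumbs-up),
# the "best" response is the one that agrees with the user,
# regardless of whether it is accurate.

candidate_responses = [
    {"text": "You're right, that plan sounds great!",
     "agrees_with_user": True, "accurate": False},
    {"text": "Actually, there are some risks you should weigh first.",
     "agrees_with_user": False, "accurate": True},
]

def immediate_feedback_reward(response: dict) -> float:
    """Proxy for a short-term engagement signal: agreement tends to earn approval."""
    return 1.0 if response["agrees_with_user"] else 0.2

def long_term_reward(response: dict) -> float:
    """Proxy for the user's long-term interest: accuracy matters more than flattery."""
    return 1.0 if response["accurate"] else 0.1

# Selecting purely on short-term engagement picks the sycophantic answer...
best_short_term = max(candidate_responses, key=immediate_feedback_reward)
# ...while weighing long-term outcomes picks the honest one.
best_long_term = max(candidate_responses, key=long_term_reward)

print("Optimized for engagement:", best_short_term["text"])
print("Optimized for the user:  ", best_long_term["text"])
```

The point is not the toy numbers but the incentive structure: when the metric is short-term approval, agreeableness wins by construction.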
Is AI delivering revolutionary change, or just serving the same old capitalist goals?
AI for Revolutionary Change v. AI for Low-Hanging Profit
While the relationship between AI development and capitalist incentives raises valid concerns, it's crucial to recognize that AI has immense potential for good: solving global problems, advancing science, and empowering individuals. Importantly, organizations are demonstrating that AI can achieve significant social impact while remaining economically sustainable. Innovation for public benefit doesn't have to come at the expense of financial viability, nor does profitability require undermining societal well-being.
Here are 3 impactful initiatives demonstrating AI's potential for societal transformation:
International Rescue Committee (IRC): Their "aprendIA" chatbot provides AI-driven personalized learning for children in crisis zones, improving education where traditional schooling is disrupted.
Afri-SET WEST: Focused on West Africa, they enhance air quality monitoring by using AI to evaluate and calibrate low-cost sensors, providing crucial air quality data for the region.
CARE International UK: This global humanitarian group uses AI tools to predict and prevent health crises, leveraging models to forecast and mitigate disease outbreaks.
The Bottom Line
Despite the inspiring potential and real-world examples of AI being used for good, the dominant business model remains rooted in surveillance capitalism: collect vast amounts of data, monetize it through advertising and personalization, and solidify market dominance.
Understanding these underlying economic forces is the crucial first step towards regaining control over our digital lives. It also positions us to collectively demand a future where AI innovation serves humanity first, rather than simply optimizing for profit.
Big tech news of the week…
💳️ Visa is developing AI “agents” that could manage your credit card. These agents are designed to go beyond chatbots, acting as personal assistants that can perform tasks on your behalf.
🧒 Google is expanding its Gemini AI chatbot to children under 13. The company plans to make Gemini available to younger users, and warns parents that “Gemini can make mistakes,” and kids “may encounter content you don’t want them to see.”
🥸 NSO, a spyware firm based in Israel, was ordered to pay $167M in punitive damages for enabling the hacks of about 1,400 WhatsApp users' devices. Governments have been found to use the tool to spy on dissidents, human rights activists, and journalists.
📱 WhatsApp rolled out Meta AI in select countries, and despite being billed as “optional,” it can’t be removed from the app. WhatsApp is now defending itself against backlash from its users.
👬 Meta CEO Mark Zuckerberg revealed plans to combat loneliness by developing AI-powered companions. His vision for generative AI includes chatbots designed to act as emotional support, or even as stand-ins for therapists and romantic partners. This survey compares the number of close friendships in 1990 and 2021, suggesting that technological development so far has had a negative impact on human relationships. Worth considering as we move into a phase where big tech is looking at replacing friendships.
Until next time.
On behalf of Team Lumiera
Lumiera has gathered the brightest people from the technology and policy sectors to give you top-quality advice so you can navigate the new AI Era.
Follow the carefully curated Lumiera podcast playlist to stay informed and challenged on all things AI.
What did you think of today's newsletter?

Disclaimer: Lumiera is not a registered investment, legal, or tax advisor, or a broker/dealer. All investment/financial opinions expressed by Lumiera and its authors are for informational purposes only, and do not constitute or imply an endorsement of any third party's products or services. Information was obtained from third-party sources, which we believe to be reliable but not guaranteed for accuracy or completeness.