The AI Gospel and its Original Sin
Contrasting Kate Crawford's "Atlas of AI" with Mary Meeker's latest report, "Trends – Artificial Intelligence"
Mary Meeker, the famous technology analyst known for data-laden analyses of Silicon Valley trends, has delivered a long-awaited report on artificial intelligence. “Trends – Artificial Intelligence” is pure Meeker: a massive tome driven by graph upon graph, each data point relentlessly building to her vision of AI as an unstoppable growth trend and the new engine of global power.
She describes a world where the pace of change is not only rapid, but compounding, and where the nations and companies that lead in AI will set the terms for the next era of economic and geopolitical dominance.
“AI leadership could beget geopolitical leadership – and not vice-versa,” she argues. For her this is a story of optimism. Which I find curious.
Meeker’s report coincides with my personal discovery of AI researcher, scholar and artist Kate Crawford. She published a book in 2021 titled Atlas of AI. It’s everything that a Meeker presentation is not: an exploration of the material, social, and ethical consequences of this revolution, guided by history rather than the impressive (but also numbing) drumbeat of data points. The forest for the trees.
I believe the AI future Meeker celebrates is real, but it is built atop what I think of as the ‘original sin’ of AI: its reliance upon extraction, exploitation, and inequality.
Gospel
Meeker’s report is a story of acceleration. She opens with the observation that, while the internet once set the benchmark for technological disruption, AI has left it in the dust.
“AI user and usage trending is ramping materially faster...and the machines can outpace us,” she writes, noting that ChatGPT reached 800 million users in just 17 months, a feat that took the internet decades to match. The global adoption of generative AI has been nearly simultaneous, leaping over traditional barriers and unleashing a wave of innovation that is changing how we work, learn, and communicate.
This acceleration is not limited to consumer products. The ‘Big Six’ US technology companies (Apple, Nvidia, Microsoft, Alphabet, Amazon, and Meta; Tesla has lost its place) have increased their capital expenditures by 63 percent in a single year, pouring over $200 billion into AI infrastructure and research.
Underlying this is a profound shift in what counts as strategic infrastructure. Meeker, echoing Nvidia CEO Jensen Huang, describes AI data centers as the “factories” of the 21st century: sites where intelligence, not just goods, is manufactured at scale. Countries that invest in these AI factories – building sovereign AI clouds, training domestic workforces, and developing local models – are, in her view, securing the new levers of geopolitical influence. The U.S. and China, by her count, account for 70% of global AI investment, and their competition is acute and accelerating.
The cost to train frontier models now runs to $100 million or more, and the scale of data, compute, and talent required has shifted the center of gravity from academia to industry. Startups and venture capital alone can’t finance this; even the SoftBank/OpenAI-backed Stargate project to build huge data centers relies on massive debt funding and a lot of monarchical money from the Middle East. The locus of money is changing to enable the building of ever-more powerful models. Meeker argues, correctly I believe, that this is a race to shape the very future of knowledge, productivity, and power.
Optimism
Meeker’s optimism is not just about power; it is about possibility. She sees AI as a democratizing force, one that can level up knowledge, productivity, and opportunity on a global scale. Her report is filled with examples: generative AI tutors that personalize learning for millions, AI-powered drug discovery that slashes R&D timelines, and open-source models that allow startups in Vietnam or Kenya to build on the same foundation as Silicon Valley giants.
She points to the rapid decline in the cost of AI inference – thanks to advances like Nvidia’s Blackwell GPU, which is 105,000 times more energy efficient per token than its 2014 predecessor – as evidence that powerful AI is becoming accessible to all.
This democratization, Meeker argues, is not just a moral good but an economic imperative. AI, she claims, could add $15.7 trillion to global GDP by 2030, with early adopters capturing the lion’s share of these gains. The velocity of adoption means that the benefits, and the risks, of AI are being distributed faster than ever before. In her view, the “race to the top” will reward those who move quickly, invest boldly, and embrace the uncertainty and creative destruction that AI brings.
Meeker acknowledges the risks, such as job displacement, misinformation, and the concentration of power. She insists that these are manageable through competition, innovation, and thoughtful governance. She invokes the idea of “Mutually Assured Deterrence,” arguing that the stakes are so high that even rivals will be incentivized to cooperate and set global norms.
For Meeker, the future is not just about who wins, but about how the game is played, and she is betting on the power of optimism, agility, and ambition.
Bottom Up
But what if the game itself is rigged? This is the question at the heart of Kate Crawford’s Atlas of AI, a book that asks us to look beyond the gleaming surfaces of AI products and consider the extractive, exploitative, and often invisible systems that make them possible. Crawford’s critique is not aimed at slowing progress, but at exposing its costs – costs that remind me of an ‘original sin’ that undergirds the AI revolution, just as slavery made possible the rise of European and American wealth.
While Meeker sees data centers as factories of the future, Crawford sees them as the latest iteration of an extractive economy that begins with the land. The minerals that power GPUs – lithium, cobalt, rare earths – are mined in places like the Democratic Republic of Congo and the Atacama Desert, often under brutal conditions. The energy that cools and powers AI’s factories is drawn from grids that are straining under the weight of demand, with data centers now consuming 1.5 percent of global electricity and growing at 12 percent per year. That was before LLMs went public. The environmental toll is staggering: training a single large-language model can emit as much carbon dioxide as an entire fleet of cars; a ChatGPT prompt uses 10 times the electricity of a Google search.
Crawford also attacks the idea of automation as liberation. The apparent magic of AI assistants and chatbots, she argues, is built on the backs of precarious labor: gig workers labeling data for pennies, content moderators in the Global South exposed to the worst of the internet, and warehouse workers surveilled and optimized by algorithms. Even the ‘democratization’ of AI is, in her view, a mirage. Open-source models still depend on centralized infrastructure, and the profits of AI flow overwhelmingly to a handful of firms and countries.
Also troubling is Crawford’s analysis of ‘data colonialism’. The datasets that fuel AI are scraped from the internet, often without consent, and reflect the biases and power imbalances of the societies that produce them. Facial recognition systems are trained on police mugshots, language models ingest copyrighted works and personal data, and the benefits of AI are distributed along familiar lines of privilege and exclusion. The ‘sovereign AI’ partnerships Meeker celebrates are, in Crawford’s view, echoes of colonial dependency: local clouds built on foreign chips, governed by distant standards, and extracting value from local communities without meaningful return.
Power Law
The tension between Meeker’s optimism and Crawford’s critique reflects the contradictions at the heart of the AI revolution. Both agree that the pace of change is unprecedented, that AI is becoming the new infrastructure of society, and that the stakes are nothing less than the future of global power. But where Meeker sees a virtuous cycle of innovation and inclusion, Crawford sees a vicious cycle of extraction and exclusion.
Acceleration is, for both, a double-edged sword. Meeker’s charts show adoption curves steeper than anything in history, with technologies like ChatGPT reaching hundreds of millions of users in record time. But Crawford reminds us that speed can amplify harm as well as benefit: the faster AI spreads, the faster its environmental, social, and ethical externalities are felt. The ‘race to the top’ Meeker describes can easily become a race to the bottom, as companies and countries cut corners, exploit loopholes, and externalize costs in the name of staying ahead.
This also reminds me of another book I’ve recommended, Power and Progress by Daron Acemoglu and Simon Johnson. They document the history of technology and find that some advances are indeed socially beneficial while others are devastating to workers. It comes down to whether the powers who control and finance these technologies are using them to make people more productive, or to simply eliminate pesky workers from the business. With AI, the productivity of people who embrace it is going to be phenomenal. But the carnage is going to be unprecedented.
This is not a Luddite argument for blocking AI. I too use LLMs as part of my research process. But AI can be and should be better regulated. Watermarking and provenance would be good starts for recognizing the labor of everyone whose work and data have been stolen to train LLMs. This is technically feasible, but anathema to the handful of people who control these technologies – and now the very governments we’d expect to look to for some protection and guardrails are falling under the sway of the industry’s arguments about geopolitical competition.
I also believe corporate leaders need to be more discerning about how they use AI. Everyone I interview these days is using the technology. Most are doing so to shed workers and turn the remainder into productivity superheroes. To me this is a short-term strategy that risks undermining the future of the business. A minority of thoughtful founders have told me they are not firing people but using AI to grow everyone’s capabilities. Mass firings are a choice, and there are ways to remain competitive without it. Moreover, I wonder what happens over time to companies that have stopped grooming new generations of talented people, versus those that have maintained a hiring pipeline and been even more ambitious in what they imagine their teams can do with AI. While this is not a regulatory issue, it is a symptom that most CEOs and founders are simply accepting AI ‘as is’, and not questioning these trends.
Meeker’s vision of AI leadership as the new currency of geopolitical influence makes this feel inevitable, as her data on capital expenditures, patent filings, and sovereign AI initiatives makes clear. But Crawford warns that this concentration of power, whether in the hands of tech giants or authoritarian states, risks entrenching new forms of inequality and control. The same AI systems that enable personalized learning and healthcare can also enable surveillance, manipulation, and exclusion. Just look at the streets of America today, where immigration and police units are using AI to pinpoint where and when to pick up innocent people for deportation.
This makes the techno-capitalist promise of democratization fraught. Meeker’s examples of AI-powered microloans and generative tutors are real, and they matter. But Crawford’s analysis shows how these benefits are often unevenly distributed, with the costs borne by those least able to pay. The ‘leveling up’ of knowledge and opportunity is basically at the whim of those who control capital, technology and governance.
Original Sin
Crawford recounts foundational acts of extraction, exploitation, and enclosure that make the current AI boom possible. This isn’t just historical: the same attitudes and behavior continue, reproduced in every new data center, every outsourced labeling contract, every dataset scraped without consent. The very idea of intellectual property seems to have gone out the window. It’s writers and artists who are feeling this now, but businesses have IP too, and they shouldn’t expect special favors.
Meeker’s report is ahistorical. She focuses on the present and the future: the compounding returns of investment, the velocity of adoption, the potential for AI to “solve everything else.” But as Crawford insists, we cannot build a just and sustainable AI future on a foundation of injustice and unsustainability. The environmental costs of data centers, the exploitation of invisible labor, and the enclosure of data as capital are not bugs. They’re features.
This is not a call to reject AI or to halt progress. Rather, it is a call to reckon with the full costs of the revolution we are living through. The world Meeker describes is real, and her case for optimism is strong. The point is not to say, well, bad stuff happened, so let’s go back to some imagined golden era. But if we accept her vision uncritically, we risk repeating the mistakes of past technological revolutions: externalizing costs and privatizing gains, worshipping the winners while ignoring the losers. AI’s speed and scale feel like progress. It is indeed a disruptive general technology. This is exciting. It’s also dangerous. I’m not talking about the robots deciding to enslave humans; I’m more worried about a few humans using AI to enslave the rest of us.
Reckoning
The reckoning Crawford calls for is not just about ethics or regulation; it is about power, ownership, and accountability. It is about recognizing that the benefits of AI will not be evenly distributed unless we make them so, and that the costs will not be mitigated unless we confront them directly. This means investing in renewable energy for data centers, enforcing labor standards for data workers, auditing datasets for bias and consent, and building governance structures that reflect the interests of all stakeholders, not just the most powerful.
It also means acknowledging that the ‘original sin’ of AI is not inevitable. It is the product of choices: about what to value, who to include, and how to govern.
Mary Meeker’s report is a testament to the power of optimism, ambition, and speed. Her vision of AI as the engine of a new world order is already shaping the strategies of companies and countries alike. But as Kate Crawford reminds us, the foundations of this world are built on acts of extraction and exclusion that cannot be wished away.
The task before us is not to reject Meeker’s future, but to redeem it, by confronting the original sins of the AI revolution and building a more just, sustainable, and inclusive foundation for what comes next. This will require more than charts and capital; it will require a willingness to slow down, to look back, and to ask hard questions about the world we are making. To simply let it wash over us as an inevitability is to risk losing more than we can imagine.

