AGI mirage: Karen Hao's "Empire of AI"
Karen Hao's disturbing look inside the workings and raison d'etre of Sam Altman's OpenAI.
The AI industry's fixation on scaling hit the buffers with the release of GPT-5 earlier this week, which has been a massive disappointment – for them. For the rest of us, it's a reprieve.
This chastening moment arrived just as I finished reading Karen Hao's "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI". Her deep dive into the story of OpenAI provides important and useful context for this moment.
This should matter to people in finance and fintech, the users of AI. The industry has had a serious case of AI hype for the past three years, following the late-2022 release of ChatGPT, built on GPT-3.5, which hit the world like a nuke.
ChatGPT's arrival coincided with the collapse of Sam Bankman-Fried's FTX, and it rescued venture capitalists, tech gurus, and other hypesters. Crypto was out, the metaverse was fading – generative AI was the future.
AI is much more real to businesses than NFTs or Mark Zuckerberg's legless avatar bumping into walls in digitally empty warehouses. Neural networks, the technology on which large language models are built, had already been usefully adopted by enterprises over the previous decade.
The excitement over ChatGPT was real. These models could feel incredibly powerful, especially at first blush. They have uses; I use them. But the hallucinations, a polite term for fuckups, are real too, and it looks as though they are persistent.
Scaling – indeed, hyperscaling – rested on a "more is less" idea: that hallucinations were a problem of insufficient compute. More chips, more crunching, and more data would let LLMs fine-tune their correlations and pattern matching, sandpapering errors down to a layer so thin as to be translucent – barely noticeable.
There were two problems with this approach. First, the scaling required is mind-bogglingly huge. Thus we saw Google, Meta, Amazon and Microsoft building cities' worth of data centers, recommissioning nuclear power plants to provide electricity, and desperately hoovering up whatever data was left to purloin. As the carbon footprint of AI exploded, so did the costs. Stargate, Sam Altman's idea for a mega data center complex, is meant to require a $500 billion investment that only SoftBank can bankroll, through syndicated debt – although eight months after the venture was announced, no funds have been raised. The financials probably don't add up (as illustrated by Ed Zitron, a PR man turned AI gadfly), never mind the environmental toll were the project to go ahead.
The second problem with scaling is that it doesn't work. This was a matter for debate until now, but GPT-5 is not the step change that previous versions were. It is a smoother writer and a better coder than GPT-4, but incrementally so, not by an order of magnitude – despite OpenAI and Microsoft having spent an order of magnitude more on compute and data.
Even more fundamentally, GPT-5 doesn't appear to have done anything to improve hallucination rates. Which means ChatGPT and the LLM approach are no longer taking us closer to artificial general intelligence – the purported goal and raison d'être of OpenAI.
For users of AI, the AGI superintelligence question may be beside the point. What we want are simply tools to help us fire people – sorry, make us more efficient. Skynet and the Terminator are somebody else's problem. Karen Hao's book connects the dots to demonstrate why it's our problem too. Not because we're about to be wiped out by Arnold Schwarzenegger, but because the killer-AI narrative has been cynically used by the AI industry to push defective products on everyone else.
She opens with the drama of late 2023 when the board of OpenAI fired Sam Altman, only for the board to cave under pressure from employees and Microsoft, an effort coordinated by Altman and his sidekicks.
The affair came amid ChatGPT’s rocketing success and threw a light on why OpenAI existed in the first place.
“The rift was the first major sign that OpenAI was not in fact an altruistic project but rather one of ego,” Hao writes.
The altruism was supposed to lie in a drive for AGI that emphasized safety.
The industry was consumed with doomerism, and for good reason. The philosopher Nick Bostrom had written a thought experiment about an AGI programmed to produce paperclips that would go to ruthless lengths to carry out its programming. Google co-founder Larry Page had told his peers that humans deserved to be overtaken by a superior AI race, and had acquired Demis Hassabis's DeepMind to make this happen.
OpenAI was the brainchild of Altman and Elon Musk, who made it a non-profit to burnish their good-guy credentials. It would build AGI with 'alignment' principles to ensure it wouldn't want to hurt humankind. The company went on to throw a lot of spaghetti at the wall, and what stuck was an early chatbot built on transformer technology invented at Google – an excellent predictor of the next word in a text.
Claiming to build AGI, with a safety mandate, turned out to be a great way to raise money and do whatever OpenAI wanted, while blunting alternative research and more meaningful regulation. AGI itself is a woolly concept: its definition keeps changing, and the goalposts for determining whether a system is superintelligent keep shifting.
This isn’t to say AGI is impossible; some idea of it could well materialize in the next decade. But it is a boogeyman, and a great story that people lap up. Every discussion about AI at a conference raises the idea, and policymakers find it an easy win to publicly worry about – far easier than what to do about AI’s impact on labor markets, media, the environment, or truth itself.
For OpenAI it was also a good talking point to make AI seem inevitable. This is now the talking point of CEOs everywhere: AI is inevitable so if you don’t use these tools, you’ll be fired and it’ll be your fault. (Or, more accurately for big institutions, you have to use our proprietary AI copilot tools trained on our data, or you’re fired.)
For OpenAI’s narrative, Hao reports, AGI had to be not just inevitable, but something we should welcome. Welcome but without asking too many questions: transparency was never part of the deal.
Ilya Sutskever, OpenAI's chief scientist, emailed his colleagues in 2016: "The Open in openAI means that everyone should benefit from the fruits of AI after it's built, but it's totally OK to not share the science."
“Yup,” replied Musk, Hao reports.
And OpenAI's idea of safety was vague and hand-wavey. Dario Amodei, then at Google and now head of OpenAI competitor Anthropic, framed AI safety as "the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems."
The industry needs the AGI boogeyman to avoid reckoning with other kinds of safety. But back in 2016, insiders were already arguing that these systems had to be placed in the context of the things Amodei wanted to ignore, including privacy, fairness, and economics. (Amodei went on to join OpenAI, was disturbed by what he saw as Altman's break from the non-profit's premise, and left to found Anthropic; in practice, there's no difference in ethics or behavior between the two companies, based on Hao's account.)
LLMs proved easy to commercialize, and so OpenAI chased the necessary scale of compute and data. The costs of financing this, the growing revenue opportunities, and the need to compensate OpenAI developers led the leadership to build awkward workarounds to its non-profit governance structure. Altman's mild-mannered Clark Kent persona led him to promise everything to everyone – investors, Microsoft, staff, partners – which required a slipperiness that could bleed into lying. This unsettled the board, as well as Sutskever and the company's diplomatic CTO, Mira Murati. Hence the attempted coup.
But you know the saying: if you’re gonna take a shot at the king, best not miss. And they missed.
Hao details the coup and its aftermath through years of dogged reporting. She scored an early interview with Altman, arranged by OpenAI's PR machine. She was skeptical of their claims of good-guyness (we're going to solve climate change! cure cancer!), of the need to be first (what about the Chinese? Or a misanthrope like Larry Page?), and of the need for paranoid secrecy.
She relates, "I tried again to press for more details. 'What you're saying is OpenAI is making a huge gamble that you will successfully reach beneficial AGI to counteract global warming before the act of doing so might exacerbate it.'"
She became persona non grata there, and wrote “Empire of AI” without Altman or the company’s cooperation, but her reporting is extensive and impressive.
Fundamentally, Hao, a seasoned journalist, doesn’t fall for their bullshit. And she doesn’t pull her punches:
“Under the hood, generative AI models are monstrosities, built from consuming previously unfathomable amounts of data, labor, computing power, and natural resources…
“Over the years, I’ve found only one metaphor that encapsulates the nature of what these AI power players are: empires. During the long era of European colonialism, empires seized and extracted resources that were not their own and exploited the labor of the people they subjected to mine, cultivate, and refine those resources for the empire’s enrichment. They projected racist, dehumanizing ideas of their own superiority and modernity to justify – and even entice the conquered into accepting – the invasion of sovereignty, the theft, the subjugation. They justified their quest for power by the need to compete with other empires: an arms race, all bets are off.”
The empires of AI don’t steal physical land and enslave people, but Hao’s book explains how they do engage in a 21st century version of seizure, theft, and exploitation, all in the name of a very wobbly notion of progress and security.
"Empire of AI" should warn corporate boards and executive teams that AI tools need to be treated with care. The hallucinations of hyperscale models are features, not bugs, and won't go away. This renders them dangerous to use. Workarounds such as building small language models focused only on approved data help, but aren't enough: they hallucinate too.
The question isn't what Hao thinks: it's whether widescale genAI adoption within a business really leads to sustainable productivity gains. Has it? How many people are now spending their time reviewing or correcting AI mistakes? Does this make them more productive or less? Do companies need to consider the carbon impact of genAI? I realize that with Trump in the White House, "ESG" is out of favor and even sneered at, but that doesn't make climate change any less real.
What Altman and his peers have done is create a virtual reality around the dreams and fears of AGI to bend reality to their will. It’s similar to how Trump has built a TV distortion field around his seizure of power, or how the crypto world has created a new virtual reality around money. The world has no choice but to respond. We are being bent around these virtual gravity machines.
But there's also a physical world, a material world, a real world. Sometimes this world is warped by these virtual distortion fields, but it can also smash them. Trump's tariff war against China is one such case. Another is the mounting cost of the AI industry's enthrallment to hyperscaling. Those costs are going to include technical debt for companies that have drunk the genAI Kool-Aid too deeply; they may spend years dealing with surprises in their codebases.
This does not mean AI is a fraud. It is not. GenAI results can look like BS to an expert in the field while seeming plausible to generalists, and they can often be helpful if taken with a pinch of salt. The field of AI and machine learning is much bigger than genAI, and there are forms of AI that don't rely on the correlational pattern matching of neural networks but can be causal and accurate: these don't scale commercially, but they exist.
Hao describes the way the OpenAI virtual-reality machine has driven the entire industry to discard such alternatives and pour unsustainable amounts of financing into the delusion of hyperscaling LLMs. With GPT-5, we have now hit the buffers.
The good news is that perhaps this will lead the industry to rediscover other paths of exploration. LLMs are with us now and are part of the story, but it will only be in combination with other types of tech that we get to something akin to AGI. (Gary Marcus, a longtime skeptic of LLMs, calls this synthesis of AI schools ‘neurosymbolic AI’, a term I expect we’ll hear more of now.)
But maybe not, or at least not right away. Too much has been bet on LLMs. The entire industry – the entire US stock market – is leveraged on LLMs. Too many boards and C-suites are signing contracts with Microsoft and Google to reshape their businesses with genAI.
Hao's story ends in early 2025, just as DeepSeek burst onto the scene – too late, sadly, for her to include it in the book. DeepSeek didn't pioneer new models; it took the existing corpus, via open-source channels, and designed its own LLMs of comparable power at a fraction of the cost.
This should be hailed by everyone as a victory, because it means less need for chips and compute, a lighter burden on the environment, and broader access. For a moment Wall Street wavered, Nvidia stock took a hit, there was panic. But the reality-distortion machine reasserted itself, and the Nasdaq and S&P 500 are even higher today.
Now we have the reality that GPT-5 is not a step improvement, and that hyperscaling hasn't solved its problems. But no one in America wants this story. It's too problematic for everyone who's bought into the genAI cult.
Hao recalls the remark of Paul Graham, co-founder of Y Combinator and a mentor to Altman, that "Sam is extremely good at becoming powerful."
Altman has built his empire. He has concentrated enormous resources under an opaque corporate vehicle that has bent the agendas of the world’s business community, from huge multinationals to fintech startups, all in the name of shepherding a safe, pro-human machine intelligence. He would pump up the team in times of crisis by chanting, “Feel the AGI!”
A few days after the August 7 GPT-5 launch, Altman told CNBC that AGI is no longer his frame of reference. "I think it's not a super-helpful term," he said.
“Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI”, by Karen Hao, Penguin Press, New York: 2025.