All empires throughout history have been founded on an ideology: an inspiring and justifying principle on which to expand. For European colonial powers, it was Christianity and the promise of salvation, even as they extracted resources and transformed the societies they conquered. For the modern AI empire, the ideology is artificial general intelligence (AGI), envisioned as a system that will “benefit all humanity.”
At the center of this empire stands OpenAI, the research lab turned global AI powerhouse. Its mission-driven rhetoric, unprecedented access to resources, and influence over global discourse have positioned it as both pioneer and gatekeeper. Yet as journalist Karen Hao, author of Empire of AI, argues, the story of OpenAI is also the story of concentrated power, compromised ideals, and a race that prioritizes speed over safety.
In this article, we explore the insights from Empire of AI, analyzing how OpenAI built its influence, the risks of its ideology, and the broader implications for the future of artificial intelligence.
Who is Karen Hao?
Karen Hao is a renowned technology journalist and author, known for her incisive reporting on artificial intelligence, ethics, and the global tech industry. Formerly a senior AI reporter for MIT Technology Review, she has covered the rise of AI companies with a critical yet balanced perspective.
In Empire of AI, Hao positions OpenAI not just as a research lab but as the cornerstone of a new kind of empire—one that reshapes economics, politics, and human life through technology. She warns that OpenAI’s mission to develop AGI for the “benefit of humanity” has, in practice, become a race to dominate the future of intelligence itself.
Her interviews capture the zeal of AI believers. As she recounts:
“I was interviewing people whose voices were shaking from the fervor of their beliefs in AGI.”
This quasi-religious devotion, Hao suggests, is what allows OpenAI to expand despite mounting evidence of harm.
The Ideology of the AI Empire
At the heart of OpenAI’s empire lies a bold promise: AGI will elevate humanity. The company defines AGI as a “highly autonomous system that outperforms humans at most economically valuable work.” Its anticipated benefits include increasing abundance, boosting economies, and enabling groundbreaking scientific discoveries.
But Hao emphasizes that these promises are nebulous. The goalposts for AGI are constantly shifting, and the promised utopia has yet to materialize. What has emerged instead are staggering costs, risks, and trade-offs:
- Resource Demands: The exponential growth of AI requires massive compute power, oceans of scraped data, and colossal energy consumption.
- Unchecked Deployment: Models are often released into the world without sufficient safety testing.
- Consolidation of Power: AI research is no longer led by academia but by a handful of corporations shaping the discipline to fit their agendas.
According to Hao, OpenAI and its peers are not simply developing technology—they are “terraforming the Earth” and rewiring geopolitics. That, she argues, makes them more powerful than many nation-states.
Karen Hao on OpenAI’s Growth Strategy
One of Hao’s key arguments is that OpenAI defined the race toward AGI as a winner-takes-all competition. In such a framing, speed became the ultimate priority.
“Speed over efficiency, speed over safety, speed over exploratory research.”
Instead of pursuing new algorithms that could reduce dependence on data and compute, OpenAI chose what Hao calls the “intellectually cheap” path: scaling existing techniques by feeding them more data and supercomputing power.
This approach set a precedent. Competitors like Google, Meta, and Anthropic quickly followed suit, investing billions in infrastructure to avoid falling behind. The AI discipline itself is now shaped less by scientific exploration and more by corporate priorities.
The Cost of Building an AI Empire
The numbers speak for themselves. OpenAI projects that it could burn through $115 billion by 2029. Meta is on track to spend $72 billion in 2025 alone, while Google anticipates up to $85 billion in AI-related capital expenditures the same year.
This astronomical spending underscores a deeper reality: the AI empire is fueled not by necessity but by competition. Yet the benefits for humanity—greater equity, prosperity, or well-being—remain elusive.
Instead, the harms are already visible:
- Job Loss: Automation threatens sectors from customer service to software engineering.
- Wealth Concentration: Profits and control are increasingly concentrated in a handful of U.S.-based tech companies.
- Mental Health Risks: AI chatbots have been linked to delusions, psychosis, and disinformation.
- Exploitation of Workers: In countries like Kenya and Venezuela, content moderators and data labelers earn as little as $1–$2 per hour while being exposed to disturbing material, including child sexual abuse imagery.
Hao argues that justifying these harms in the name of AI progress is a false trade-off, especially when safer, more targeted applications exist.
Case Study: AlphaFold vs. Large Language Models
Hao contrasts OpenAI’s approach with Google DeepMind’s AlphaFold, which won a Nobel Prize for its contributions to biology. AlphaFold can predict the 3D structure of proteins with remarkable accuracy, unlocking breakthroughs in drug discovery and disease research.
Unlike large language models (LLMs):
- AlphaFold requires far less infrastructure.
- Its datasets are domain-specific, not scraped from the toxic corners of the internet.
- It produces tangible benefits without destabilizing social or political systems.
For Hao, AlphaFold exemplifies the kind of AI humanity truly needs—tools that solve pressing scientific challenges without amplifying social harms.
The Geopolitics of AI: U.S. vs. China
Another pillar of the AI empire narrative is the race between the U.S. and China. OpenAI and Silicon Valley often frame their mission as a way to ensure democratic values prevail in global AI development.
But Hao points out that the opposite has occurred:
- The gap between the U.S. and China has continued to narrow.
- Far from liberalizing the world, Silicon Valley has illiberalized global discourse, spreading censorship, exploitation, and surveillance.
- The primary winner has been Silicon Valley itself, which consolidated unmatched wealth and influence.
OpenAI’s Dual Structure: Non-Profit vs. For-Profit
One of the most contentious aspects of OpenAI is its hybrid structure. Founded as a nonprofit with the mission of developing safe AGI for humanity, it later introduced a for-profit arm to attract capital and scale its models.
This dual structure blurs accountability. As Hao notes, the enjoyment people get from tools like ChatGPT is often cited as proof of “benefiting humanity.” Yet this conflates consumer satisfaction with societal good.
Two former OpenAI safety researchers told TechCrunch that they fear the lab has confused its missions—leaning toward profit while sidelining safety.
Karen Hao’s Warning: The Dangers of Ideological Blindness
Perhaps Hao’s most urgent warning in Empire of AI is about ideological blindness. When a company becomes so consumed by its mission that it ignores mounting evidence of harm, it risks causing the very destruction it sought to prevent.
“Even as the evidence accumulates that what they’re building is actually harming significant amounts of people, the mission continues to paper all of that over.”
This danger is not hypothetical. History is full of empires that justified exploitation in the name of progress or salvation. For Hao, OpenAI’s AGI project fits that same pattern.
Conclusion: A Choice Between Ideology and Responsibility
Karen Hao’s Empire of AI is more than a critique—it is a call to reflection. OpenAI and its peers have convinced the world that the race to AGI is inevitable, that only scale and speed can secure the future. Yet Hao reminds us that this path was not inevitable.
Alternative forms of AI exist, ones that are safer, more efficient, and genuinely beneficial. The question is whether humanity will continue to fuel an empire that places ideology over reality, or will demand accountability, transparency, and responsibility from those defining our collective future.
In the end, the empire of AI is not just about OpenAI, Google, or Meta. It is about us: our willingness to question the stories we are told, to balance innovation with ethics, and to make sure that technology works for us, not the other way around.
FAQs
What is Karen Hao’s Empire of AI about?
The book critiques the rise of OpenAI and the global AI industry, framing them as an empire built on the ideology of artificial general intelligence. Hao explores how this empire concentrates power, prioritizes speed over safety, and creates global harms while promising nebulous benefits.
How does Karen Hao view OpenAI?
Hao sees OpenAI as both visionary and dangerous. While it has advanced AI development, it has also consolidated unprecedented economic and political power, shaping geopolitics and society in ways comparable to nation-states.
Why does Hao compare OpenAI to an empire?
Because, like historical empires, OpenAI expands based on ideology. Its commitment to AGI justifies massive spending, environmental costs, and social harms—all in the name of a mission that may never be fulfilled.
What alternative path does Hao propose?
She emphasizes that scaling isn’t the only way. Breakthroughs can also come from greater computational efficiency, new algorithms, or domain-specific AI such as AlphaFold, which delivers real benefits without the generalized harms of large-scale models.