Karen Hao on the rise of AI as empire and the public's role in curbing its power
The first journalist to ever profile OpenAI explains the societal impact of the company’s rise in her new book.
When journalist Karen Hao first reported on artificial intelligence, the then-nonprofit OpenAI wasn’t a household name. It was still quite idealistic in its pursuits, fervently claiming it wanted to save us all through a technology that was “developed safely and its benefits distributed evenly to the world,” Hao wrote in 2019.
Fast forward to 2025, and AI dominates the digital world — with OpenAI the tip of the spear in what the MIT Technology Review dubs the “new world order.” With this rapid rise, Hao has become one of the most incisive voices covering the technology, drawing on her background as both a reporter and a former application engineer to write for the likes of The Wall Street Journal, MIT Technology Review, and The Atlantic.
Hao’s new book, “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI,” traces the company’s transformation from a well-meaning research lab into something of stratospheric proportions. In her New York Times bestseller, she takes readers inside data centers and boardrooms, across continents, and deep into class divisions to reveal what building these systems at the greatest possible scale actually demands.
In the latest edition of Depth Perception, Hao explains why OpenAI and its fellow AI giants are the new power brokers of empire, and how her book decodes how that empire was built, whom it will serve, and how easy it is to mistake morally ambiguous ambition for benevolence. — Kelly Kimball
You anchor “Empire of AI” by comparing today's AI industry to a modern-day colonial order. Can you explain that metaphor?
I use this term, "empire of AI", to refer to the companies like OpenAI, Anthropic, [and] Google that have really taken a particular… path of AI development that is "scale at all costs." They're using these massive supercomputers that require land to build, energy to power, fresh water to cool, and they also hire a lot of workers to clean and parse through the data, to do content moderation of the models and so on. The reason why I say they're empires is because they basically follow four different features that were also true of empires of old:
First, they lay claim to resources that aren't theirs. All of that data that they're training on is not actually their data, even though they like to pretend that it's in the public domain. [Second,] they engage in labor exploitation. Workers doing data cleaning work in terrible conditions for a few dollars an hour, sometimes doing traumatizing work. [Third,] they have a monopoly on knowledge production. Most top AI researchers have shifted from academia to these companies because of million-dollar compensation packages. It's effectively the equivalent of all climate scientists working for oil and gas companies.
And the last feature is that empires always engage in this narrative that there are evil empires and good empires, and that they — the good empire — need to be an empire and do all this resource extraction, do all this labor exploitation, because they need to be strong to beat back the evil empire. Throughout OpenAI's history, the evil empire shifts. Originally it was Google. Increasingly, it has become China. But there's always some kind of enemy that justifies and provides cover for this consolidation of resources, wealth, and power.
As the “good empire,” what they're ultimately doing is civilizing the world. They're doing this to bring progress and modernity to all of humanity, and unless people allow them to just continue seizing and exploiting and plundering, they will not ultimately be able to give this gift to everyone. That is why I say that, essentially, we are seeing the recreation of a new form of empire.
When violence escapes the web’s echo chambers
Violence moving from the web into the world is not unprecedented; it was predicted. As Garrett Graff reports on Long Shadow: Breaking the Internet, the web’s inventor, Sir Timothy Berners-Lee, saw this coming back in 1997.
Chronicling innovations, revolutions, cyber attacks, and meltdowns, the latest season of Long Shadow untangles the web in a way you’ve never considered before. Across seven episodes, the podcast retraces 30 years of web history — a tangle of GIFs, blogs, apps, and hashtags — to answer the bewildering question many ask when they go online today: “How did we get here?”
The full season of Long Shadow: Breaking the Internet is out now. Listen and subscribe wherever you get your podcasts.
Your reporting takes readers to Kenya, Chile, and Colombia. Why was it important to include voices from the Global South?
U.S. tech reporting gets caught up in San Francisco, but these are global technologies. The greatest harms usually fall on communities fundamentally different from Silicon Valley. I wanted to examine how vulnerable communities grapple with this technology, because that tells us how it will impact all of us.
I went to Kenya because I met workers contracted by OpenAI to develop content moderation filters. OpenAI realized [that] if they dropped text generation tools into millions of users' hands, they needed filters to prevent hate speech and toxic content. They hired Kenyan workers to wade through reams of the worst text on the internet, creating detailed taxonomies — "this is violent content, this is sexual abuse involving children" — so filters could granularly block different content types.
Those workers became deeply traumatized. Their families unraveled, communities unraveled, and they were paid a few bucks an hour. Meanwhile, AI researchers get million-dollar compensation packages. You cannot rationally justify why AI researchers deserve millions while these workers doing the worst work get a couple bucks an hour.
The only justification is philosophical — that there are superior and inferior people, and superior people have a God-given right to rule over inferior people.
What challenges remain in ensuring legislative bodies are well-informed about AI's societal impacts? And beyond legislative bodies, who else has yet to have a seat at the table when discussing AI’s impacts?
I used to say regulation was what we needed, but in the current U.S. political environment, that's not possible. There's literally a proposal in the Republican tax bill that would ban AI regulation at the state level for the next 10 years. A decade-long moratorium on a fast-growing technology is absolutely wild. (Editor’s note: The proposal did not ultimately make it into the Big Beautiful Bill that was signed into law.)
One of the things I hope to do with this book is to highlight to people that anyone can have a seat at the table. Because AI has a supply chain, there's labor, land, energy, [and] water these companies need … and these resources and spaces are collectively owned. The data is our data. The land [sought out by AI data centers] is owned by communities.
All these supply chain parts are sites of democratic contestation. You probably [interact] with many: Maybe you're a parent whose kid's school is considering an AI policy, [or] you work for a company considering an AI policy, [or] you live near a data center or in a place that's considering one. These are places where you can reclaim ownership and assert what you want to see.
If you don't like companies taking this data, don't use their tools. If you don't like AI deployment in schools, form parent-teacher-student coalitions and collectively discuss ground rules. Have open debate and contest when it's not going the way you want.
“U.S. tech reporting gets caught up in San Francisco, but these are global technologies. The greatest harms [caused by AI] usually fall on communities fundamentally different from Silicon Valley.” —Karen Hao
Could you give a case study about physical resource extraction and where this is manifesting most aggressively?
What's not fully appreciated is the speed at which data centers and supercomputers are being built. Early drafts for OpenAI's next-generation supercomputers suggested that a single one would use the same amount of power as the entire city of New York.
After I finished the book, President Trump announced the Stargate Initiative — $500 billion over four years for what OpenAI says will be its next supercomputer. [For context,] the Apollo Program spent $300 billion of today's dollars over 13 years. So $500 billion over four years is an order of magnitude more, just for one supercomputer.
McKinsey reported that at the current pace of data center expansion, in five years we'll need to add two to six times California's annual energy consumption to the global grid. Most of that will be served by fossil fuels, because data centers run 24/7 and can't just run on renewable energy.
We're seeing coal plants' lives extended when they should be retired, and massive methane gas turbines pumping air pollutants into communities. It's a climate issue and a public health issue, and it exacerbates the freshwater crisis, because these data centers have to be cooled with fresh water; any other type of water corrodes the equipment or leads to bacterial growth.
Two-thirds of the data centers that are being built right now are explicitly going into water-scarce areas, communities that are already struggling to meet the demand for fresh drinking water. [In my book,] I highlight Uruguay's capital, Montevideo, during a historic drought. It got so bad that the Montevideo city government started putting toxic wastewater into the public drinking water supply, just to have something come out of people's taps when they turned them on. And those people who were too poor to buy bottled drinking water just had to drink that toxic water. In the middle of that, Google said, "Let's build a data center that uses fresh water in this community."
So we’re talking about dire humanitarian crises that can happen because people are not getting access to the resources they need, and data centers are prioritized instead. That is the physical reality of this technology.
You’ve spent years covering this space. Do you think it’s still possible to build AI responsibly, or has the race for dominance made that dream obsolete?
I strongly advocate for task-specific, smaller-model AI technologies. There's harm from scale, but also from perpetuating the idea that these companies are building “everything” machines that can do anything for anyone. What happens is these machines can do some things for some people, but the general public thinks they can use them for anything.
These tools can't accurately provide medical information, but people use them for medical questions and get harmful misinformation. OpenAI researchers tell me they can't test for all possible uses, so they deploy first and see how it breaks down, using the population as guinea pigs, experimenting on people including vulnerable populations and children.
But if you move to a task-specific approach, where you are deploying these technologies for a specific problem in a very well-scoped environment where there is a clear boundary for when the tool should or shouldn’t be used, not only is that more clear to the consumer in terms of how to use it most effectively … but it’s also better for the company. They can more clearly test in advance all the possible ways it could break down, so that when it’s deployed, they’re not just running experiments on the population anymore, but actually delivering a beneficial product.
Further reading from Karen Hao
“‘Terrified’ federal workers are clamming up” (The Atlantic, February 21, 2025)
“The foundations of America’s prosperity are being dismantled” (MIT Technology Review, February 21, 2025)
“Microsoft’s hypocrisy on AI” (The Atlantic, September 13, 2024)
“AI is taking water from the desert” (The Atlantic, March 1, 2024)
“Cleaning Up ChatGPT Takes Heavy Toll on Human Workers” (The Wall Street Journal, July 24, 2023)





