Karen Hao on the Empire of AI, AGI evangelists, and the cost of belief | TechCrunch

At the heart of every empire is an ideology, a belief system that propels the system forward and justifies expansion, even when the cost of that expansion directly defies the ideology's stated mission.

For European colonial powers, it was Christianity and the promise of saving souls while extracting resources. For today's AI empire, it's artificial general intelligence to "benefit all humanity." And OpenAI is its chief evangelist, spreading zeal across the industry in a way that has reframed how AI is built.

“I was interviewing people whose voices were shaking from the fervor of their beliefs in AGI,” Karen Hao, journalist and bestselling author of “Empire of AI,” told TechCrunch on a recent episode of Equity.

In her book, Hao likens the AI industry generally, and OpenAI in particular, to an empire.

“The only way to really understand the scope and scale of OpenAI’s behavior…is actually to recognize that they’ve already grown more powerful than pretty much any nation state in the world, and they’ve consolidated an extraordinary amount of not just economic power, but also political power,” Hao said. “They’re terraforming the Earth. They’re rewiring our geopolitics, all of our lives. And so you can only describe it as an empire.”

OpenAI has described AGI as “a highly autonomous system that outperforms humans at most economically valuable work,” one that will somehow “elevate humanity by increasing abundance, turbocharging the economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.”

These nebulous promises have fueled the industry’s exponential growth: its massive resource demands, oceans of scraped data, strained energy grids, and willingness to release untested systems into the world. All in service of a future that many experts say may never arrive.

Hao says this path wasn’t inevitable, and that scaling isn’t the only way to get more advances in AI.

“You can also develop new techniques in algorithms,” she said. “You can improve the existing algorithms to reduce the amount of data and compute that they need to use.”

But that approach would have meant sacrificing speed.

“When you define the quest to build beneficial AGI as one where the victor takes all — which is what OpenAI did — then the most important thing is speed over anything else,” Hao said. “Speed over efficiency, speed over safety, speed over exploratory research.”

Image Credits: Kim Jae-Hwan/SOPA Images/LightRocket / Getty Images

For OpenAI, she said, the best way to guarantee speed was to take existing techniques and “just do the intellectually cheap thing, which is to pump more data, more supercomputers, into those existing techniques.”

OpenAI set the stage, and rather than fall behind, other tech companies decided to fall in line.

“And because the AI industry has successfully captured most of the top AI researchers in the world, and those researchers no longer exist in academia, then you have an entire discipline now being shaped by the agenda of these companies, rather than by real scientific exploration,” Hao said.

The spend has been, and will be, astronomical. Last week, OpenAI said it expects to burn through $115 billion in cash by 2029. Meta said in July that it would spend up to $72 billion on building AI infrastructure this year. Google expects to hit up to $85 billion in capital expenditures for 2025, most of which will go toward expanding AI and cloud infrastructure.

Meanwhile, the goal posts keep shifting, and the loftiest “benefits to humanity” have yet to materialize, even as the harms mount. Harms like job loss, concentration of wealth, and AI chatbots that fuel delusions and psychosis. In her book, Hao also documents workers in developing countries like Kenya and Venezuela who were exposed to disturbing content, including child sexual abuse material, and were paid very low wages, around $1 to $2 an hour, in roles like content moderation and data labeling.

Hao said it’s a false tradeoff to pit AI progress against current harms, especially when other forms of AI offer real benefits.

She pointed to Google DeepMind’s Nobel Prize-winning AlphaFold, which is trained on amino acid sequence data and complex protein folding structures, and can now accurately predict the 3D structure of proteins from their amino acids, a capability that is profoundly useful for drug discovery and understanding disease.

“Those are the types of AI systems that we need,” Hao said. “AlphaFold does not create mental health crises in people. AlphaFold does not lead to colossal environmental harms … because it’s trained on substantially less infrastructure. It does not create content moderation harms because [the datasets don’t have] all of the toxic crap that you hoovered up when you were scraping the internet.”

Alongside the quasi-religious commitment to AGI has been a narrative about the importance of beating China in the AI race, so that Silicon Valley can have a liberalizing effect on the world.

“Literally, the opposite has happened,” Hao said. “The gap has continued to close between the U.S. and China, and Silicon Valley has had an illiberalizing effect on the world … and the only actor that has come out of it unscathed, you could argue, is Silicon Valley itself.”

Of course, many will argue that OpenAI and other AI companies have benefited humanity by releasing ChatGPT and other large language models, which promise huge gains in productivity by automating knowledge work like coding, writing, research, and customer support.

But the way OpenAI is structured, part nonprofit and part for-profit, complicates how it defines and measures its impact on humanity. And that’s further complicated by the news this week that OpenAI reached an agreement with Microsoft that brings it closer to eventually going public.

Two former OpenAI safety researchers told TechCrunch that they fear the AI lab has begun to conflate its for-profit and nonprofit missions: that because people enjoy using ChatGPT and other products built on LLMs, this ticks the box of benefiting humanity.

Hao echoed these concerns, describing the dangers of being so consumed by the mission that reality is ignored.

“Even as the evidence accumulates that what they’re building is actually harming significant amounts of people, the mission continues to paper all of that over,” Hao said. “There’s something really dangerous and dark about that, of [being] so wrapped up in a belief system you constructed that you lose touch with reality.”
