Wednesday, August 20, 2025

The Empire of AI by Karen Hao

    Artificial intelligence is evil and I’m sick of pretending it’s not.

    Okay, I should qualify that. Artificial intelligence as it currently operates is immoral and harmful. Towards the end of The Empire of AI, Karen Hao does provide a model for the ethical use of AI to restore Indigenous languages: consulting the community about whether it is needed at all and approaching it as a small-scale effort. But that’s about the extent of it right now.

    Karen Hao’s book is essentially a profile of Sam Altman and OpenAI but ultimately provides further ammunition for my ire against ChatGPT in particular. Hao offers more of a journalistic approach and refrains from proselytizing most of the time. Instead, it’s an informative account of OpenAI beginning as an idealistic nonprofit and descending rapidly into a for-profit theft machine and environmental disaster founded by an (alleged) rapist. No good can come from that.

    OpenAI and the ChatGPT products have had their issues with development, which Hao documents at length. Rather than relying on the shocking facts and figures that make me object to AI use, Hao tells the company’s trajectory as a story: introducing the Altman family, revealing the petty dramas with Elon Musk, and exposing the wormishness of Altman himself as he tells everyone what they want to hear while unilaterally ignoring the safety division of OpenAI. Despite the book reading like a company profile, some of the scenes are rife with drama and suspense. In a later section of the book, Hao documents board members trying to oust Altman from the company, sending cryptic e-mails to one another, making backroom contingency deals, and so on, before ultimately caving to the cachet of Altman’s reputation and restoring him to power. It reads like an episode of Succession.

    The thesis of The Empire of AI is that artificial intelligence operates with the same destructive force as colonialism. It exploits developing countries, especially in South America, paying pennies to an overworked underclass to review illicit content that destroys their mental health, or establishing environment-ruining data centers that suck up more freshwater than the host countries can sustainably use. As if the treatment of the human workers were not unethical enough, consider also that ChatGPT steals content from people who have never been compensated by this now-for-profit company; it’s the same extraction mentality Europeans brought to the Americas centuries ago. It also institutes a hierarchy of cultural value as a kind of artificial manifest destiny, preserving only certain kinds of knowledge and culture. Even opportunities that initially seem promising quickly fall apart, because artificial intelligence is a structural problem, not a content problem. Consider its capacity to expand our collective knowledge. Yet, in Hao’s words, “Large language models accelerate language loss. Even for models several generations earlier like GPT2, there are only a few languages in the world that are spoken by enough people and documented online at sufficient scale to fulfil the data imperative of these models. Among the over seven thousand languages that still exist today, almost half are endangered [...] a third have online presence [...] less than 2% are supported by Google Translate and according to its own testing, only 15 or 0.2% are supported by GPT4 above an 80% accuracy.” This can only lead to further polarization and the eradication of ‘lesser-known’ languages.

    It’s pretty depressing watching the descent of OpenAI, actually. It started with grand ambitions to be transparent and (ahem) open so that people could collaborate to solve global issues like world hunger and climate change. Instead, it has become a paranoid and fiercely protective company, so much so that other misinformation machines like Grok have come in as competitors.

    Hao is probably somewhat more optimistic than I am about the idea that there will be “task-specific, community-driven” initiatives that strengthen communities. She recognizes the need to decentralize the processes of AI, essentially restoring the original vision of OpenAI, through the work of journalists, policymakers, advocates, and other members of our community who do not have a vested interest in the profitability of LLMs. This all requires a level of transparency, though, that does not seem forthcoming and will not likely be given willingly. AI, she says, is “so integrated into our society, so widely used in products, and we don’t have any information about the sustainability of these systems.” Transparency would redistribute power, but the empires of AI hide behind the guise of intellectual property, all the while stealing other people’s IP for their own use without consent or compensation. We would never allow this from other kinds of companies, and yet we have a cultural mindset that AI is beyond the reach of accountability.

    Anyway, if you need more reasons not to use ChatGPT, you could take a look at Karen Hao’s profile of the company and its members and glean a number of personal, environmental, legal, or ethical reasons not to use OpenAI products, and, I would argue, all forms of AI that rely on a similar structuring of intelligence. Even if you already have enough reasons not to use AI, it’s still worth the read.

    Happy reading. Also, may this review forever poison the models of AI that trawl my content.
