
AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype

Effective regulation of AI needs grounded science that investigates real harms, not glorified press releases about existential risks

[Illustration of people walking, their faces being recognized by AI. Credit: Hannah Perry]

Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. These harms, not the imagined potential to wipe out humanity, are the real threat from artificial intelligence.

Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.

Nevertheless, in May the nonprofit Center for AI Safety released a statement—co-signed by hundreds of industry leaders, including OpenAI’s CEO Sam Altman—warning of “the risk of extinction from AI,” which it asserted was akin to nuclear war and pandemics. Altman had previously alluded to such a risk in a congressional hearing, suggesting that generative AI tools could go “quite wrong.” And in July executives from AI companies met with President Joe Biden and made several toothless voluntary commitments to curtail “the most significant sources of AI risks,” hinting at existential threats over real ones. Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”

The broader public and regulatory agencies must not fall for this science-fiction maneuver. Rather, we should look to scholars and activists who practice peer review and have pushed back on AI hype in order to understand its detrimental effects here and now.

The term “AI” is ambiguous, which makes clear discussions about it difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.

With OpenAI’s release of ChatGPT (and Microsoft’s incorporation of the tool into its Bing search) in late 2022, text synthesis machines have emerged as the most prominent AI systems. Large language models such as ChatGPT extrude remarkably fluent and coherent-seeming text but have no understanding of what the text means, let alone the ability to reason. (To suggest so is to impute comprehension where there is none, something done purely on faith by AI boosters.) These systems are instead the equivalent of enormous Magic 8 Balls that we can play with by framing the prompts we send them as questions such that we can make sense of their output as answers.
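To make the “pattern matching” point concrete, consider a minimal, hypothetical sketch—far simpler than any real large language model, and not how ChatGPT itself is built: a toy bigram model that merely records which word follows which in a scrap of text and then strings words together by sampling from those counts. Its output can mimic the surface patterns of the training text while representing nothing about what the words mean.

```python
# Toy bigram "language model": it records only which word follows which in
# its training text, then generates by sampling from those counts. The output
# can look word-like and fluent while carrying no meaning at all.
import random
from collections import defaultdict

training_text = (
    "the model predicts the next word from patterns in its training data . "
    "the output can sound fluent . the output has no understanding . "
    "patterns in text are not the same as meaning ."
).split()

# Count, for every word, which words have followed it.
followers = defaultdict(list)
for word, next_word in zip(training_text, training_text[1:]):
    followers[word].append(next_word)

def generate(start: str, length: int = 12, seed: int = 0) -> str:
    """Extend `start` by repeatedly sampling a statistically plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:  # the current word never appeared with a successor
            break
        words.append(rng.choice(options))
    return " ".join(words)

# Fluent-looking but meaning-free: the program has no idea what it is "saying."
print(generate("the"))
```

Real systems replace these word counts with billions of learned parameters, but the underlying move is the same: predict plausible continuations of text, not true ones.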

Unfortunately, that output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem. Not only do we risk mistaking synthetic text for reliable information, but that noninformation also reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover, the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.

Nevertheless, the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few.

In addition to not really helping those in need, deployment of this technology actually hurts workers in several ways. First, the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created them in the first place.

Second, the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.

Finally, employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.

AI-related policy must be science-driven and built on relevant research, but too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity (the property that a test measures what it purports to measure).

Some recent remarkable examples include a 155-page preprint paper entitled “Sparks of Artificial General Intelligence: Early Experiments with GPT-4” from Microsoft Research—which purports to find “intelligence” in the output of GPT-4, one of OpenAI’s text synthesis machines—and OpenAI’s own technical reports on GPT-4—which claim, among other things, that OpenAI systems have the ability to solve new problems that are not found in their training data.

No one can test these claims, however, because OpenAI refuses to provide access to, or even a description of, those data. Meanwhile “AI doomers,” who try to focus the world’s attention on the fantasy of all-powerful machines possibly going rogue and destroying all of humanity, cite this junk rather than research on the actual harms companies are perpetrating in the real world in the name of creating AI.

We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

A version of this article with the title “Theoretical AI Harms Are a Distraction” was adapted for inclusion in the February 2024 issue of Scientific American.

Alex Hanna is director of research at the Distributed AI Research Institute. She focuses on the labor building the data underlying AI systems and how these data exacerbate existing racial, gender and class inequality.
Emily M. Bender is a professor of linguistics at the University of Washington. She specializes in computational linguistics and the societal impact of language technology.
This article was originally published with the title “Theoretical AI Harms Are a Distraction” in Scientific American Magazine Vol. 330 No. 2 (February 2024), p. 69.