Few editors would call someone “useless” in a headline. Fewer still would do it in an obituary. Yet when a former NBA player collapsed and died this week, there it was, emblazoned on the MSN website: “Brandon Hunter useless at 42.”
Those who read further quickly realized that something was deeply wrong with the article; the anonymous editor who wrote it seemed almost, but not quite, totally unfamiliar with the way the English language works. “Hunter’s expertise led to his choice because the 56th general decide within the 2003 NBA Draft,” the obituary explained. “Throughout his NBA profession, he performed in 67 video games over two seasons and achieved a career-high of 17 factors in a recreation in opposition to the Milwaukee Bucks in 2004.”
Astute readers realized that the editor was likely a machine. “AI should not be writing obituaries,” wrote one outraged sports fan on X/Twitter. “Pay your damn writers @MSN.” Though the first reporters on the scene speculated that the obituary was “seemingly AI generated,” the truth is a bit more mundane. Indeed, the crudeness of the algorithm that embarrassed MSN shows just what makes modern media outlets so vulnerable to AI misinformation.
The computer program that generated the Brandon Hunter obituary is probably a relic rather than cutting-edge AI (through a spokesperson, MSN declined to answer questions). For more than a decade, unscrupulous website designers have been using software called “article spinners” to create novel-seeming content out of stolen words. At their simplest, these programs mask plagiarism through liberal use of a thesaurus; replace enough words with synonyms and hopefully nobody will ever find the original source.
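The thesaurus trick is simple enough to sketch in a few lines. Here is a minimal toy spinner, assuming a small hand-written synonym table (real spinners use far larger word lists, which is how “dead” can become “useless”); the names and word list here are illustrative, not taken from any actual spinner:

```python
import random

# Toy synonym table standing in for a full thesaurus.
SYNONYMS = {
    "dead": ["useless", "lifeless"],
    "talent": ["expertise", "ability"],
    "game": ["recreation", "match"],
    "points": ["factors"],
    "career": ["profession"],
}

def spin(text: str, seed: int = 0) -> str:
    """Naively swap each known word for a random synonym,
    ignoring context -- which is exactly what mangles meaning."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        # Strip trailing punctuation so dictionary lookups match.
        stripped = word.rstrip(".,")
        tail = word[len(stripped):]
        choices = SYNONYMS.get(stripped.lower())
        out.append((rng.choice(choices) + tail) if choices else word)
    return " ".join(out)

print(spin("Brandon Hunter dead at 42"))
```

Because the substitution is word-by-word and blind to context, the spun text evades naive plagiarism checks while drifting into nonsense, just as the MSN headline did.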
The Brandon Hunter obit overindulged on the Roget’s, yet it’s still possible to find the original obituary, “Brandon Hunter dead at 42,” published on a small specialist website, TalkBasket.net (which, in turn, is quite similar to this story from TMZ). “Hunter’s talent led to his selection as the 56th overall pick in the 2003 NBA Draft,” the article states. “During his NBA career, he played in 67 games over two seasons and achieved a career-high of 17 points in a game against the Milwaukee Bucks in 2004.” Compare that with the MSN version, and it becomes obvious how ham-handed—and simple—the spinner algorithm was.
Though any human editor would instantly throw such an article in the digital trash, over the past week, MSN has published dozens of these plagiarized-and-synonymized articles about such diverse subjects as sports (“[Manchester United player Jadon] Sancho was affected by an harm for a interval of the five-month stretch from October when he didn’t characteristic for United.”), auto-buying advice (“Nevertheless, presuming the funds permits just one, we might begin by discounting the primary two generations, as a result of they’re now nicely stricken in years, so to discover a good one means procuring very fastidiously.”), and business (“Normal Motors’ first wage-and-benefit supply to the United Auto Staff on Thursday fell far in need of the union’s preliminary calls for.”). Until the “useless” headline sparked outrage, nobody at MSN seemed to realize that their news page was larded with gobbledygook. (All these articles and numerous others have since been removed.)
The article spinner that hit MSN is mere decades-old computer wrangling, not modern machine learning. Modern AI—such as ChatGPT—is so good at grammar and syntax, in contrast, that it can write faster and better than many human editors. And the plagiarism these AI algorithms engage in is subtler than plagiarism in the ordinary sense: they take other people’s work and synthesize their sources in ways that are typically untraceable.
Still, AI can’t have novel insights, nor can it generate new information that isn’t already fed into its electronic brain. However, it can craft an extremely convincing facsimile of news.
When I asked ChatGPT to write an obituary for Hunter, for example, the prose was grammatically clean. Sterile, even. Absent of any new information, and so full of cliches that it could never offend anyone, even by accident. “His prowess, tenacity, and charismatic personality left an indelible mark on the game and on those who had the privilege of watching him play....” the algorithm disgorged. “He established the Brandon Hunter Foundation, a charitable organization aimed at providing opportunities for underprivileged youth through sports and education.”
Spoiler alert, there ain’t no such foundation. This is a much more sophisticated fraud than the thesaurus-wielding article spinner. But at its core, the threat from AI is the same as the threat from the article spinner—a future where misinformation drowns out reality. Both generate near-infinite variations of the information they’re fed, excreting thousands upon thousands of words of novel-seeming prose that contains nothing new whatsoever. Both can satisfy the desire of any news outlet, and any advertiser, to fill our eyeballs with seemingly fresh content. Both can generate enough “news” to fill up the biggest news hole on the planet a million times over. And both are essentially free. It’s tempting for any website seeking to convert audience attention into dollars. And that’s what makes modern media sites so vulnerable.
News outlets have experimented with publishing machine-generated work since even before sophisticated machine-learning algorithms arrived. Yet none of that computer-generated news, even that created by the most cutting-edge AI, is truly new so much as a remix of information gathered by human beings—and human beings generally have the temerity to want to be paid for their work. Worse, human beings, expensive as they are, are the only way to tell the difference between true and false information.
It’s now easy—and cheap—to flood the Internet with information-free content that mimics real news. That means curation is increasingly vital to screening out nonsense. But as fakes become more sophisticated, that role becomes more difficult. All that leaves media outlets vulnerable to transmitting misinformation at viral speed. In other words, MSN faces the same dilemma that Facebook and ex-Twitter face: The moment you attempt to aggregate huge amounts of information without a good system of (human) curation capable of handling such large volume, you start becoming a vector for garbage.
Fighting what seems a losing battle, and an expensive one, news outlets may be tempted to save a few bucks by giving up entirely and choosing universal aggregation over careful curation. A few years ago, MSN began using algorithms rather than journalists to curate its homepage. But algorithms, even cutting-edge AI, won’t come to the rescue. Sure, ChatGPT is extremely sophisticated, but it can’t find fakes; it takes a good curator to detect that there is no such thing as the Brandon Hunter Foundation. One can check IRS Publication 78, look for Form 990 filings, or search state charitable registrations and corporate articles of organization—but there’s nothing there. A likely fake.
There is, however, one online reference to this foundation that might give any fact-checker pause. It comes from an obituary of Brandon Hunter on what appears to be a news site, Kanwasinews9: “His charitable sports went beyond the basketball floor. He set up the Brandon hunter foundation, a non-profit employer dedicated to improving the lives of deprived children thru sports, schooling, and training tasks,” it says. “Thru his foundation, he made a difference inside the lives of many kids by giving them the hazard to be successful and the direction they deserved to achieve this.”
Useless.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.