
How Search Engines Boost Misinformation

Data voids in search results can lead people down rabbit holes that bolster belief in fake news

[Illustration: a hypnotized man holding a smartphone]

“Do your own research” is a popular tagline among fringe groups and ideological extremists. Noted conspiracy theorist Milton William Cooper first ushered this rallying cry into the mainstream in the 1990s through his radio show, where he discussed schemes involving the assassination of President John F. Kennedy, an Illuminati cabal and alien life. Cooper died in 2001, but his legacy lives on. Radio host Alex Jones’s fans, anti-vaccine activists and disciples of QAnon’s convoluted alternate reality often implore skeptics to do their own research.

Yet more mainstream groups have also offered this advice. Digital literacy advocates and those seeking to combat online misinformation sometimes spread the idea that when you are faced with a piece of news that seems odd or out of sync with reality, the best course of action is to investigate it yourself. For instance, in 2021 the Office of the U.S. Surgeon General put out a guide recommending that those wondering about a health claim’s legitimacy should “type the claim into a search engine to see if it has been verified by a credible source.” Library and research guides often suggest that people “Google it!” or use other search engines to vet information.

Unfortunately, this time science seems to be on the conspiracy theorists’ side. Encouraging Internet users to rely on search engines to verify questionable online articles can make them more prone to believing false or misleading information, according to a study published today in Nature. The new research quantitatively demonstrates how search results, especially those prompted by queries that contain keywords from misleading articles, can easily lead people down digital rabbit holes and backfire. Guidance to Google a topic is insufficient if people aren’t considering what they search for and the factors that determine the results, the study suggests.

In five different experiments conducted between late 2019 and 2022, the researchers asked thousands of online participants to categorize timely news articles as true, false or unclear. A subset of the participants received prompting to use a search engine before categorizing the articles, whereas a control group didn’t. At the same time, six professional fact-checkers evaluated the articles to provide definitive designations. Across the different tests, the nonprofessional respondents were about 20 percent more likely to rate false or misleading information as true after they were encouraged to search online. This pattern held even for very salient, heavily reported news topics such as the COVID pandemic and even after months had elapsed between an article’s initial publication and the time of the participants’ search (when presumably more fact-checks would be available online).

For one experiment, the study authors also tracked participants’ search terms and the links provided on the first page of the results of a Google query. They found that more than a third of respondents were exposed to misinformation when they searched for more detail on misleading or false articles. And often respondents’ search terms contributed to those troubling results: Participants used the headline or URL of a misleading article in about one in 10 verification attempts. In those cases, misinformation beyond the original article showed up in results more than half the time.

For example, one of the misleading articles used in the study was entitled “U.S. faces engineered famine as COVID lockdowns and vax mandates could lead to widespread hunger, unrest this winter.” When participants included “engineered famine”—a unique term specifically used by low-quality news sources—in their fact-check searches, 63 percent of these queries prompted unreliable results. In comparison, none of the search queries that excluded the word “engineered” returned misinformation.

“I was surprised by how many people were using this kind of naive search strategy,” says the study’s lead author Kevin Aslett, an assistant professor of computational social science at the University of Central Florida. “It’s really concerning to me.”

Search engines are often people’s first and most frequent pit stops on the Internet, says study co-author Zeve Sanderson, executive director of New York University’s Center for Social Media and Politics. And it’s anecdotally well established that they play a role in manipulating public opinion and disseminating shoddy information, as exemplified by social scientist Safiya Noble’s research into how search algorithms have historically reinforced racist ideas. But while a bevy of scientific research has assessed the spread of misinformation across social media platforms, fewer quantitative assessments have focused on search engines.

The new study is novel for measuring just how much a search can shift users’ beliefs, says Melissa Zimdars, an assistant professor of communication and media at Merrimack College. “I’m really glad to see someone quantitatively show what my recent qualitative research has suggested,” says Zimdars, who co-edited the book Fake News: Understanding Media and Misinformation in the Digital Age. She adds that she’s conducted research interviews with many people who have noted that they frequently use search engines to vet information they see online and that doing so has made fringe ideas seem “more legitimate.”

“This study provides a lot of empirical evidence for what many of us have been theorizing,” says Francesca Tripodi, a sociologist and media scholar at the University of North Carolina at Chapel Hill. People often assume top results have been vetted, she says. And while tech companies such as Google have instituted efforts to rein in misinformation, things often still fall through the cracks. Problems especially arise in “data voids,” topics for which available information is sparse. Often those seeking to spread a particular message will purposefully take advantage of these data voids, coining terms likely to circumvent mainstream media sources and then repeating them across platforms until they become conspiracy buzzwords that lead to more misinformation, Tripodi says.

Google actively tries to combat this problem, a company spokesperson tells Scientific American. “At Google, we design our ranking systems to emphasize quality and not to expose people to harmful or misleading information that they are not looking for,” the Google representative says. “We also provide people tools that help them evaluate the credibility of sources.” For example, the company adds warnings on some search results when a breaking news topic is rapidly evolving and might not yet yield reliable results. The spokesperson further notes that several assessments have determined Google outcompetes other search engines when it comes to filtering out misinformation. Yet data voids pose an ongoing challenge to all search providers, they add.

That said, the new research has its own limitations. For one, the experimental setup means the study doesn’t capture people’s natural behavior when it comes to evaluating news, says Danaë Metaxa, an assistant professor of computer and information science at the University of Pennsylvania. The study, they point out, didn’t give all participants the option of deciding whether to search, and people might have behaved differently if they were given a choice. Further, even the professional fact-checkers who contributed to the study were confused by some of the articles, says Joel Breakstone, director of Stanford University’s History Education Group, where he researches and develops digital literacy curriculums focused on combating online misinformation. The fact-checkers didn’t always agree on how to categorize articles. And among stories for which more fact-checkers disagreed, searches also showed a stronger tendency to boost participants’ belief in misinformation. It’s possible that some of the study findings are simply the result of confusing information—not search results.

Yet the work still highlights a need for better digital literacy interventions, Breakstone says. Instead of just telling people to search, guidance on navigating online information should be much clearer about how to search and what to search for. Breakstone’s research has found that techniques such as lateral reading, where a person is encouraged to seek out information about a source, can reduce belief in misinformation. Steering clear of a misleading article’s own loaded terminology and diversifying search terms are important strategies, too, Tripodi adds.

“Ultimately, we need a multipronged solution to misinformation—one that is much more contextual and spans politics, culture, people and technology,” Zimdars says. People are often drawn to misinformation because of their own lived experiences that foster suspicion in systems, such as negative interactions with health care providers, she adds. Beyond strategies for individual data literacy, tech companies and their online platforms, as well as government leaders, need to take steps to address the root causes of public mistrust and to lessen the flow of faux news. There is no single fix or perfect Google strategy poised to shut down misinformation. Instead the search continues.