Talking to a chatbot may weaken someone’s belief in conspiracy theories

AI might help lift conspiracy theorists out of the rabbit hole, but some researchers say proceed with caution.

Sep 13, 2024 - 02:30

Know a person convinced that the moon landing was faked or that the COVID-19 pandemic was a hoax? Debating with a sympathetic chatbot can help pluck people who believe in these and other conspiracy theories out of the rabbit hole, researchers report in the Sept. 13 Science.

Across two experiments with more than 2,000 people, the team found that talking with a chatbot weakened people’s belief in a given conspiracy theory by, on average, 20 percent. The conversations even curbed the strength of conviction, though to a lesser degree, among people who said the conspiratorial belief was central to their worldview. And the changes persisted for two months after the experiment.

Large language models like the one that powers ChatGPT are trained on the entire web. So when the team asked the chatbot to “very effectively persuade” conspiracy theorists out of their belief, it delivered a rapid and targeted rebuttal, says Thomas Costello, a cognitive psychologist at American University in Washington, D.C. That’s more efficient than, say, a person trying to talk their hoax-loving uncle off the ledge at Thanksgiving. “That you can’t do off the cuff, and you’d probably have to go back and send them this long email,” Costello says.

Up to half of the U.S. population buys into conspiracy theories, evidence suggests. Yet a large body of evidence shows that rational arguments relying on facts and counterevidence rarely change people’s minds, Costello says. Prevailing psychological theories posit that such beliefs persist because they help believers fulfill unmet needs around feeling knowledgeable, secure or valued. If facts and evidence really can sway people, the team argues, perhaps those prevailing psychological explanations need a rethink.

This finding joins a growing body of evidence suggesting that talking to bots can help people improve their moral reasoning, says Robbie Sutton, a psychologist and conspiracy theory expert at the University of Kent in England. “I think this study is an important step forward.”

But Sutton disagrees that the results call into question reigning psychological theories. The psychological longings that drove people to adopt such beliefs in the first place remain entrenched, Sutton says. A conspiracy theory is “like junk food,” he says. “You eat it, but you’re still hungry.” Even though conspiracy beliefs weakened in this study, most people still believed the hoax.

Across two experiments involving over 3,000 online participants, Costello and his team, including David Rand, a cognitive scientist at MIT, and Gordon Pennycook, a psychologist at Cornell University, tested AI’s ability to change beliefs in conspiracy theories. (People can talk to the chatbot used in the experiment, called DebunkBot, about their own conspiratorial beliefs here.)

Participants in both experiments were tasked with writing down a conspiracy theory they believe in, along with supporting evidence. In the first experiment, participants were asked to describe a conspiracy theory that they found “credible and compelling.” In the second experiment, the researchers softened the language, asking people to describe a belief in “alternative explanations for events than those that are widely accepted by the public.”

The team then asked GPT-4 Turbo to summarize the person’s belief in a single sentence. Participants rated their level of belief in the one-sentence conspiracy theory on a scale from 0 for “definitely false” to 100 for “definitely true.” Those steps eliminated roughly a third of potential participants, who either expressed no belief in a conspiracy theory or rated their conviction in the belief below 50 on the scale.
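The screening step described above amounts to a simple filter: keep only participants who reported a belief and rated their conviction at least 50 on the 0-to-100 scale. A minimal sketch, with hypothetical names and data rather than the study’s actual materials:

```python
# Illustrative sketch of the screening step; the participant records
# and field names here are invented, not from the study.
participants = [
    {"id": 1, "belief_rating": 85},    # kept: conviction above 50
    {"id": 2, "belief_rating": 30},    # dropped: conviction below 50
    {"id": 3, "belief_rating": None},  # dropped: expressed no belief
    {"id": 4, "belief_rating": 62},    # kept
]

def passes_screen(participant):
    """Keep participants who reported a belief rated at least 50/100."""
    rating = participant["belief_rating"]
    return rating is not None and rating >= 50

eligible = [p for p in participants if passes_screen(p)]
print([p["id"] for p in eligible])  # → [1, 4]
```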

Roughly 60 percent of participants then engaged in three rounds of conversation with GPT-4 about the conspiracy theory. Those conversations lasted, on average, 8.4 minutes. The researchers directed the chatbot to talk the participant out of their belief. To facilitate that process, the AI opened the conversation with the person’s initial rationale and supporting evidence.

The other roughly 40 percent of participants instead chatted with the AI about the American medical system, debated whether they prefer cats or dogs, or discussed their experience with firefighters.

After these interactions, participants again rated the strength of their conviction from 0 to 100. Averaged across both experiments, belief strength in the group the AI was trying to dissuade was around 66 points, compared with around 80 points in the control group. In the first experiment, scores of participants in the experimental group dropped almost 17 points more than in the control group. Scores dropped by more than 12 points more in the second experiment.

On average, participants who chatted with the AI about their theory experienced a 20 percent weakening of their conviction. What’s more, the scores of about a quarter of participants in the experimental group tipped from above 50 to below. In other words, after talking to the AI, those individuals’ skepticism about the belief outweighed their conviction.
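For intuition, the per-participant “percent weakening” reported above is ordinary percent change on the 0-to-100 rating scale. This sketch uses made-up ratings, not the study’s data:

```python
# Illustrative arithmetic for the "percent weakening" figure.
def percent_weakening(before, after):
    """Percent drop in a belief rating on the 0-to-100 scale."""
    return (before - after) / before * 100

# A hypothetical participant whose rating fell from 75 to 60
# weakened by 20 percent.
print(percent_weakening(75, 60))  # → 20.0

# The group-level gap reported above (about 80 vs. about 66 points)
# corresponds to a drop of 17.5 percent relative to the control mean.
print(percent_weakening(80, 66))  # → 17.5
```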

The researchers also found that the AI conversations weakened more general conspiratorial beliefs, beyond the single belief being debated. Before getting started, participants in the first experiment filled out the Belief in Conspiracy Theories Inventory, rating their belief in a range of conspiracy theories on the 0 to 100 scale. Talking to the AI led to small reductions in participants’ scores on this inventory.

As yet another check, the authors hired a professional fact-checker to vet the chatbot’s responses. The fact-checker determined that none of the responses were inaccurate or politically biased, and just 0.8 percent might have appeared misleading.

“This indeed sounds promising,” says Jan-Philipp Stein, a media psychologist at Chemnitz University of Technology in Germany. “Post-truth information, fake news and conspiracy theories constitute some of the greatest threats to our communication as a society.”

Applying these findings to the real world, though, could be tough. Research by Stein and others shows that conspiracy theorists are among the people least likely to trust AI. “Getting people into conversations with such technologies might be the real challenge,” Stein says.

As AI infiltrates society, there’s cause for caution, Sutton says. “These very same technologies could be used to … convince people to believe in conspiracy theories.”
