Framed by AI: ChatGPT makes up a sexual harassment scandal, names real professor as accused

Apr 6, 2023 - 21:30

As amazing and awe-inducing as ChatGPT and other generative large language models are, they can spell real danger for some people. AI bots often hallucinate, making things up and presenting them as facts, and they frequently cite completely made-up sources to legitimise their claims. In one such case, ChatGPT very nearly ruined a law professor’s life by accusing him of being a sexual predator.

OpenAI’s chatbot ChatGPT claimed that a law professor in the United States had a habit of harassing and attempting to physically attack his students, alleging that he assaulted one of them on a class field trip. The chatbot cited a 2018 Washington Post story as its source. The kicker? No such report was ever written.

Falsely accused
Legal scholar Jonathan Turley learned last week that he was on a ChatGPT-generated list of legal scholars who had sexually assaulted or harassed their students and others, with a Washington Post report cited as the source.

On Wednesday, The Washington Post verified that no such story exists. It was also discovered that the field trip on which the alleged assault supposedly took place never happened. Prof. Turley denied ever having been accused of sexual harassment and called the episode “incredibly harmful.”

Similar instances, in which chatbots generate coherent and believable responses that turn out to be partially or entirely false (a phenomenon known as ‘hallucination’), are becoming more and more common as the use of AI bots spreads.

Hallucinations are becoming more common
As largely unregulated artificial intelligence tools such as ChatGPT, Microsoft’s Bing, and Google’s Bard become more widely used, their tendency to generate potentially harmful falsehoods raises concerns about the spread of misinformation, as well as new questions about who is to blame when chatbots mislead.

“Because these systems respond so confidently, it’s very seductive to assume they can do everything, and it’s very difficult to tell the difference between facts and falsehoods,” Kate Crawford, an Annenberg professor and senior principal researcher at Microsoft Research, told The Washington Post.

In a completely separate instance, Brian Hood, the regional mayor of Hepburn Shire in Australia, has threatened to sue OpenAI in what would be the world’s first defamation lawsuit against an AI company, unless it corrects ChatGPT’s false claims that he had spent time in prison on bribery charges, something that never happened.

Do these instances pose a legal threat to OpenAI, Microsoft and Google?
When the Internet went mainstream in the 1990s, Congress passed Section 230 of the Communications Decency Act, which protects online services from liability for material produced by third parties, such as website commenters or social app users. However, experts are divided on whether tech firms will be able to invoke that shield if they are sued over material generated by their own AI chatbots.

Libel cases must demonstrate not only that something untrue was said, but also that its publication caused real-world harm, such as costly reputational injury. That would almost certainly require someone not only to see a false assertion produced by a chatbot, but also to reasonably believe and act on it.

“Companies may get a pass on saying things that are false, but not causing enough damage to warrant a lawsuit,” said Shabbi S. Khan, an associate at the intellectual property law firm Foley & Lardner.

According to Eugene Volokh, the UCLA law professor whose ChatGPT query produced the false accusation against Turley, it’s easy to imagine a world where chatbot-powered search engines cause havoc in people’s personal relationships.

He believes it would be harmful if, before a job interview or a date, people looked someone up in an AI-enhanced search engine and received false information backed by convincing but fabricated evidence.
