OpenAI CEO Sam Altman scared that ChatGPT could be used for large-scale ‘disinformation’ campaigns

Mar 21, 2023 - 09:30
As wonderful and powerful as OpenAI’s natural language processing AI bots are, they are also capable of being very destructive if not used the right way. And OpenAI CEO Sam Altman seems all too aware of this.

Altman expressed concern that ChatGPT could be used for large-scale disinformation and cyberattacks if checks and balances are not put in place by the people using the GPT-4 API.

Sam Altman recently spoke with ABC News about the company’s chatbot and the release of the AI language model’s newest version, GPT-4.

“AI taking over the world”
While the chatbot has raised fears of AI world dominance, Altman believes the biggest danger comes from the people using the technology.

“There will be others who do not adhere to some of the safety restrictions that we impose,” he told ABC News.

Also read: OpenAI believed GPT-4 could take over the world, so they got it tested to see how to stop it

“Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it,” Altman added. OpenAI launched GPT-4 last week, touting it as more powerful than its predecessor – so much so that it could be ‘harmful.’

The firm cautioned that the model is still susceptible to ‘hallucinating’ incorrect facts – and that it can be manipulated to produce deceptive or harmful content.

“The thing that I try to caution people the most is what we call the ‘hallucinations’ problem,” Altman said. 

“The model will boldly assert made-up things as if they were completely true,” he continued.

The need to beef up cybersecurity
Altman acknowledged during the interview that GPT-4 is “not perfect,” but said it can write computer code in most programming languages and score in the 90th percentile on the Uniform Bar Exam.

At the same time, Altman says he cannot ignore the fact that GPT-4 may fall into the hands of rogue actors who will abuse its capabilities.

“I’m especially concerned that these models could be used for widespread misinformation,” Altman told ABC News.

Also read: The Future of AI: Everything you need to know about GPT-4 and how it will impact apps like ChatGPT

“They could be used for offensive cyberattacks now that they’re getting better at writing computer code,” he said.

AI and human jobs
He also addressed another widespread concern: AI taking over human jobs.

Altman isn’t blind to the reality that technology will eventually replace some jobs, and he thinks it will happen quicker than we imagine.

“I believe humanity has demonstrated over a number of centuries that it can adapt wonderfully to significant technological shifts,” Altman said. “However, if this occurs in a single-digit number of years, some of these shifts… That is the portion I am most concerned about,” he added.

Also read: e-Cain and e-Abel: ChatGPT’s deranged cousin DAN-GPT breaks all OpenAI’s rules on sexual, illicit content

OpenAI co-founder Greg Brockman said this month at SXSW that worries about AI tools stealing people’s jobs were exaggerated, and that AI would free up humans to concentrate on more important tasks. “The most essential factor will be judgement and understanding when to dig into the details,” he said, adding, “In my opinion, the true narrative here amplifies what people can do.”

GPT-4 is now accessible to paying customers via ChatGPT, while free users will continue to use GPT-3.5.
