AI more dangerous than nuclear war, will cause extinction, warns ChatGPT maker, top researchers

May 31, 2023 - 21:30
A group of leading AI researchers, engineers, and CEOs has warned of the existential threat AI poses to humanity. In a statement of just 22 words, they call for global efforts to mitigate the risks of AI to be prioritized alongside other major societal risks such as pandemics and nuclear war.

It reads:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The statement, released by the Center for AI Safety, a non-profit based in San Francisco, has garnered support from influential figures including CEOs of Google DeepMind and OpenAI, Demis Hassabis and Sam Altman respectively, as well as Geoffrey Hinton and Yoshua Bengio, two of the three recipients of the 2018 Turing Award.

Notably, the third recipient, Yann LeCun, the chief AI scientist at Meta, the parent company of Facebook, has yet to sign the statement.

This declaration represents a notable contribution to the ongoing and contentious debate surrounding AI safety. Earlier this year, a group of signatories, including some of those who have now supported the concise warning, signed an open letter advocating for a six-month “pause” in AI development.

However, this letter received criticism from various perspectives. While some experts believed it exaggerated the risks posed by AI, others agreed with the concerns but disagreed with the suggested solution.

Dan Hendrycks, the executive director of the Center for AI Safety, explained that the brevity of the recent statement, which did not propose specific measures to mitigate the AI threat, aimed to avoid such disagreements.

He emphasized that the intention was not to present an extensive list of thirty potential interventions, as that would dilute the message.

Hendrycks characterized the statement as a collective acknowledgement from industry insiders who worry about the risks associated with AI. He highlighted the misconception that only a few individuals express concerns about these matters within the AI community, while in reality, many privately share apprehensions.

Although the broad outlines of this debate are familiar, the details can run long and often turn on hypothetical scenarios in which AI systems rapidly gain capabilities and can no longer be operated safely.

Supporters of this viewpoint often cite the rapid progress observed in large language models as evidence of future gains in intelligence. They argue that once AI systems reach a certain level of sophistication, it might become impossible to control their actions.

However, there are those who doubt these predictions. They point to the inability of AI systems to handle even relatively mundane tasks such as driving a car: despite years of dedicated research and substantial investment, fully autonomous vehicles remain far from reality. If the technology struggles with this specific challenge, sceptics ask, how can it be expected to match every other human accomplishment in the years to come?

In the present day, both proponents and sceptics of AI risk acknowledge that AI systems pose several threats, independent of further advancements. These threats include enabling mass surveillance, powering flawed “predictive policing” algorithms, and facilitating the spread of misinformation and disinformation.

