Human Extinction From AI Is Possible, Developers Warn

AI leaders across the board keep pointing to the existential threat of the technology they're creating.

May 31, 2023 - 02:30

Dr. J. Robert Oppenheimer, the scientific leader of the Manhattan Project to develop the atomic bomb, once famously described witnessing its first successful test.

He said that it made him recall a line from Hindu scripture: "Now, I am become Death, the destroyer of worlds." He added, "I suppose we all thought that, one way or another."

Now, leading artificial intelligence experts and developers seem to be uttering a similar sentiment. 

The Center for AI Safety released a single-sentence statement May 30, signed by hundreds of signatories, from AI researchers to industry leaders including OpenAI CEO Sam Altman, likening the risks of AI to those of nuclear war. 

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement reads. 

To some, highlighting this paramount risk is well-intentioned; to other experts, the statement serves as little more than a shield behind which AI developers can crouch while ignoring the real, though far less dramatic, risks posed by current AI models. 

What Does This Extinction Threat Look Like? 

No one has a clear picture of what these existential threats posed by AI might look like. They are not posed by current AI models like ChatGPT, but by a hypothetical technology referred to as artificial general intelligence, or AGI: a proposed AI system that would be smarter than humans.

The risk, which has been repeated ad nauseam, is a potential future one. ChatGPT, in its current form, poses a whole host of dramatic, though not necessarily terminal, threats, such as misinformation, online fraud, and mass propaganda.

If AGI were to be achieved, the biggest concern is how to control something that is smarter than people. 

"Few things are harder to predict than the ways in which someone much smarter than you might outsmart you," computer scientist Paul Graham tweeted. "That's the reason I worry about AI. Not just the problem itself, but the meta-problem that the ways the problem might play out are inherently hard to predict."

It is the fear of this lack of control that lies behind such catastrophic fears of AI; the argument is that an out-of-control AI model could make a bunch of science fiction very real, in ways no one yet understands. 

For many experts, it simply comes down to preparation for systems that do not yet exist, but might soon. 

"It is a technology that is possibly more consequential than any weapon that we've ever developed in human history," AI expert Professor John Licato said. "So we absolutely have to prepare for this. That's one of the few things that I can state unequivocally."

There's Reality, and Then There's Hype

And while many experts seem to agree that diving into this with eyes wide open to the risks is not a bad thing, others have dismissed the statement. 

"I think it conveniently provides cover to a range of signatories who want to keep developing, deploying, and profiting from AI in harmful/risky ways because they can say they are doing so in efforts to steer AI away from the mirage of existential risk," machine-learning researcher Dr. Noah Giansiracusa Tweeted

"The framing of 'let's mitigate risks,' sets AI up as an inevitable thing we have to learn to deal with. It's not," he added. "If you actually thought it could wipe out humanity (I don't, I think that fear is baseless) then don't f*cking develop it!"

For other experts, like Dr. Serafim Batzoglou, such statements amount to little more than "fear-mongering" that could give current AI leaders an enormous regulatory moat.

"AGI fear-mongering is overhyped, toxic, likely to lead to regulatory capture by incumbents, and can slow down or hinder the positive applications of AI across society including biological science and medicine," he Tweeted

Some, though, including Google DeepMind co-founder and CEO Demis Hassabis, maintain that the technology is an important one that should be approached with caution. 

"As with any transformative technology, we should apply the precautionary principle, and build & deploy it with exceptional care," he tweeted.

To Regulate, Or Not To Regulate

The statement comes as the latest step in the conversation on AI regulation. OpenAI CEO Sam Altman appeared before a congressional hearing on AI oversight just a few weeks ago, saying then that "regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models."

OpenAI has since released a proposal on how to regulate the industry, something Microsoft later expanded on. 

Meanwhile, the EU is working on an AI Act, a piece of proposed legislation that had Altman threatening to cease operations in Europe if it became law as-is. He later walked that threat back, given that it didn't quite jibe with his repeated statements stressing the importance of regulation. 

"Mitigating AI risk should absolutely be a top priority, but literal extinction is just one risk, not yet well-understood; many other risks threaten both our safety and our democracy," AI expert Professor Gary Marcus tweeted

"We need a balanced approach, confronting a broad portfolio of risks both short-term and long."
