Artificial Intelligence Isn't Going to Kill Everyone (At Least Not Right Away)

Debate over the risks and potential of AI technology is splitting into different camps.

Jun 14, 2023 - 02:30

Since ChatGPT made artificial intelligence a part of everyday life, there has been a lot of talk about the risks, harms and opportunities posed by the technology. 

Three different camps appear to be developing around the technology. The doomers are concerned with risks. The critics are concerned with harms and feasibility. The third, less easily named group is still attempting to weigh AI's cost-benefit ratio.


The Doomers Are Worried

Not surprisingly, the group getting the most attention at the moment is the AI Doomers: the researchers, experts and executives who think AI could wipe out all of humanity.

"For the first time ever, we might be driving ourselves extinct with the technology we're building," Max Tegmark, an MIT machine learning researcher said. "It's turned out to be much easier to build smarter-than-human intelligence than we thought."

The crux of this perspective -- which gained attention with a recent apocalyptic statement by the Center for AI Safety -- revolves around control: how can humans control a system that is smarter than humans? 

OpenAI CEO Sam Altman -- one of the more prominent corporate signatories on that recent statement -- has been sounding the alarm on the technology he is working so fervently to create. 

 "A misaligned superintelligent AGI could cause grievous harm to the world," he said


This extinction fear centers squarely on future risks. It is largely unconcerned with current AI models like ChatGPT, focusing instead on the possible lethal power of future systems.

"If you keep building smarter systems, at some point you will have a system that can and will break out," AI researcher Connor Leahy said.

This hypothetical, sci-fi scenario of a superintelligent AI model destroying the human race is a prediction. Current large language models (like ChatGPT) are a long way from human-level intelligence; the difficulty is that no one knows exactly how far.

But if artificial general intelligence -- AI that is smarter than humans -- is ever achieved, the damage it could cause is difficult to predict. And yes, it is potentially catastrophic.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the Center for AI Safety's statement reads. 

The Critics Are Skeptical 

The AI critics are skeptical of AI on a couple of fronts. Some don't think AGI will ever be possible, but many believe that the promotion of extinction risks is a smokescreen, one designed to distract from current harms and to control what regulation looks like.

"Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today," the Distributed AI Research Institute (DAIR) wrote


DAIR is much more concerned with issues like "1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem and 3) the concentration of power in the hands of a few people which exacerbates social inequities."

This group, rather than concerning itself with future risks, is concerned with present harms: job loss, misinformation, copyright infringement and the like.

The Middle Needs More Data

But there is another group, one without as convenient a name as the others. This camp concerns itself with current harms, remains aware of future risks and also sees potential opportunities in the technology.

Dr. Noah Giansiracusa, a professor of data science and mathematics at Bentley University, belongs to this group. 

"I don't think there could be a really compelling argument that AI is going to wipe out everyone, or that it is not," he said. "I think we don't know. I think it's premature to really have a strong take."

To Giansiracusa, the extinction scenarios he has seen -- a superintelligent AI setting off global pandemics or nuclear warfare -- seem no more likely than autocratic leaders causing the same catastrophes.

"Anytime there's a lot of change, which I think AI is causing, revolutions happen, people get put in power, people who know how to take advantage of the chaos," he said. "AI is scary, both in hyped-up, fanciful fears, but also in very real, practical, politic fears."

Giansiracusa doesn't think that future extinction is impossible, however. He thinks it is simply "too early to try to logically reason how AI would kill everyone because the AI we have now can't and won't."


He does think, though, that "fixating on the existential risk is harmful" because it can "blind us to the benefits" of these models. 

"If I really thought that this thing could wipe out all humanity, I probably would not be so open to the benefits," he said. But if things were taken down a notch, he added, everything becomes much more balanced; knowing these models might spread disinformation, and as a trade-off, might help cure cancer, the benefit-to-risk ratio falls significantly, making it more worthwhile to explore.

Giansiracusa said that we know the cost in lives of a single nuclear bomb, or of a single-degree increase in global temperatures. 

"But with AI, how many people have died directly from AI? Now, imagine AI that's 100 times smarter," he said. "How many people is that going to kill? I have no idea. What's the extrapolation based on? It's too much of a stretch. There's plenty of other stuff we can focus on."

Reframing AI Could Help Us Understand It

In terms of dealing with this technology in a way that keeps the existential fear at bay, Giansiracusa suggested reframing it through two new perspectives.

"Think of it like social media 2.0. Instead of some new alien lifeform that we're learning how to live with, it'll just be a lot like social media but more. And yes, that's good and bad and a lot of change, but that takes some anxiety away. It makes it more familiar than 'we're building this superintelligent entity we don't know how to control.'"

Beyond that, Giansiracusa said he tries to think of AI as automation, rather than artificial intelligence. Humans have been automating forever; automation is the basis of technological evolution. 

Giansiracusa noted that, after the last century or two of technological upgrades, humans are physically healthier than ever: infant mortality rates are down and economic productivity is up.

But people are lonelier than ever. 

"I think we will discover new medicines, we'll live longer, but we probably are going to spend more time on our computers, and be a little bit more isolated and lean on chatbots rather than real friends," he said. "I don't think today is the best the world's ever been. I do think in most measures it's better than how it was. But there's something kind of unsatisfying about it. Going forward that'll be AI."

This next level of technological automation, spearheaded by AI, will help things get better "on paper."

"Companies will be more efficient. But we're also just going to have this feeling that something's kind of missing and inhuman and wrong," Giansiracusa said. 

"We've slowly been automating more and more aspects of our lives. AI is just automating more of the decision-making and the thinking and communication parts of it," Giansiracusa said. "But it's just a continuing technology, which is good and bad. It's not one or the other."
