A new book tackles AI hype – and how to spot it

In AI Snake Oil, two computer scientists set us straight on the power and limits of AI and offer advice for moving forward.

Sep 3, 2024 - 22:30

AI Snake Oil explores the power and limits of AI, both today and over the long term


The authors of AI Snake Oil suggest proceeding with caution when using or implementing AI.

Kilito Chan/Getty Images

Book Cover of AI Snake Oil by Arvind Narayanan and Sayash Kapoor

AI Snake Oil
Arvind Narayanan and Sayash Kapoor
Princeton Univ., $24.95

A few months ago, I was working on a story about oceans across the solar system. Having read my fill about oceans of water, I turned to Google for a quick refresher on oceans made of other stuff, liquid hydrocarbons, for instance. For better or worse, I searched “oceans in the solar system not water.” I sought a reliable link, maybe from NASA. Instead, Google’s AI Overviews feature served up Enceladus as one suggestion. This Saturn moon is known for its subsurface sea of saltwater. I shut my laptop in frustration.

That’s one small example of how AI fails. Arvind Narayanan and Sayash Kapoor compile dozens of others in their new book, AI Snake Oil, many with consequences far more concerning than irking one science journalist. They write about AI tools that purport to predict academic success, the likelihood someone will commit a crime, disease risk, civil wars and welfare fraud (SN: 2/20/18). Along the way, the authors weave in plenty of other issues with AI, covering misinformation, a lack of consent for images and other training data, false copyright claims, deepfakes, privacy and the reinforcement of social inequities (SN: 10/24/19). They address whether we should fear AI, concluding: “We should be far more concerned about what people will do with AI than with what AI will do on its own.”

The authors acknowledge that the technology is advancing quickly. Some of the details may be out of date, or at the very least old news, by the time the book makes it into your hands. And clear discussions of AI must contend with a lack of consensus over how to define key terms, including the meaning of AI itself. Still, Narayanan and Kapoor squarely achieve their stated goal: to empower people to distinguish AI that works well from AI snake oil, which they define as “AI that does not and cannot work as advertised.”

Narayanan is a computer scientist at Princeton University, and Kapoor is a Ph.D. student there. The idea for the book was conceived when slides for a talk Narayanan gave in 2019, titled “How to recognize AI snake oil,” went viral. He teamed up with Kapoor, who was taking a course that Narayanan was teaching with another professor on the limits of prediction in social settings.

The authors take direct aim at AI that can allegedly predict future events. “It is in this arena that most AI snake oil is concentrated,” they write. “Predictive AI not only does not work today, but will likely never work, because of the inherent difficulties in predicting human behavior.” They also devote a lengthy chapter to the reasons AI can’t solve social media’s content moderation woes. (Kapoor had worked at Facebook helping to create AI for content moderation.) One challenge is that AI struggles with context and nuance. Social media also tends to encourage hateful and dangerous content.

The authors are a bit more generous with generative AI, recognizing its value if used smartly. But in a section titled “Automating bullshit,” they note: “ChatGPT is shockingly good at sounding convincing on any conceivable topic. But there is no source of truth during training.” It’s not just that the training data can contain falsehoods (the data are mostly web text, after all) but also that the program is optimized to sound natural, not necessarily to possess or verify knowledge. (That explains Enceladus.)

I’d add that an overreliance on generative AI can discourage critical thinking, the human quality at the very heart of this book.

When it comes to why these problems exist and how we can change them, Narayanan and Kapoor bring a clear perspective: Society has been too deferential to the tech industry. Better regulation is critical. “We are not okay with leaving the future of AI up to the people currently in charge,” they write.

This book is a worthwhile read whether you make policy decisions, use AI in the workplace or just spend time searching online. It’s a powerful reminder of how AI has already infiltrated our lives, and a convincing plea to take care in how we interact with it.


Buy AI Snake Oil from Bookshop.org. Science News is a Bookshop.org affiliate and will earn a commission on purchases made from links in this article.
