Good chance that AI will achieve Artificial General Intelligence in 5 years, says Google AI Boss

Oct 31, 2023 - 16:30
More than a decade ago, Shane Legg, co-founder of Google’s DeepMind artificial intelligence lab, made a bold prediction: that there was a 50-50 chance artificial intelligence (AI) would be as smart as humans by the year 2028. In a recent interview with tech podcaster Dwarkesh Patel, Legg reiterated his belief in this forecast, which he first made on his blog at the end of 2011.

This prediction holds significant weight, especially in light of the ever-increasing interest and investment in the field of AI. Sam Altman, CEO of OpenAI, has long championed the development of artificial general intelligence (AGI), a theoretical form of AI capable of performing intellectual tasks on par with humans, with the potential to benefit all of humanity. However, the achievement of AGI and the establishment of a universal definition for it remain uncertain challenges.

Legg’s journey towards his 2028 goalpost began as far back as 2001, when he read Ray Kurzweil’s groundbreaking book “The Age of Spiritual Machines,” which predicted a future where superhuman AIs would become a reality. Legg came to believe in two critical points from Kurzweil’s work: decades of exponential growth in computational power, and exponential growth in the amount of data generated worldwide. Given these trends, and the emergence of deep learning techniques that teach AI systems to process data in a manner inspired by the human brain, Legg posited at the start of the last decade that AGI was attainable in the coming years, provided no major disruptions occurred.

In the present day, Legg acknowledges certain caveats to his prediction regarding the AGI era.

The first is that the definition of AGI is inherently tied to human intelligence, which is difficult to define precisely because of its complexity. Legg acknowledges that no set of tests could ever cover every aspect of human intelligence. Nonetheless, he suggests that if researchers could assemble a broad battery of tests for human intelligence, and an AI model performed exceptionally well across all of them, it could reasonably be considered AGI.

The second caveat Legg highlights is the necessity to scale up AI training models significantly. This point is especially relevant in an era where AI companies are already consuming vast amounts of energy to develop large language models. Legg emphasizes the need to develop more scalable algorithms to handle the computational demands of AGI.

Legg’s assessment of our current progress towards AGI indicates that computational power has reached a level that could make it achievable. He identifies the “first unlocking step” as being the training of AI models on data of a scale beyond what a human could experience in a lifetime, a feat he believes the AI industry is ready to undertake.

Despite his optimism, Legg reiterates his personal belief that there is only a 50 per cent chance that researchers will achieve AGI before the end of this decade. His perspective offers a glimpse into the ongoing uncertainties and challenges that AI experts grapple with as they strive to reach the pinnacle of artificial intelligence.

(With input from agencies)
