Superintelligence not a ‘sci-fi risk’ and possible in ‘next decade’, says Sam Altman

Amaar Chowdhury


OpenAI CEO Sam Altman and Chief Scientist Ilya Sutskever recently took part in an interview in Tel Aviv about the risks that AI might pose. Perhaps the most significant remark came from Altman, who said superintelligence should not be looked at as a “sci-fi risk … but something we may have to confront in the next decade”.

In light of the recent open letters signed by Altman and other leading researchers, it’s no surprise that the coming years are going to bring enormous technological advances. There’s a clear emphasis on the need for AI regulation, which presumably stems from industry leaders recognising its risks. Altman suggesting that we could have to ‘confront’ a superintelligence in the next decade doesn’t bode well, especially considering Ilya Sutskever’s previous comment that it would be a “mistake” to develop a superintelligence that we don’t have the capability to “control”.

To be clear, the OpenAI CEO isn’t saying that we will definitely see an artificial superintelligence in the next decade, only that it’s a possibility. Even so, Altman is clearly wary that it could happen.

An artificial superintelligence, or ASI, is a form of AI with capabilities far beyond those of any human being, and it is one of the prospects causing the most concern in tech communities. The concern is widespread enough that industry leaders have warned we must mitigate the “risk of extinction from AI” through regulation.

What may be reassuring, though, is that GPT-5 doesn’t appear to be “close to the start” of development, according to Altman. Speaking during a recent visit to Delhi, the OpenAI CEO said that “more safety audits” need to take place before work on the next GPT model can begin, according to the Economic Times. Whenever it arrives, GPT-5 would likely take us a step closer to an ASI, or at the very least an AGI (Artificial General Intelligence).

Despite the doom and gloom surrounding a possible superintelligence, Altman also noted that an ASI could help us understand the “mysteries of the universe” and help humanity solve climate change.

You can watch the full interview here.

Cover image credited to TAUVOD.