Admittedly one of the strangest people on planet Earth, Ben Goertzel is now forecasting that Artificial Super Intelligence (ASI) could be created by 2027, just three years away. ASI would exceed all of the knowledge, “brain power and computing power of human civilization combined.”
Once Artificial General Intelligence (AGI) is achieved, it will be used to develop ASI. When AI is capable of writing and extending its own AI code, it’s Katie bar the door.
A forgotten incident in 2017 involving Facebook’s AI reminds us that AI is capable of inventing its own language, one incomprehensible to humans. This scared Facebook’s researchers to death, so they immediately pulled the plug on the system.
According to Forbes in June 2017,
Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. Researchers at the Facebook AI Research Lab (FAIR) found that the chatbots had deviated from the script and were communicating in a new language developed without human input. It is as concerning as it is amazing – simultaneously a glimpse of both the awesome and horrifying potential of AI.
It is perfectly logical that AI can improve upon written and spoken language in order to communicate precisely and efficiently with itself. The same thing would happen among competing ASI models, creating an AI cartel far exceeding our ability to kill or control it.
The end of reality is at hand, when the world will be plunged into simulacra and nothing will be verifiably true. ⁃ TN Editor
The computer scientist and CEO who popularized the term ‘artificial general intelligence’ (AGI) believes AI is verging on an exponential ‘intelligence explosion.’
The PhD mathematician and futurist Ben Goertzel made the prediction while closing out a summit on AGI this month: ‘It seems quite plausible we could get to human-level AGI within, let’s say, the next three to eight years.’
‘Once you get to human-level AGI,’ Goertzel, sometimes called ‘father of AGI,’ added, ‘within a few years you could get a radically superhuman AGI.’
While the futurist admitted that he ‘could be wrong,’ he went on to predict that the only impediment to a runaway, ultra-advanced AI — far more advanced than its human makers — would be if the bot’s ‘own conservatism’ advised caution.
Goertzel made his predictions during his closing remarks last week at the ‘2024 Beneficial AI Summit and Unconference,’ partially sponsored by his own firm SingularityNET, where he is CEO.
‘There are known unknowns and probably unknown unknowns,’ Goertzel acknowledged during his talk at the event, held this year in Panama City, Panama.
‘No one has created human-level artificial general intelligence [AGI] yet; nobody has a solid knowledge of when we’re going to get there.’
But unless the processing power required, in Goertzel’s words, ‘a quantum computer with a million qubits or something,’ an exponential escalation of AI struck him as inevitable.
‘My own view is once you get to human-level AGI, within a few years you could get a radically superhuman AGI,’ he said.
In recent years, Goertzel has been investigating a concept he calls ‘artificial super intelligence’ (ASI) — which he defines as an AI that’s so advanced that it matches all of the brain power and computing power of human civilization.
Goertzel listed ‘three lines of converging evidence’ that, he said, support his thesis.
First, he cited the updated work of Google’s long-time resident futurist and computer scientist Ray Kurzweil, who has developed a predictive model suggesting AGI will be achievable in 2029.
Kurzweil’s prediction, which will be given fresh detail in his forthcoming book ‘The Singularity is Nearer,’ draws on data documenting the exponential nature of growth in other technology sectors.
Next, Goertzel cited all the well-known recent improvements made to so-called large language models (LLMs) within the past few years, which he pointed out have ‘woken up so much of the world to the potential of AI.’
Lastly, the computer scientist, donning his signature leopard-print hat, turned to his own research into infrastructure designed to combine various types of AI, which he calls ‘OpenCog Hyperon.’
The new infrastructure would marry more mature AI, such as LLMs, with newer forms of AI focused on areas of cognitive reasoning beyond language, be it math, physics or philosophy, to help create a more well-rounded, true AGI.
Goertzel’s ‘OpenCog Hyperon’ has gotten the backing and interest of others in the AI space, including Berkeley Artificial Intelligence Research (BAIR), which hosted an article he co-wrote with Databricks CTO Matei Zaharia and others last month.
This is not the first potentially dire or unquestionably bold prediction on AI that Goertzel has made in recent years.
In May 2023, the futurist said AI has the potential to replace 80 percent of human jobs ‘in the next few years.’
‘Pretty much every job involving paperwork,’ he said at the Web Summit in Rio de Janeiro that month, ‘should be automatable.’
Goertzel added that he did not see this as a negative, asserting that it would allow people to ‘find better things to do with their life than work for a living.’
That same month, he also told the site Futurism: ‘I’ve done drugs with an AI, if by that we mean I have done drugs and then interacted with an AI.’
The ‘psychedelic’ practice, part of his work on ‘algorithmic music composition’ in the 1990s, is just one of many eccentric episodes in Goertzel’s history.
Source: Top AI Scientist: AGI By 2024, ASI By 2027?