Artificial intelligence that can rival humans at any task is still a long way off, but it is only a matter of time before it arrives, according to the CEO of Google DeepMind.
During a briefing at DeepMind’s offices in London on Monday, Demis Hassabis said he believes artificial general intelligence (AGI), AI that is as smart as or smarter than humans, will likely begin to emerge in the next five to ten years.
“I think that the systems we have today are quite passive, but there are still many things they cannot accomplish. However, I believe that in the next five to ten years, many of these capabilities will begin to surface, and we’ll move closer to what we refer to as artificial general intelligence,” Hassabis remarked.
Hassabis described AGI as “a system capable of demonstrating all the complex abilities that humans possess.”
“We aren’t there yet. These systems are impressive in certain areas. Yet there are other tasks they still cannot perform, and we have significant research ahead of us before that happens,” Hassabis stated.
Hassabis is not alone in asserting that it will take time for AGI to materialize. The CEO of Chinese tech giant Baidu, Robin Li, mentioned last year that he believes AGI is “more than 10 years away,” countering some of his colleagues’ enthusiastic predictions about this breakthrough occurring much sooner.
There’s still a considerable amount of time ahead.
Hassabis’ prediction puts AGI on a longer timeline than some of his industry counterparts have suggested.
Dario Amodei, CEO of AI startup Anthropic, shared with CNBC at the World Economic Forum in Davos, Switzerland, in January that he foresees a type of AI that “surpasses nearly all humans in almost all tasks” appearing in the “next two or three years.”
Some tech leaders predict AGI could arrive even earlier. Cisco’s Chief Product Officer Jeetu Patel believes there’s a possibility that we could witness an example of AGI as soon as this year. “There are three main phases” to AI, Patel explained in a CNBC interview during the Mobile World Congress in Barcelona earlier this month.
“There’s the basic AI that we are all experiencing right now. Following that, there is artificial general intelligence, where cognitive abilities align with those of humans. Finally, there’s what is referred to as superintelligence,” Patel said.
“I anticipate we will observe significant evidence of AGI in action by 2025. We aren’t talking about years away,” he added. “I believe superintelligence is, at best, just a few years out.”
Artificial superintelligence, or ASI, is expected to emerge after AGI and to exceed human intelligence. Nonetheless, “no one truly knows” when such a landmark event will occur, Hassabis mentioned on Monday.
Last year, Tesla CEO Elon Musk anticipated that AGI would likely be available by 2026, while OpenAI CEO Sam Altman claimed that such a system could be developed in the “reasonably close future.”
What does it take to achieve AGI?
Hassabis indicated that the primary obstacle to attaining artificial general intelligence lies in advancing current AI systems to a stage where they can comprehend context from the real world.
While it has been feasible to create systems that can analyze problems and carry out tasks independently in gaming contexts — such as the intricate strategy board game Go — applying this technology to real-world scenarios remains challenging.
“The issue is how quickly we can generalize planning concepts and behaviors associated with agentic actions, planning, and reasoning, and then extend that to real-world applications, factoring in elements like world models — frameworks capable of understanding our environment,” Hassabis said.
“And I believe we’ve made significant strides with world models over the past couple of years,” he added. “Thus, the critical question is: what is the most effective way to merge that with these planning algorithms?”
Hassabis and Thomas Kurian, the CEO of Google’s cloud computing division, indicated that “multi-agent” AI systems represent a technological breakthrough that is increasingly gaining attention behind the scenes.
Hassabis noted that substantial efforts are underway to reach this milestone. One example he cited is DeepMind’s work in training AI agents to navigate the well-known strategy game “StarCraft.”
“We have conducted extensive research on that, working with games like StarCraft in the past, where a society of agents or a league of agents can either compete or cooperate,” DeepMind’s chief explained.
“When considering agent-to-agent communication, that’s also what we are engaged in — enabling an agent to articulate itself … What are your capabilities? What types of tools do you utilize?” Kurian said.
“Those are all the components necessary to interact with an agent, and once that interface is established, other agents can communicate with it,” he added.
What is artificial general intelligence (AGI)?
Artificial general intelligence (AGI) refers to the embodiment of generalized human cognitive capabilities within software, enabling the AGI system to derive solutions when confronted with unfamiliar tasks. The purpose of an AGI system is to execute any task that a human can perform.
There are varying definitions of AGI, as experts from different fields interpret human intelligence through diverse lenses. For instance, computer scientists often define human intelligence in terms of goal achievement, whereas psychologists typically characterize general intelligence by adaptability or survival mechanisms.
AGI is also known as strong artificial intelligence (AI), in contrast to weak or narrow AI, which is the application of artificial intelligence to specific tasks or problems. Examples of narrow AI include IBM’s Watson supercomputer, expert systems, and autonomous vehicles.
What can artificial general intelligence achieve?
In the realm of computer science, AGI represents an intelligent system endowed with widespread or complete knowledge and cognitive processing abilities. As of now, no true AGI systems have been developed; they remain a concept of science fiction. The expected performance of such systems would be indistinguishable from that of a human, and their expansive intellectual capabilities would surpass those of humans due to their capacity to access and analyze massive data sets at extraordinary speeds.
A legitimate AGI should be capable of performing human-level tasks and abilities that no current computer can accomplish. While present AI can execute numerous tasks, it doesn’t reach the level of success that would classify it as human or general intelligence.
An AGI system should possess the following capabilities and understanding:
- Abstract reasoning.
- Background knowledge.
- Common sense.
- Understanding cause and effect.
- Ability for transfer learning.
Examples of AGI capabilities include the following:
1. Creativity: An AGI system would theoretically have the ability to read and understand human-written code and enhance it.
2. Sensory perception: AGI would excel at recognizing colors, a subjective perceptual skill, and at perceiving depth and three-dimensionality in static images.
3. Fine motor skills: For example, it would be able to retrieve a set of keys from a pocket, a task that requires imaginative perception.
4. Natural language understanding (NLU): Given the context-dependent nature of human language, AGI systems would have an intuitive grasp of NLU.
5. Navigation: The existing Global Positioning System (GPS) can pinpoint geographic locations. A fully developed AGI would surpass current systems at projecting a route of movement through a physical environment.
AI researchers also anticipate that AGI systems will encompass higher-level capabilities, such as the ability to:
- Manage a variety of learning types and algorithms.
- Establish fixed structures for all tasks.
- Understand symbolic systems.
- Utilize different forms of knowledge.
- Comprehend belief systems.
- Engage in metacognition and employ metacognitive knowledge.
AGI vs. AI: What distinguishes the two?
Current artificial intelligence capabilities are described as narrow AI in contrast to artificial general intelligence. While AGI remains theoretical, narrow AI is in active use today.
Theoretically, AGI should perform any task that a human is capable of and demonstrate a range of intellectual strengths across various areas without human assistance. Its performance ought to match or surpass human abilities in solving problems in most domains.
On the other hand, weak AI excels in addressing specific tasks or problem types. Numerous existing AI systems integrate machine learning (ML), deep learning (a subset of machine learning), reinforcement learning, and natural language processing (NLP) for self-enhancing capabilities to resolve particular issues. Nevertheless, these technologies do not come close to the overall functionality of the human brain.
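The gap between narrow and general systems can be made concrete with a deliberately trivial sketch: a hypothetical, stdlib-only Python classifier that performs exactly one task, keyword-based sentiment scoring, and nothing else. The word lists and function name here are illustrative assumptions, not any real system’s API.

```python
# A deliberately narrow "AI": keyword-based sentiment scoring.
# It handles exactly one task and cannot generalize beyond it --
# the defining contrast with a hypothetical AGI.

POSITIVE = {"great", "good", "excellent", "love", "impressive"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "flawed"}

def sentiment(text: str) -> str:
    """Classify text as positive, negative, or neutral by counting keywords."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("an impressive and excellent result"))  # positive
print(sentiment("a flawed, terrible answer"))           # negative
```

Ask this classifier to plan a route, play Go, or explain its own reasoning and it has no pathway to an answer; that single-task ceiling is what “narrow” means here.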
While genuine AGI systems are not currently available, there are examples of narrow AI systems that either match or exceed human capabilities in certain domains. Research in artificial intelligence is focused on these systems and potential developments regarding AGI in the future.
Here are some instances of these systems:
- IBM’s Watson. Watson and other advanced supercomputers possess the capability to perform calculations beyond the capacity of typical computers. They combine their substantial computing power with AI to execute scientific and engineering tasks that were previously unfeasible, such as modeling the Big Bang or the human brain.
- Expert systems. These AI-driven systems replicate human decision-making abilities. They can suggest medical treatments based on patient information and forecast molecular structures, for example.
- Self-driving cars. These AI-operated vehicles identify other cars, pedestrians, and objects on the road while complying with traffic laws.
- ROSS Intelligence. ROSS is a legal expert system often referred to as the AI attorney. It can sift through data from about 1 billion documents, analyze that information, and deliver accurate answers to complex questions in less than three seconds.
- AlphaGo. This is another instance of narrow intelligence that excels in solving a specific type of problem. AlphaGo is a program designed to play the board game Go, which is a complex game challenging for humans to master. In 2016, AlphaGo defeated world champion Lee Sedol in a five-game series.
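For a sense of how a program can master one game while knowing nothing else, here is a toy sketch of exhaustive minimax search on tic-tac-toe. This is only a distant, simplified cousin of AlphaGo’s actual approach (Monte Carlo tree search guided by deep neural networks, needed because full-board Go cannot be searched exhaustively); the function names and scoring scheme are illustrative assumptions.

```python
# Toy sketch of game-tree search: exhaustive minimax for tic-tac-toe.
# A program built this way plays one game perfectly and can do
# nothing else -- the hallmark of narrow AI.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from player's view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # board full: draw
    best_score, best_move = -2, None
    opponent = "O" if player == "X" else "X"
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)
        board[m] = " "
        if -score > best_score:  # opponent's loss is our gain
            best_score, best_move = -score, m
    return best_score, best_move

# With perfect play by both sides, tic-tac-toe is a draw (score 0).
score, move = minimax([" "] * 9, "X")
print(score, move)
```

AlphaGo needed learned evaluation precisely because this brute-force enumeration, which is feasible for tic-tac-toe’s few hundred thousand positions, is hopeless for Go’s astronomically larger game tree.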
- GPT language models. GPT-3 and GPT-4 are versions of OpenAI’s Generative Pre-trained Transformer, a program that can automatically generate human-like text. This technology frequently emulates general human intelligence: in some cases the text it produces is indistinguishable from a human’s, though it can still be flawed.
- Music AIs. Dadabots is an AI algorithm that, when given a selection of existing music, can create a continuous stream that approximates that music.
If AGI were to be applied to some of the examples mentioned, it could enhance their effectiveness. For instance, self-driving cars still need a human present to make decisions in uncertain situations. This applies similarly to music generation algorithms, language models, and legal systems. These areas contain tasks that AI can automate, as well as those that necessitate a higher level of abstraction and human intelligence.
What lies ahead for AGI?
Numerous experts engaged in AI research are doubtful that AGI will ever become a reality. Some question its desirability.
Theoretical physicist, cosmologist, and author Stephen Hawking cautioned about the risks of AGI in a 2014 interview with the BBC. “The creation of complete artificial intelligence could signal the end of humanity,” he stated. “It would evolve independently and redesign itself at an accelerating pace. Humans, constrained by slow biological evolution, would be unable to compete and would ultimately be replaced.”
Certain AI specialists anticipate the ongoing advancement of AGI. In an interview at the 2017 South by Southwest Conference, inventor and futurist Ray Kurzweil predicted that computers will reach human levels of intelligence by 2029. Kurzweil has also forecasted that AI will evolve at an exponential rate, leading to breakthroughs that allow it to function at levels far beyond human understanding and control. This stage of artificial superintelligence is known as the singularity. Artificial general intelligence is one of the forms of AI that will lead to the eventual emergence of artificial superintelligence.
In 2022, this prospect became significantly more tangible, driven by advancements in generative AI that captivated the world. With the launch of ChatGPT in November 2022 and the arrival of other accessible generative AI interfaces, users globally encountered AI software capable of comprehending human text prompts and providing answers across an extensive array of subjects, although not always accurately. These generative AI models have showcased their ability to produce diverse types of content, from poetry and product descriptions to coding and synthetic data. Image generation systems like DALL-E are also transforming the visual domain, creating images that imitate the work of renowned artists or photographs, alongside medical images, 3D object models, and videos.
Despite these remarkable abilities, the models’ known flaws and risks have become apparent to users, showing that they still fall short of fully autonomous AGI. Whether because of their tendency to produce inaccuracies and misinformation or their lack of access to current information, human oversight remains necessary to limit potential societal harm.