AGI explained: Artificial intelligence with humanlike cognition

As artificial intelligence, particularly generative AI, gains traction in the business world after years of promise, a new generation of AI is starting to emerge, at least in the hype cycle. It’s not agentic AI, robotic AI, or physical AI. It is artificial general intelligence, or AGI.

Two years ago, fear of AGI run amok prompted 1,000 tech leaders and AI researchers to sign an open letter calling for a pause on new AI model rollouts. That pause obviously didn’t happen, nor has AGI arrived.

AI company leaders such as OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei have said AGI is imminent. Other AI experts are more conservative, with estimates ranging from “within the next five to ten years” to “decades” to “never.”

Confused? Join the club.

So what is AGI?

Interestingly, Altman himself has recently soured on the phrase, saying artificial general intelligence is “not a super useful term” because people use it to mean different things. But it’s most commonly defined as AI that can match human cognitive abilities. According to its proponents, AGI will be able to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or greater than that of a human being.

In other words, it will be able to think for itself.

Gartner defines AGI as AI that can autonomously learn and adapt in pursuit of predetermined and/or novel goals, according to Marty Resnick, VP analyst in the emerging trends and technologies group. “It can do new things, it can create new things, come up with new ideas, and meet or exceed the cognitive abilities of human beings,” he said.

With significantly expanded capacity to make decisions and take action, advocates say, AGI (sometimes referred to as “strong AI”) is everything traditional AI (known as “narrow AI”) is not:

Traditional AI requires human intervention, at least to start. AGI takes initiative.

Traditional AI has a single use case, such as a specific task. AGI does many things.

Traditional AI does what it is programmed to do; even with machine learning, it sticks to the one trick it knows. AGI has cognitive flexibility and adaptability on par with human intelligence and is capable of reasoning and problem-solving.

AI has been a work in progress for decades and has made considerable strides in recent years. However, AI tools are still single-use, lacking self-awareness, context, and the ability to reason, all hallmarks of human intelligence. If AI can drive a car, AGI can drive a car, repair it, wash it, and register it, supporters say.

“Most AI today is task-specific,” said Deborah Golden, US chief innovation officer with Deloitte. “It can chat, label images, write code, but only within what it’s been trained to do. AGI is different in that it can learn across tasks, transfer knowledge, and solve unfamiliar problems.”

While some researchers and industry watchers use the term AGI to signify any AI system that meets or exceeds human cognitive capabilities, others say AGI is merely an interim step on the path to artificial superintelligence (ASI) — AI systems that far surpass human intelligence. Regardless, AGI is still a ways off, and ASI even further.

The promise — and dangers — of AGI

AGI believers say its potential uses are practically without limit. One frequently cited example is autonomous scientific and medical research.

Traditional AI is trained on specific tasks, and it requires both human initiative and intervention. Proponents say AGI could act as a self-directed researcher capable of independently generating, testing, and refining scientific knowledge across domains, thinking abstractly, and drawing insights from unrelated fields.

It could begin by analyzing vast amounts of existing literature, experimental data, and theoretical models to identify gaps in current understanding and areas where research could be extended or improved. Recognizing patterns, anomalies, and correlations in the data, it would use abductive reasoning, inferring the most likely explanation for what it observes, to propose plausible new hypotheses.
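In computational terms, abduction can be framed as scoring candidate explanations against the evidence and proposing the one that explains it best. Here is a minimal Python sketch of that idea; the hypotheses, priors, and likelihoods are invented placeholders for illustration, not data from any real research system.

# Minimal sketch of abductive reasoning: pick the hypothesis that best
# explains the observed evidence. All hypotheses and probabilities here
# are hypothetical placeholders.

def abduce(evidence, hypotheses):
    """Return the hypothesis with the highest score P(H) * P(evidence | H),
    i.e., the 'most likely explanation' for what was observed."""
    def score(h):
        likelihood = 1.0
        for e in evidence:
            # Evidence a hypothesis can't explain gets a small default probability.
            likelihood *= h["likelihood"].get(e, 0.01)
        return h["prior"] * likelihood
    return max(hypotheses, key=score)

hypotheses = [
    {"name": "instrument drift", "prior": 0.6,
     "likelihood": {"anomalous readings": 0.7, "gradual onset": 0.8}},
    {"name": "novel phenomenon", "prior": 0.1,
     "likelihood": {"anomalous readings": 0.9, "gradual onset": 0.2}},
]

best = abduce(["anomalous readings", "gradual onset"], hypotheses)
print(best["name"])  # "instrument drift" wins under these made-up numbers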

It would then design and conduct experiments autonomously, not only running simulations but controlling the equipment and robotics to carry out such experiments. It would examine and interpret the results, then refine or form new hypotheses and run further experiments.

Every step of the way, AGI could interpret complex, ambiguous data, recognizing patterns and anomalies that humans or traditional AIs might miss. For example, it could refine existing models by first noticing, then removing or updating false, inaccurate, or biased data used in large language models. This paves the way to self-improvement, where AGI agents could learn how to be better scientists, improving their own methods and tools.

It could also engage in cross-lab collaboration, where AGI agents could find other researchers (or research agents) working on similar projects and connect with them to share data, avoid duplication, and develop insights faster.

The most promising aspects of AGI (acting independently, improving itself, and cooperating with other AI agents) are also the most alarming. Just recently, OpenAI models not only refused to shut down when researchers told them to but actively sabotaged the scripts trying to shut them down. And researchers in China found that models from OpenAI, Anthropic, Meta, DeepSeek, and Alibaba all showed self-preservation behaviors including blackmail, sabotage, self-replication, and escaping containment.

These are generative AI models in use now. If today’s genAI models rebel like sentient beings fighting for survival, imagine the damage that AGI could do with its advanced functionality, command and control capabilities, and reach.

Key technologies required for AGI

Advanced machine learning will be necessary to go beyond the narrow, single-use intelligence we have now to encompass the continual learning, meta-learning, and unsupervised learning that AGI requires. That means AGI will demand significant advances in technological capability.

“We’re not going to get there from today’s technology,” said Golden. “Today’s technology … does amazingly great things, and it’s going to continue to excel, by the way. But what AGI aims to do is to have cross-domain capability, where, again, you can think about understanding the principles of both astrophysics [and] poetry as easily as if you understood one singularly.”

That means the large language models (LLMs) being built and tuned today will be useless in the brave new world of AGI. Golden notes that as LLMs are trained, they learn within the silos they are built for, not across domains the way AGI is supposed to.

“AGI will require completely new algorithmic breakthroughs, new architectures that we don’t know yet,” said Raj Yavatkar, chief technology officer with Juniper Networks.

He cites the need for causal world models, which go beyond observation, pattern-matching, and prediction to incorporate cause-and-effect reasoning and decision-making.

“A World Model enables AI to develop a structured, dynamic understanding of its environment, capturing relationships, rules, and causal links,” a team of AI researchers from universities in Spain and Austria wrote in a recent paper. “With such a model, AI can reason about cause and effect, simulate future outcomes, and refine understanding through real-world interactions.”
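To make the observation-versus-intervention distinction concrete, here is a toy Python sketch of a causal world model in the spirit of Judea Pearl’s do-operator; the variables and probabilities are invented for illustration. Merely observing that a sprinkler is running changes what the model should believe about rain, but forcing the sprinkler on does not, because the intervention severs the causal link from rain to sprinkler.

import random

# Toy structural causal model: rain influences whether the sprinkler runs,
# and both rain and the sprinkler make the grass wet. All probabilities
# are invented for illustration.
def sample(do_sprinkler=None):
    rain = random.random() < 0.3
    if do_sprinkler is None:
        sprinkler = random.random() < (0.1 if rain else 0.5)  # natural mechanism
    else:
        sprinkler = do_sprinkler  # intervention: override the mechanism
    wet = rain or sprinkler
    return rain, sprinkler, wet

N = 100_000

# Observation: seeing the sprinkler on is evidence about rain (here, against it).
obs = [rain for rain, sprinkler, _ in (sample() for _ in range(N)) if sprinkler]
print(sum(obs) / len(obs))    # P(rain | sprinkler is on) is roughly 0.08

# Intervention: forcing the sprinkler on tells the model nothing about rain.
intv = [rain for rain, _, _ in (sample(do_sprinkler=True) for _ in range(N))]
print(sum(intv) / len(intv))  # P(rain | do(sprinkler on)) is roughly 0.30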

Juniper’s Yavatkar noted that these models must be built from very high-dimensional data (a term used for data sets with a large number of features, variables, or attributes). For causal world models, “that means millions of dimensions, probably continuous real-world data,” he said.

Another fundamental change from AI to AGI is inferencing. Right now, LLMs go through a two-step process of training and inference: training means analyzing enormous data sets for patterns and correlations, and inference means applying that learned knowledge to new inputs or data.

Training requires significant compute power, usually relying on data center GPUs running at petaflop levels for days, weeks, or even months and consuming gigawatts of power. Inference has much lower power requirements on a per-query basis, but every query to ChatGPT or Gemini triggers an inference run. Multiply that modest per-query draw by the number of queries each service receives every day, and inference also becomes a massive power consumer.
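A back-of-the-envelope calculation shows why query volume matters. Every number below is an illustrative placeholder, not a measurement of any real model or service.

# Illustrative arithmetic only: all figures are hypothetical placeholders.
TRAIN_GPU_HOURS = 5_000_000        # one-time training run: weeks on thousands of GPUs
INFER_GPU_SECONDS_PER_QUERY = 0.5  # one forward pass per query
QUERIES_PER_DAY = 100_000_000      # multiplied across a popular service

daily_inference_gpu_hours = QUERIES_PER_DAY * INFER_GPU_SECONDS_PER_QUERY / 3600
print(f"{daily_inference_gpu_hours:,.0f} GPU-hours of inference per day")        # ~13,889

days_to_match = TRAIN_GPU_HOURS / daily_inference_gpu_hours
print(f"{days_to_match:,.0f} days for inference to overtake the training cost")  # ~360

Under these made-up figures, roughly a year of serving queries costs as much compute as the original training run, which is why inference is becoming a first-order concern for power planning.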

Because AGI will (in theory) be constantly learning and adapting, inferencing could potentially consume exponentially more resources. “[AGI] is consuming and using real-world data continuously, which is a different level and scale of computational power,” said Yavatkar.

That means computation methods have to change. Digital computing and GPUs are powering today’s AI models, but they won’t be enough for the AGI world of constant computation and processing.

“We need new breakthroughs in architecture and algorithmic knowledge,” said Yavatkar. “That’s what researchers are working on now. We need completely new algorithms. [The current] approach is not going to be the one used for AGI, in my opinion. Whatever it is will require lots of computational power.”

Yavatkar believes we need a new type of computational processing that combines quantum and analog computing.

Unlike digital computers, which compute in discrete bits expressed in binary code (0s and 1s), he said, analog computers compute using continuous physical signals such as electrical currents or voltages. That may make analog computing better suited to the AGI world of constant computation and processing.

Quantum computers, still in their infancy, tap into principles of quantum mechanics to process data in ways that traditional computers can’t, making them able to solve complex problems much faster. “Quantum… provides a kind of computational ability which is completely different from conventional computing, just several orders of magnitude faster,” Yavatkar said.

He posits that AGI will require the unique qualities of both quantum computing and analog computing, neither of which is widely available in the marketplace today. Developing a type of processing that combines the two will add to the timeline for getting to AGI.

The road to AGI

Deloitte’s Golden believes that building AGI systems will take more than compute power. “I think there’s going to have to be new systems that amplify potential, rather than just looking at what automates it, what evolves it. It’s like an evolutionary partnership between the tech, the people, and the ecosystem — that symbiotic blueprint that’s going to have to be trustworthy, that’s going to have to be ethical. That blueprint doesn’t actually exist today,” she said.

It can be a bit daunting to know that the investments being made in AI today will not translate directly into AGI in the future. But that doesn’t mean they are wasted. AI and AGI will continue to coexist, each with its own use cases, and your investments in traditional AI now will continue to bear fruit in the future.

Because so much technology has yet to be invented, both Golden and Yavatkar believe AGI is at least five if not ten years away from showing up. But both say it will arrive, and Golden believes that it will have a purpose and not just be a solution in search of a problem.

“The intent is to do real-world reasoning, real-world understanding, goal-driven reasoning — you know, looking at ways that you can actually progress while understanding the sense of the real world,” she said.

Gartner’s Resnick has a different take. He believes that for AGI to be achieved, certain programmatic plateaus must first be reached, and those breakthroughs are so far beyond our current level of programming that it’s hard to say when, if ever, they will be achieved.

“If you’re just training AGI with data and code and numbers, then no, I don’t think you’re training AGI. If you’re training AGI with human interactions and human experiences, and it’s essentially growing from child to adult with those experiences, then yes, I do think that there’s the possibility of AGI,” he said.

Is it possible for technology to achieve wisdom? Resnick cites the 1983 movie WarGames, where an AI computer system learns that the only way to win nuclear war is not to play. “When AI gets wisdom and understanding, then we’re at AGI,” he said.

However, Gartner does have a target date in mind for when we will start to see glimmerings of AGI. “Around 2035 is when we’re going to start seeing real breakthroughs towards AGI,” Resnick said.

Over the next 10 years, he predicts slow, steady progress but not a giant breakthrough. “It’s not something that we’re going to flip a switch one day and wake up and we’ve got AGI. It’s going to be incremental,” he said.