What is AGI in AI, and why are people so worried about it?

We used to worry about AI becoming “sentient,” or that something called the “singularity” would occur and AIs would begin creating other AIs on their own. The new goalposts involve something called artificial general intelligence, or AGI, a term that’s increasingly being co-opted by AI marketing and hype.

Here’s what you need to know.

How do we define AGI?

AGI usually describes systems that can learn to accomplish any intellectual task a human being can perform, and perform it better. Stanford’s Institute for Human-Centered Artificial Intelligence offers an alternative, describing AGI as “broadly intelligent, context-aware machines . . . needed for effective social chatbots or human-robot interaction.” The consulting firm Gartner defines artificial general intelligence as “a form of AI that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks and domains. It can be applied to a much broader set of use cases and incorporates cognitive flexibility, adaptability, and general problem-solving skills.”

Gartner’s definition is particularly interesting because it nods at the aspect of AGI that makes us nervous: autonomy. Superintelligent systems of the future might be smart enough (and unsafe enough) to work outside a human operator’s awareness, or to work together toward goals they set for themselves.

What’s the difference between AGI and AI?

AGI is an advanced form of AI. Most AI systems today are “narrow AI” systems that do just one specific thing, like recognizing objects within videos, often at a cognitive level below a human’s. AGI refers to systems that are generalists; that is, they can learn to do a wide variety of tasks at a cognitive level equal to or greater than a human’s. Such a system might be used to help a human plan a complex trip one day and to find novel combinations of cancer drug compounds the next.

Should we fear AGI? 

Is it time to become concerned about AGI? Probably not. Current AI systems have not risen to the level of AGI. Not yet. But many people inside and outside the AI industry believe that the advent of large language models like GPT-4 has shortened the timeline for reaching that goal.

There’s currently much debate within AI circles about whether AGI systems are inherently dangerous. Some researchers believe they are, because their generalized knowledge and cognitive skill would let them invent their own plans and objectives. Other researchers believe that getting to AGI will be a gradual, iterative process, one that leaves time to build in thoughtful safety guardrails at every step.

How far away is AGI?

There’s a lot of disagreement over how soon the artificial general intelligence moment will arrive. Microsoft researchers say they’ve already seen “sparks” of AGI in GPT-4 (Microsoft is OpenAI’s largest investor, with a reported 49% stake in the startup’s for-profit arm). Anthropic CEO Dario Amodei says AGI will arrive in just two to three years. DeepMind cofounder Shane Legg predicts a 50% chance that AGI will arrive by 2028.

Google Brain cofounder and current Landing AI CEO Andrew Ng says the tech industry is still “very far” from achieving such generally capable systems. And he’s concerned about the misuse of the term itself. “The term AGI is so misunderstood,” he says.

“I think that it’s very muddy definitions of AGI that make people jump on the ‘are we getting close to AGI?’ question,” Ng says. “And the answer is no, unless you change the definition of AGI, in which case you could totally be there in three years or maybe even 30 years ago.”

Why AGI is still so divisive in the broader AI field

People may be stretching the definition of AGI to suit their own ends, Ng believes. “The problem with redefining things is people are so emotional, positive and negative; they have hopes and fears attached to the term AGI. And when you have companies that say they reached AGI because they changed the definition, it just generates a lot of hype.”

OpenAI’s definition of the term, in fact, has been somewhat flexible. The company, whose stated goal is to create AGI, defines artificial general intelligence in its charter (published in 2018) as “highly autonomous systems that outperform humans at most economically valuable work.” But OpenAI CEO Sam Altman has more recently defined AGI as “AI systems that are generally smarter than humans,” a seemingly lower bar to hit.

Hype can fuel interest and investment in a technology, but it can also create a bubble of expectations that, when unmet, eventually bursts. That’s perhaps the biggest risk to the current AI boom. Some very good things might result from advances in generative AI, but it will take time.

https://www.fastcompany.com/90990042/agi-ai-explained

Created: December 1, 2023, 20:50:04

