Why everyone seems to disagree on how to define Artificial General Intelligence

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.

At the TED AI conference in SF, little consensus on AGI 

We are just now seeing the first applications of generative AI, but lots of people in the AI field are already thinking about the next frontier—Artificial General Intelligence. AGI was certainly on the minds of many at the TED AI conference in San Francisco Tuesday. But I didn’t hear much consensus on when AGI systems will arrive, or on how we should define AGI in the first place.

The term AGI usually describes systems that can learn to accomplish any intellectual task a human being can perform. Others reserve it for systems that can learn completely new tasks without the help of explicit instructions or examples in their training data. Ilya Sutskever, the chief scientist at OpenAI (whose stated goal is to eventually build AGI systems), gave a fairly conventional (if vague) definition onstage at the conference, saying that meeting the bar for AGI requires a system that can be taught to do anything a human can be taught to do. But OpenAI has used a less demanding definition in the past—defining AGI as systems that surpass human capabilities in a majority of economically valuable tasks. One source at the event, who spoke on the condition of anonymity, told me that AI companies are beginning to manipulate the definition of the term in order to lower the bar for claiming AGI capabilities. The first company to achieve some definition of AGI would get lots of attention—and probably an increase in value.

Most of the AI industry believes that transformer models (like the one that powers ChatGPT) are the path to AGI, and that dramatic progress on such models has shortened the timeline for reaching that goal. Microsoft researchers say they’ve already seen “sparks” of AGI in GPT-4 (Microsoft owns 49% of OpenAI). Anthropic CEO Dario Amodei says AGI will arrive in just two to three years. DeepMind cofounder Shane Legg predicts a 50% chance that AGI will arrive by 2028.

The definition matters because it could affect how urgently AI companies build safety features into their models to help mitigate the potential harms of such systems, which are very real. Not only could powerful AGI be used by bad actors to harm others, but it seems possible that such systems could even grow and learn independently of human beings. Obviously, tech companies should be spending a lot of time and energy on safeguarding the models they’ve already built. And they are investing in safety (and certainly talking a lot about it). But a type of arms race is underway, and the economic carrot of building bigger and more performant models is overwhelming any idea of developing AI in slower, safer ways.

Stanford releases its transparency report card for AI titans

Earlier today, Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) released its inaugural Foundation Model Transparency Index (FMTI), plainly laying out the parameters for judging a model’s transparency. Co-developed by a multidisciplinary team from Stanford, MIT, and Princeton, the FMTI grades companies on their disclosure of 100 different aspects of their foundation models, including how the tech was built and how it’s used in actual applications.

HAI was among the first to warn against the dangers of large AI models, and to suggest the tech ought to be developed in the open, in full view of the AI research community and the public. But as AI becomes a big business, and competition to build the best models intensifies, transparency has suffered, says HAI’s Percy Liang, who directs the Stanford Center for Research on Foundation Models.

This initial version of the index, which Liang says will be updated on an ongoing basis, grades the 10 biggest model developers (OpenAI, Anthropic, Meta, et al.) and finds that, indeed, there’s lots of room for improvement: Meta was the only company that scored higher than 50% on transparency. Interestingly, Anthropic, an offshoot of OpenAI with a focus on safety and transparency, scored lower than OpenAI.

Marc Andreessen: The poster boy for Silicon Valley’s “naive optimism”?

In his recent book, The Coming Wave, DeepMind cofounder Mustafa Suleyman describes the Silicon Valley “naive optimist” as someone who willingly ignores the possible ill effects of new technology (in this case, AI) and presses forward without giving much thought to building in safeguards. Think: someone who moves fast and breaks things (like children’s self-esteem, or democracy). Superinvestor Marc Andreessen’s latest screed, titled “The Techno-Optimist Manifesto,” seems to epitomize everything Suleyman warns against. Andreessen, whose net worth reportedly sits at around $1.8 billion, is a longtime investor in AI companies and stands to reap major rewards if some of his bets pay off. Here are a few rich excerpts from Andreessen’s piece:

  • “We believe Artificial Intelligence is our alchemy, our Philosopher’s Stone—we are literally making sand think.”
  • “We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.”
  • “Our enemy is the Precautionary Principle, which would have prevented virtually all progress since man first harnessed fire. . . . It is deeply immoral, and we must jettison it with extreme prejudice.”

Nowhere in Andreessen’s piece, as Axios’ Ryan Heath wisely points out, do the words “unintended consequences,” “global warming,” or “climate change” appear.

While people including OpenAI’s Sam Altman publicly call for regulations on AI development, I suspect that many Silicon Valley tech leaders agree with Andreessen. Many believe that AI will bring unprecedented wealth and abundance, and they can’t wait to realize those rewards. But, if Andreessen’s manifesto is any guide, there’s still a dearth of concern for the consequences.


