Is an AGI breakthrough the cause of the OpenAI drama?

The speculation began almost immediately after Friday’s sudden dismissal of OpenAI CEO Sam Altman. The OpenAI board gave little reason for firing Altman, other than to say he had been less than honest with board members. But honest about what, exactly?

One popular theory on X posits that there’s an unseen factor hanging in the background, animating the players in this ongoing drama: the possibility that OpenAI researchers have progressed further than anyone knew toward artificial general intelligence (AGI)—that is, AI capable of carrying out a wide variety of tasks better than humans can perform them. Developing AGI, after all, is OpenAI’s main goal, as per its mission statement.

Reports over the weekend quoted unnamed sources saying the board believed Altman was hurrying to market with new AI products without giving the company’s safety teams enough time to install guardrails. It’s possible. There is a school of thought within OpenAI that it’s better to get new research out into the real world in the form of products (like ChatGPT) in order to gain an understanding of their benefits and risks. Altman is said to have been eager to capitalize on the success of ChatGPT, to launch new products using the momentum of the chatbot phenom.

But that still seems like something a CEO and a board could work out between themselves, unless the products in question involve AGI. It’s plausible that safety concerns around an AGI-level product, whether it arrives sooner or later, could have influenced the board’s action.

Researchers may be closer to AGI than many people think. AI agents are just now becoming “autonomous,” gaining reasoning and task-execution mechanisms. A recent research paper from Google DeepMind, which offers a framework for tracking research companies’ progress toward AGI, says current systems qualify as “competent AI,” meaning they perform a narrow set of tasks better than 50% of humans. That’s just one step away from “competent AGI,” in which the AI is better than most humans at most tasks.
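To make that taxonomy concrete, here is a minimal sketch of how the paper’s levels could be encoded in Python. The level names follow the DeepMind paper; the percentile thresholds are paraphrased from it, and the SystemProfile and classify helpers are hypothetical illustrations, not DeepMind’s code.

    from dataclasses import dataclass

    # Levels from the DeepMind framework, ordered highest to lowest.
    # Each entry: (level name, minimum percentile of skilled humans outperformed).
    LEVELS = [
        ("Superhuman", 100),  # outperforms all humans
        ("Virtuoso", 99),
        ("Expert", 90),
        ("Competent", 50),    # the tier the article refers to
        ("Emerging", 0),      # roughly on par with an unskilled human
    ]

    @dataclass
    class SystemProfile:
        percentile: float  # share of skilled humans the system outperforms
        general: bool      # True if that holds across a wide range of tasks

    def classify(profile: SystemProfile) -> str:
        """Map a performance profile to a level label, narrow or general."""
        for name, threshold in LEVELS:
            if profile.percentile >= threshold:
                scope = "AGI" if profile.general else "narrow AI"
                return f"{name} {scope}"
        return "No AI"

    print(classify(SystemProfile(percentile=55, general=False)))  # Competent narrow AI
    print(classify(SystemProfile(percentile=55, general=True)))   # Competent AGI

The point the framework makes, on this reading, is that the jump from “competent narrow AI” to “competent AGI” is a change in generality, not raw capability: the same performance bar, held across most tasks rather than a narrow set.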

But how will we know if or when OpenAI has reached its AGI goal? That’s the tough part, especially when the definition of AGI is somewhat flexible, as Google Brain cofounder Andrew Ng told Fast Company last month. Ng, now CEO and founder of Landing AI, says some people are tempted to adjust their definition of AGI to make it easier to meet in a technical sense.

OpenAI has done exactly that. Chief scientist Ilya Sutskever has said that meeting the bar for AGI requires a system that can be taught to do anything a human can be taught to do. But OpenAI has used a less demanding definition in the past, defining AGI as systems that surpass human capabilities in a majority of economically valuable tasks. That’s the definition that appears on the “Structure” page of the OpenAI website today. The same page makes clear that it is the OpenAI board that will decide what AGI is and is not.

Then there’s the problem of transparency. Developer Sophia Huang points out that GPT-2 was the last model OpenAI released with real transparency: developers could read the research paper and then recreate the model. But that was four years ago, in February 2019.

That was the same year that Altman led the creation of a for-profit (a.k.a. “capped profit”) organization within OpenAI’s structure. In a general sense, for-profit AI companies have a better chance of attracting investment money if they don’t give away their IP on GitHub. Since 2019, OpenAI has grown more competitive, and more secretive about the details of its models.

If an OpenAI system has achieved AGI, or if its researchers can see a path to it, the implications are too big to be handled by one company. Most would agree that understanding and mitigating the risks of AGI is an effort that should involve governments, researchers, academia, and others around the world.

If and when AGI arrives and spreads, there’s a high likelihood that AI systems will begin eliminating jobs. If the Industrial Revolution is any guide, AI will surely create some new jobs but will likely eliminate many, many more. That means massive labor displacement and social upheaval. There’s also a real risk that the benefits of the new technology will be distributed unevenly, further concentrating power in the hands of the wealthy and powerful.

Nobody knows exactly why Altman was ousted from OpenAI. He and ex-board chairman Greg Brockman are said to be headed to new gigs at Microsoft (the clear winner in the whole situation). Meanwhile, the majority of OpenAI’s 700 employees are threatening to quit unless the existing board members resign and Altman and Brockman are reinstated. This is the most dramatic news cycle we’ve seen around a tech company in years, maybe ever. AGI safety might be at the center of it.

https://www.fastcompany.com/90986053/agi-openai-drama-sam-altman

Nov 20, 2023

