Is an AGI breakthrough the cause of the OpenAI drama?

The speculation began almost immediately after Friday’s sudden dismissal of OpenAI CEO Sam Altman. The OpenAI board gave little reason for firing Altman, other than to say he had been less than honest with board members. But honest about what, exactly?

One popular theory on X posits that there’s an unseen factor hanging in the background, animating the players in this ongoing drama: the possibility that OpenAI researchers have progressed further than anyone knew toward artificial general intelligence (AGI), that is, AI capable of performing a wide variety of tasks better than humans can. Developing AGI is, after all, OpenAI’s main goal, per its mission statement.

Reports over the weekend quoted unnamed sources saying the board believed Altman was hurrying new AI products to market without giving the company’s safety teams enough time to install guardrails. It’s possible. There is a school of thought within OpenAI that it’s better to get new research out into the real world in the form of products (like ChatGPT) in order to understand their benefits and risks. Altman is said to have been eager to capitalize on ChatGPT’s success and to launch new products on the momentum of the chatbot phenomenon.

But that still seems like something a CEO and a board could work out, unless the products in question involve AGI. It’s plausible that safety concerns around a forthcoming AGI-level product influenced the board’s action.

Researchers may be closer to AGI than many people think. AI agents are just now becoming “autonomous,” gaining mechanisms for reasoning and task execution. A recent research paper from Google DeepMind, which proposes a framework for tracking progress toward AGI, classifies today’s systems as “competent AI,” meaning they outperform at least 50% of skilled humans at a narrow set of tasks. That’s just one step away from “competent AGI,” in which the AI is better than most humans at most tasks.
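
For readers who want the framework at a glance, here is a minimal sketch of the paper’s performance tiers as a simple Python lookup. The labels and percentile thresholds are paraphrased from the DeepMind paper; the structure below is purely illustrative, not anything the paper itself publishes as code.

```python
# Illustrative paraphrase of the performance tiers in DeepMind's
# "Levels of AGI" framework (Morris et al., 2023). Wording is abbreviated;
# see the paper for the exact definitions.
LEVELS = [
    ("Level 0: No AI",      "no machine capability, e.g., human-in-the-loop tools"),
    ("Level 1: Emerging",   "equal to or somewhat better than an unskilled human"),
    ("Level 2: Competent",  "at least the 50th percentile of skilled adults"),
    ("Level 3: Expert",     "at least the 90th percentile of skilled adults"),
    ("Level 4: Virtuoso",   "at least the 99th percentile of skilled adults"),
    ("Level 5: Superhuman", "outperforms 100% of humans"),
]

# The paper crosses each performance tier with a generality axis
# (Narrow vs. General), so "competent narrow AI" and "competent AGI"
# are adjacent cells: the same performance bar applied to a wider scope.
for name, bar in LEVELS:
    print(f"{name:20} -> {bar}")
```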

But how will we know if or when OpenAI has reached its AGI goal? That’s the tough part, especially when the definition of AGI is somewhat flexible, as Google Brain cofounder Andrew Ng told Fast Company last month. Ng, now CEO and founder of Landing AI, says some people are tempted to adjust their definition of AGI to make it easier to meet in a technical sense.

OpenAI has done exactly that. Chief scientist Ilya Sutskever has said that meeting the bar for AGI requires a system that can be taught to do anything a human can be taught to do. But OpenAI has used a less demanding definition in the past, defining AGI as systems that surpass human capabilities at a majority of economically valuable tasks. That’s the definition that appears on the “Structure” page of the OpenAI website today, and that page also makes clear that it is the OpenAI board that will decide what is and is not AGI.

Then there’s the problem of transparency. Developer Sophia Huang points out that the release of GPT-2 was the last time OpenAI was fully open and transparent about a new model: open enough that developers could read the research paper and then recreate the model. But that was in February 2019, nearly five years ago.

That was the same year that Altman led the creation of a for-profit (or “capped-profit”) entity within OpenAI’s structure. In general, for-profit AI companies have a better chance of attracting investment money if they don’t give away their IP on GitHub. Since 2019, OpenAI has grown more competitive and more secretive about the details of its models.

If an OpenAI system has achieved AGI, or if its researchers can see a path to it, the implications are too big to be handled by one company. Most would agree that understanding and mitigating the risks of AGI is an effort that should involve governments, researchers, academia, and others around the world.

If and when AGI arrives and spreads, there’s a high likelihood that AI systems will begin eliminating jobs. If the Industrial Revolution is any guide, AI will surely create some new jobs, but it will likely eliminate many more, which means massive labor displacement and social upheaval. There’s also a real risk that the benefits of the new technology will be distributed unevenly, further concentrating power in the hands of the wealthy and powerful.

Nobody knows exactly why Altman was ousted from OpenAI. He and former board chairman Greg Brockman are said to be headed to new gigs at Microsoft (the clear winner in the whole situation). Meanwhile, the majority of OpenAI’s roughly 700 employees are now threatening to quit unless the existing board members resign and reinstate Altman and Brockman. This is the most dramatic news cycle we’ve seen around a tech company in years, maybe ever. AGI safety might be at the center of it.

https://www.fastcompany.com/90986053/agi-openai-drama-sam-altman?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

November 20, 2023

