Is an AGI breakthrough the cause of the OpenAI drama?

The speculation began almost immediately after Friday’s sudden dismissal of OpenAI CEO Sam Altman. The OpenAI board gave little reason for firing Altman, other than to say he had been less than honest with board members. But honest about what, exactly?

One popular theory on X posits that there’s an unseen factor hanging in the background, animating the players in this ongoing drama: the possibility that OpenAI researchers have progressed further than anyone knew toward artificial general intelligence (AGI)—that is, AI capable of carrying out a wide variety of tasks better than humans can perform them. Developing AGI, after all, is OpenAI’s main goal, as per its mission statement.

Reports over the weekend quoted unnamed sources saying the board believed Altman was hurrying to market with new AI products without giving the company’s safety teams enough time to install guardrails. It’s possible. There is a school of thought within OpenAI that it’s better to get new research out into the real world in the form of products (like ChatGPT) in order to gain an understanding of their benefits and risks. Altman is said to have been eager to capitalize on the success of ChatGPT, to launch new products using the momentum of the chatbot phenom.

But that still seems like something that could be worked out between a CEO and a board—unless the products in question involve AGI. It’s plausible that safety considerations around an AGI product, whether it’s coming sooner or later, could have influenced the board’s decision.

Researchers may be closer to AGI than many people think. AI agents are just now becoming “autonomous” and gaining reasoning and task-execution mechanisms. A recent research paper from Google DeepMind, which offers a framework for tracking research companies’ progress toward AGI, says current systems qualify as “competent AI,” meaning they outperform at least 50% of skilled humans at a narrow set of tasks. That’s just one step away from “competent AGI,” in which the AI is better than most humans at most tasks.

But how will we know if or when OpenAI has reached its AGI goal? That’s the tough part, especially when the definition of AGI is somewhat flexible, as Google Brain cofounder Andrew Ng told Fast Company last month. Ng, now CEO and founder of Landing AI, says some people are tempted to adjust their definition of AGI to make it easier to meet in a technical sense.

OpenAI has done exactly that. Chief scientist Ilya Sutskever has said that meeting the bar for AGI requires a system that can be taught to do anything a human can be taught to do. But OpenAI has used a less-demanding definition in the past—defining AGI as systems that surpass human capabilities in a majority of economically valuable tasks. That’s the definition that appears on the “Structure” page of the OpenAI website today. The page also makes clear that it is the OpenAI board that will decide what counts as AGI and what does not.

Then there’s the problem of transparency. Developer Sophia Huang points out that the release of GPT-2 was the last time OpenAI was open and transparent about its latest model—enough so that developers could read the research paper and then recreate the model. But that was four years ago, February 2019.

That was the same year that Altman led the creation of a for-profit (or “capped-profit”) organization within OpenAI’s structure. In a general sense, for-profit AI companies have a better chance of attracting investment money if they don’t give away their IP on GitHub. Since 2019, OpenAI has grown more competitive, and more secretive about the details of its models.

If an OpenAI system has achieved AGI, or if its researchers can see a path to it, the implications are too big to be handled by one company. Most would agree that understanding and mitigating the risks of AGI is an effort that should involve governments, researchers, academia, and others around the world.

If and when AGI arrives and spreads, there’s a high likelihood that AI systems will begin eliminating jobs. If the Industrial Revolution is any guide, while AI will surely create some new jobs, it’ll likely eliminate many, many more. This means massive labor displacement and social upheaval. There’s also a real risk that the benefits of the new technology will be distributed unevenly—that it will further concentrate power in the hands of the wealthy and powerful.

Nobody knows exactly why Altman was ousted from OpenAI. He and ex-board chairman Greg Brockman are said to be headed to new gigs at Microsoft (the clear winner in the whole situation). Meanwhile, the majority of OpenAI’s roughly 700 employees are now threatening to quit unless the existing board members resign and Altman and Brockman are reinstated. This is the most dramatic news cycle we’ve seen around a tech company in years, maybe ever. AGI safety might be at the center of it.

https://www.fastcompany.com/90986053/agi-openai-drama-sam-altman?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Published Nov 20, 2023, 20:50:04

