Former OpenAI leader blasts company for ignoring ‘safety culture’

Not all the departures from OpenAI have been on the best of terms. Jan Leike, co-lead of the company’s superalignment team, left the company Wednesday amid a growing series of departures—and he has taken to X to explain his decision, with some harsh words for his former employer.

Leike said leaving OpenAI was “one of the hardest things I have ever done because we urgently need to figure out how to steer and control AI systems much smarter than us.” However, he said, he chose to depart the company because he has “been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

But over the past years, safety culture and processes have taken a backseat to shiny products.

— Jan Leike (@janleike) May 17, 2024

Leike left OpenAI within hours of the announcement that cofounder and chief scientist Ilya Sutskever was departing. Among Leike’s roles was ensuring the company’s AI systems aligned with human interests. (He had been named as one of Time magazine’s 100 most influential people in AI last year.)

In the lengthy thread, Leike accused OpenAI and its leaders of neglecting “safety culture and processes” in favor of “shiny products.” (Leike’s problems with CEO Sam Altman seemingly predate the board’s attempt to remove Altman from the company last November. While many employees objected to the board’s actions and wrote an open letter threatening to leave the company and work with Altman elsewhere, Leike’s name was not among the signatories.)

“Over the past few months, my team has been sailing against the wind. Sometimes we were struggling for compute [total computational resources] and it was getting harder and harder to get this crucial research done,” he wrote. “Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all humanity.”

Bloomberg, on Friday, reported that OpenAI has dissolved the superalignment team, folding remaining members into broader research efforts at the company. Leike and Sutskever were the lead members of that team.

Fears over AI destroying humanity or the planet might seem like something pulled from Terminator, but Leike and other prominent AI scientists say the concept isn’t as absurd as it sounds. Geoffrey Hinton, one of the most notable names in AI, says there’s a 10% chance AI will wipe out humanity in the next 20 years. Yoshua Bengio, another noted AI scientist, puts those odds at 20%. Leike has been even more fatalistic in the past, placing his p(doom)—the probability of doom, expressed as a percentage from 0 to 100—somewhere between 10 and 90.

“We are long overdue in getting incredibly serious about the implications of AGI [artificial general intelligence],” Leike wrote. “We must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all humanity. OpenAI must become a safety-first AGI company.”

Read the complete thread here.

Altman responded on X, saying he was “super appreciative” of Leike’s contributions to the company’s safety culture. “He’s right,” Altman replied. “We have a lot more to do; we are committed to doing it.” He noted he would follow up soon with a longer post.

i'm super appreciative of @janleike's contributions to openai's alignment research and safety culture, and very sad to see him leave. he's right we have a lot more to do; we are committed to doing it. i'll have a longer post in the next couple of days.

🧡 https://t.co/t2yexKtQEk

— Sam Altman (@sama) May 17, 2024

Leike did not respond to queries asking him to expound further on his thoughts.

Leike’s comments, however, raise questions about the status of the pledge OpenAI made in July 2023 to dedicate 20% of its computational resources to superalignment research as part of its quest to develop responsible AGI.

An AI system is considered “aligned” if it attempts to do what humans ask of it; an “unaligned” system pursues goals outside of human control.

Leike ended his missive with a plea to his former coworkers, saying, “Learn to feel the AGI. Act with the gravitas appropriate for what you’re building. I believe you can ‘ship’ the cultural change that’s needed. I am counting on you. The world is counting on you.”

https://www.fastcompany.com/91127491/former-openai-leader-jan-leike-blasts-company-for-ignoring-safety-culture

May 17, 2024

