Former OpenAI leader blasts company for ignoring ‘safety culture’

Not all the departures from OpenAI have been on the best of terms. Jan Leike, a coleader of the company’s superalignment team, left the company Wednesday amid a growing series of departures, and he has taken to X to explain his decision, with some harsh words for his former employer.

Leike said leaving OpenAI was “one of the hardest things I have ever done because we urgently need to figure out how to steer and control AI systems much smarter than us.” However, he said, he chose to depart the company because he has “been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

But over the past years, safety culture and processes have taken a backseat to shiny products.

— Jan Leike (@janleike) May 17, 2024

Leike left OpenAI within hours of the announcement that cofounder and chief scientist Ilya Sutskever was departing. Among Leike’s roles was ensuring the company’s AI systems aligned with human interests. (He had been named as one of Time magazine’s 100 most influential people in AI last year.)

In the lengthy thread, Leike accused OpenAI and its leaders of neglecting “safety culture and processes” in favor of “shiny products.” (Leike’s problems with CEO Sam Altman seemingly go back to before the board’s attempt to remove him from the company last November. While many employees objected to the board’s actions and wrote an open letter threatening to leave the company and go work with Altman elsewhere, Leike’s name was not among the signatories.)

“Over the past few months, my team has been sailing against the wind. Sometimes we were struggling for compute [total computational resources] and it was getting harder and harder to get this crucial research done,” he wrote. “Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all humanity.”

Bloomberg, on Friday, reported that OpenAI has dissolved the superalignment team, folding remaining members into broader research efforts at the company. Leike and Sutskever were the lead members of that team.

Fears over AI destroying humanity or the planet might seem like something pulled from Terminator, but Leike and other prominent AI scientists say the concept isn’t as absurd as it seems. Geoffrey Hinton, one of the most notable names in AI, says there’s a 10% chance AI will wipe out humanity in the next 20 years. Yoshua Bengio, another noted AI scientist, puts those odds at 20%. Leike has been even more pessimistic in the past, putting his p(doom) score (the probability of doom, on a scale from zero to 100) between 10 and 90.

“We are long overdue in getting incredibly serious about the implications of AGI [artificial general intelligence],” Leike wrote. “We must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all humanity. OpenAI must become a safety-first AGI company.”

Read the complete thread here.

Altman responded on X, saying he was “super appreciative” of Leike’s contributions to the company’s safety culture. “He’s right,” Altman replied. “We have a lot more to do; we are committed to doing it.” He added that he would follow up soon with a longer post.

i'm super appreciative of @janleike's contributions to openai's alignment research and safety culture, and very sad to see him leave. he's right we have a lot more to do; we are committed to doing it. i'll have a longer post in the next couple of days.

🧡 https://t.co/t2yexKtQEk

— Sam Altman (@sama) May 17, 2024

Leike did not respond to requests for further comment on his thoughts.

Leike’s comments, however, raise questions about the status of the pledge OpenAI made in July of 2023 to dedicate 20% of its computational resources toward the effort to superalign its AI models as part of its quest to develop responsible AGI.

An AI system is considered “aligned” if it attempts to do what humans intend it to do. An “unaligned” AI pursues goals outside of human control.

Leike ended his missive with a plea to his former coworkers, saying, “Learn to feel the AGI. Act with the gravitas appropriate for what you’re building. I believe you can ‘ship’ the cultural change that’s needed. I am counting on you. The world is counting on you.”

https://www.fastcompany.com/91127491/former-openai-leader-jan-leike-blasts-company-for-ignoring-safety-culture?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 23d | May 17, 2024, 21:40:08


