OpenAI’s Sam Altman was fired on Friday from his role as leader of the artificial intelligence company. Little is known about what led to the ouster. But one name keeps popping up as the company plots its next moves: Ilya Sutskever.
Altman was reportedly at odds with members of the board over how quickly to develop the technology and how to ensure profits. Sutskever, the company’s chief scientist, cofounder, and board member, was on the other side of the “fault lines” from Altman, as tech journalist Kara Swisher put it on X, the platform formerly known as Twitter.
During an all-hands meeting on Friday, Sutskever reportedly denied suggestions that it was a “hostile takeover,” insisting it was a move to protect the company’s mission, the New York Times reported. In an internal memo, obtained by Axios, OpenAI’s chief operating officer Brad Lightcap reportedly told team members that the departures of Altman and OpenAI cofounder Greg Brockman were not “in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board.”
Regardless, Altman’s departure has thrust Sutskever even more into the spotlight. But who exactly is he?
Sutskever, born in Soviet Russia and raised in Israel, has worked on AI since long before the current boom. He started off as a student in the Machine Learning Group at the University of Toronto under AI pioneer Geoffrey Hinton. Hinton, who won the 2018 Turing Award for his work on deep learning, left Google earlier this year over fears about AI becoming more intelligent than humans.
Sutskever did his postdoc at Stanford University with Andrew Ng, another widely recognized leader in AI. Sutskever then helped build a neural network called AlexNet before joining Google’s Brain Team roughly a decade ago. After spending about three years at the Big Tech company, Sutskever, who speaks Russian, Hebrew, and English, was recruited as a founding member of OpenAI. It seemed like a perfect fit.
“I remember Sam [Altman] referring to Ilya as one of the most respected researchers in the world,” Dalton Caldwell, managing director of investments at Y Combinator, said in an interview for an MIT Technology Review profile of Sutskever published just last month. “He thought that Ilya would be able to attract a lot of top AI talent. He even mentioned that Yoshua Bengio, one of the world’s top AI experts, believed that it would be unlikely to find a better candidate than Ilya to be OpenAI’s lead scientist.” OpenAI cofounder Elon Musk has called Sutskever the “linchpin” to OpenAI’s success.
OpenAI launched its first GPT large language model in 2018, though the technology didn’t make its way to the broader public until ChatGPT’s debut last November. Once it was in the masses’ hands for free, tech seemed to be forever changed.
Sutskever has been less of a public face for the company than Altman and others, and he hasn’t done many interviews. When he has spoken to the media, he frequently highlights AI’s profound potential for good and bad, especially as systems approach artificial general intelligence (AGI).
“AI is a great thing. It will solve all the problems that we have today. It will solve unemployment, disease, poverty,” he said in a recent documentary for The Guardian. “But it will also create new problems. The problem of fake news is going to be a million times worse. Cyber attacks will become much more extreme. We will have totally automated AI weapons.”
Along with his work on AI, Sutskever appears to be a prolific tweeter of profound aphorisms. Among them: “All you need is to be less perplexed,” “the biggest obstacle to seeing clearly is the belief that one already sees clearly,” and “Ego is the enemy of growth.”
Lately, he’s been focused on how to contain “superintelligence.” Sutskever is concerned with the problem of ensuring that future AI systems—those much smarter than humans—will still follow human intent.
Currently, OpenAI and other companies working on large language models use reinforcement learning from human feedback (RLHF) to achieve what’s known as alignment, but Sutskever has signaled that the method won’t scale as these models approach what he calls “superintelligence.” In July, he and Head of Alignment Jan Leike created a superalignment team, dedicating 20% of OpenAI’s computing resources toward solving this problem within the next four years.
“While this is an incredibly ambitious goal and we’re not guaranteed to succeed,” the company said in a blog post announcing the effort, “we are optimistic that a focused, concerted effort can solve this problem.”
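For readers curious what the current alignment method actually involves, here is a minimal conceptual sketch of the RLHF idea in Python. Everything in it (the function names, the toy scoring) is an illustrative placeholder, not OpenAI’s actual implementation: the real technique trains a neural reward model on human comparisons and then optimizes the language model against it.

```python
# Conceptual sketch of reinforcement learning from human feedback (RLHF).
# All names here are illustrative placeholders, not OpenAI's real code.

import random

def human_preference(a: str, b: str) -> str:
    """Stand-in for a human rater; assume raters usually pick the more helpful reply."""
    return a if random.random() < 0.9 else b

def train_reward_model(comparisons: list[tuple[str, str, str]]) -> dict[str, float]:
    """Turn (a, b, preferred) comparisons into a crude per-response reward score."""
    scores: dict[str, float] = {}
    for a, b, preferred in comparisons:
        scores.setdefault(a, 0.0)
        scores.setdefault(b, 0.0)
        scores[preferred] += 1.0  # responses humans prefer accumulate reward
    return scores

def policy_update(candidates: list[str], scores: dict[str, float]) -> str:
    """The policy step reduced to its essence: favor the highest-reward output."""
    return max(candidates, key=lambda r: scores.get(r, 0.0))

if __name__ == "__main__":
    helpful, evasive = "helpful answer", "evasive answer"
    comparisons = [(helpful, evasive, human_preference(helpful, evasive))
                   for _ in range(100)]
    scores = train_reward_model(comparisons)
    print("model now prefers:", policy_update([helpful, evasive], scores))
```

Even this toy makes the scalability worry visible: every reward signal originates with a human judgment, a supply that breaks down once a system’s outputs exceed what human raters can reliably evaluate, which is the gap the superalignment effort aims to close.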