Who is OpenAI chief scientist Ilya Sutskever, and what does he think about the future of AI and ChatGPT?

OpenAI’s Sam Altman was fired on Friday from his role as leader of the artificial intelligence company. Little is known about what led to the ouster. But one name keeps popping up as the company plots its next moves: Ilya Sutskever.

Altman reportedly was at odds with members of the board over how fast to develop the technology and how to ensure profits. Sutskever, the company’s chief scientist, cofounder, and board member, was on the other side of the “fault lines” from Altman, as tech journalist Kara Swisher put it on X, the platform formerly known as Twitter.

During an all-hands meeting on Friday, Sutskever denied suggestions that the move was a “hostile takeover,” insisting it was meant to protect the company’s mission, the New York Times reported. In an internal memo obtained by Axios, OpenAI’s chief operating officer Brad Lightcap reportedly told team members that the departures of Altman and OpenAI cofounder Greg Brockman were not “in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board.”

Regardless, Altman’s departure has thrust Sutskever even more into the spotlight. But who exactly is he?

Sutskever, born in Soviet Russia and raised in Israel, has been interested in AI from early on. He started out as a graduate student in the Machine Learning Group at the University of Toronto, working with AI pioneer Geoffrey Hinton. Hinton, who won the 2018 Turing Award for his work on deep learning, left Google earlier this year over fears about AI becoming more intelligent than humans.

Sutskever did his postdoc at Stanford University with Andrew Ng, another widely recognized leader in AI. Sutskever also helped build a neural network called AlexNet before joining Google’s Brain Team roughly a decade ago. After spending about three years at the Big Tech company, Sutskever, who speaks Russian, Hebrew, and English, was recruited as a founding member of OpenAI. It seemed like a perfect fit.

“I remember Sam [Altman] referring to Ilya as one of the most respected researchers in the world,” Dalton Caldwell, managing director of investments at Y Combinator, said in an interview for a story about Sutskever with MIT Technology Review that was published just last month. “He thought that Ilya would be able to attract a lot of top AI talent. He even mentioned that Yoshua Bengio, one of the world’s top AI experts, believed that it would be unlikely to find a better candidate than Ilya to be OpenAI’s lead scientist.” OpenAI cofounder Elon Musk has called Sutskever the “linchpin” to OpenAI’s success.

OpenAI released its first GPT large language model in 2018, though the technology didn’t make its way to the broader public until ChatGPT launched last November. Once it got into the masses’ hands for free, tech seemed to be forever changed.

Sutskever has been less of a public face for the company than Altman and others, and he hasn’t done many interviews. When he has spoken to the media, he frequently highlights AI’s profound potential for good and bad, especially as systems approach artificial general intelligence (AGI).

“AI is a great thing. It will solve all the problems that we have today. It will solve unemployment, disease, poverty,” he said in a recent documentary for The Guardian. “But it will also create new problems. The problem of fake news is going to be a million times worse. Cyber attacks will become much more extreme. We will have totally automated AI weapons.”

Along with his work on AI, Sutskever appears to be a prolific tweeter of profound quotes. Among them: “All you need is to be less perplexed,” “the biggest obstacle to seeing clearly is the belief that one already sees clearly,” and “Ego is the enemy of growth.”

Lately, he’s been focused on how to contain “superintelligence.” Sutskever is concerned with the problem of ensuring that future AI systems—those much smarter than humans—will still follow human intent.

Currently, OpenAI and other companies working on large language models use reinforcement learning from human feedback to create what’s known as alignment, but Sutskever has signaled that the method isn’t scalable as these models reach what he calls “superintelligence.” In July, he and Head of Alignment Jan Leike created a superalignment team, dedicating 20% of OpenAI’s computing resources toward solving this problem within the next four years.

“While this is an incredibly ambitious goal and we’re not guaranteed to succeed,” the company said in a blog post announcing the effort, “we are optimistic that a focused, concerted effort can solve this problem.”
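For readers unfamiliar with the technique, here is a toy, heavily simplified sketch of the idea behind reinforcement learning from human feedback: a “reward model,” normally trained on human preference ratings (faked below with a hand-written heuristic), scores a model’s responses, and the policy is nudged toward higher-scoring ones. The prompt, candidate responses, scoring rules, and update loop are all invented for illustration and are not OpenAI’s implementation.

```python
# Toy illustration of the RLHF idea -- not OpenAI's implementation.
# The "reward model" here is a stand-in for a model trained on human ratings.
import math
import random

PROMPT = "Explain photosynthesis to a child."
CANDIDATES = [
    "Plants eat sunlight to make their food.",        # simple, child-friendly
    "Photosynthesis converts light into chemical energy.",  # accurate, drier
    "I don't know.",                                   # unhelpful
]

def reward_model(prompt: str, response: str) -> float:
    """Stand-in for a learned reward model: scores how well a response
    matches (pretend) human preferences, via a hand-written heuristic."""
    score = 0.0
    if "sunlight" in response.lower():
        score += 1.0          # reward child-friendly wording
    if "don't know" in response.lower():
        score -= 1.0          # penalize refusals
    return score

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Policy: a probability distribution over the canned responses, via logits.
logits = [0.0 for _ in CANDIDATES]
LEARNING_RATE = 0.5

for step in range(200):
    probs = softmax(logits)
    # Sample a response from the current policy and score it.
    idx = random.choices(range(len(CANDIDATES)), weights=probs)[0]
    reward = reward_model(PROMPT, CANDIDATES[idx])
    # REINFORCE-style update: shift probability toward high-reward responses.
    for j in range(len(logits)):
        grad = (1.0 if j == idx else 0.0) - probs[j]
        logits[j] += LEARNING_RATE * reward * grad

print("Learned preferences:", dict(zip(CANDIDATES, softmax(logits))))
```

The sticking point Sutskever has flagged is the feedback step itself: human raters supply the preference signal, and that supervision may not keep pace once models become far more capable than their evaluators.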

https://www.fastcompany.com/90985752/ilya-sutskever-openai-chief-scientist?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss
