How OpenAI’s Jerry Tworek found a new way forward for large language models

The impressive intelligence gains in OpenAI’s models over time have mainly come from training them on progressively more data, for longer periods, and with massive computing power. But in 2024, fresh training data became scarce and further scaling of computing power became very expensive, so AI labs have sought new ways to keep pushing models toward artificial general intelligence (AGI), or AI that’s generally smarter than human beings.

“I think that the scaling hypothesis landscape is much more multidimensional and we can scale multiple different things,” says OpenAI researcher Jerry Tworek, whose research in recent years has focused on AI models that can “think” about different approaches to solving complex problems, rather than relying mostly on what they learned during pre-training to generate an answer.

Tworek led the effort at OpenAI to develop the first major model to prove that the new approach works—“o1.” At the end of August, OpenAI’s “o1-preview” model rose to the top of the LiveBench leaderboard, which ranks the intelligence of large frontier models. The o1 model takes longer to return answers because it’s designed to emphasize complex reasoning and accuracy. Access to the model also costs considerably more than access to OpenAI’s earlier models.

Large language models are loosely modeled on the neurons of the human brain, but Tworek and his team wanted to draw even more inspiration from it for the o1 models—in this case, humans’ approach to problem solving. “What we managed to train our models to do is this very natural way of reasoning,” Tworek says. “It looks a little bit more human. It is the model trying things in a very fluid, intelligent fashion.”

The model, for example, might play out one problem solving strategy to see if it leads to a solution, and switch to another approach if it doesn’t. Or, if it tries a particular tactic or branch in its reasoning that doesn’t bear fruit, it might backtrack and try another way forward. 

“There’s that pondering and deliberation and a lot of exploration when solving a problem,” he says. “That’s something that the [earlier] models were probably doing a little bit, but not that much, before and we really tried to double down on that.”
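To make that behavior concrete, here is a minimal, purely illustrative Python sketch of the kind of try-evaluate-backtrack search described above. The `solve` function, the toy “double”/“add one” moves, and the depth limit are hypothetical stand-ins for illustration, not OpenAI’s actual training or inference method; the point of the analogy is the control flow, in which a branch that doesn’t bear fruit is abandoned and the search resumes from an earlier state.

```python
# A minimal, illustrative sketch of "try a strategy, check the result, backtrack if it fails."
# This is only an analogy for the reasoning behavior described above,
# not OpenAI's actual training or inference method.

def solve(state, target, moves, depth=0, max_depth=5):
    """Search for a sequence of moves from `state` to `target`, backtracking on dead ends."""
    if state == target:
        return []                      # solved: no further moves needed
    if depth == max_depth:
        return None                    # this branch didn't bear fruit
    for name, move in moves:
        path = solve(move(state), target, moves, depth + 1, max_depth)
        if path is not None:           # this line of reasoning worked out
            return [name] + path
        # Otherwise backtrack and try a different approach from the same state.
    return None

# Toy usage: reach 14 starting from 3 using "double" and "add one" moves.
moves = [("double", lambda x: x * 2), ("add one", lambda x: x + 1)]
print(solve(3, 14, moves))             # prints one sequence of moves that reaches 14
```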

Tworek’s contribution to the evolution of OpenAI’s models is considerable, and growing. He has been there through the company’s most important years, having arrived almost six years ago after spending a few years developing quantitative investment strategies at a hedge fund in Amsterdam.

“I joined OpenAI when it was still a nonprofit,” Tworek says. “It was a small research lab, like a few cool people in San Francisco.” He was struck, however, by the young company’s big ambitions. “I was living in Europe before and you don’t often meet people who will say ‘Oh, Jerry, we are going to build AGI, are you in or not?’”

And OpenAI had good reason to be ambitious. Tworek arrived just as the startup was finishing up GPT-2, the first model that showed that supersizing training data and computing power could yield surprising intelligence gains. The company’s goal of building AGI was beginning to seem possible.

Six years later, some AI researchers, including OpenAI cofounder and former chief scientist Ilya Sutskever, say the “supersizing” approach isn’t yielding the intelligence returns it once did. That’s why o1’s new approach of scaling computing power at inference time is so important. It may open a new avenue that lets researchers maintain their momentum toward AGI.

This story is part of AI 20, our monthlong series of profiles spotlighting the most interesting technologists, entrepreneurs, corporate leaders, and creative thinkers shaping the world of artificial intelligence.

https://www.fastcompany.com/91246222/openai-researcher-jerry-tworek-human-brain-o1-models?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss
