From training dogs to intelligent machines: Here’s how reinforcement learning is teaching AI

Understanding intelligence and creating intelligent machines are grand scientific challenges of our times. The ability to learn from experience is a cornerstone of intelligence for machines and living beings alike.

In a remarkably prescient 1948 report, Alan Turing—the father of modern computer science—proposed the construction of machines that display intelligent behavior. He also discussed the “education” of such machines “by means of rewards and punishments.”

Turing’s ideas ultimately led to the development of reinforcement learning, a branch of artificial intelligence. Reinforcement learning designs intelligent agents by training them to maximize rewards as they interact with their environment.

As a machine learning researcher, I find it fitting that reinforcement learning pioneers Andrew Barto and Richard Sutton were awarded the 2024 ACM Turing Award.

What is reinforcement learning?

Animal trainers know that animal behavior can be influenced by rewarding desirable behaviors. A dog trainer gives the dog a treat when it does a trick correctly. This reinforces the behavior, and the dog is more likely to do the trick correctly the next time. Reinforcement learning borrowed this insight from animal psychology.

But reinforcement learning is about training computational agents, not animals. The agent can be a software agent like a chess-playing program. But the agent can also be an embodied entity like a robot learning to do household chores. Similarly, the environment of an agent can be virtual, like the chessboard or the simulated world of a video game. But it can also be a house where a robot is working.

Just like animals, an agent can perceive aspects of its environment and take actions. A chess-playing agent can access the chessboard configuration and make moves. A robot can sense its surroundings with cameras and microphones. It can use its motors to move about in the physical world.

Agents also have goals that their human designers program into them. A chess-playing agent’s goal is to win the game. A robot’s goal might be to assist its human owner with household chores.

The reinforcement learning problem in AI is how to design agents that achieve their goals by perceiving and acting in their environments. Reinforcement learning makes a bold claim: All goals can be achieved by designing a numerical signal, called the reward, and having the agent maximize the total sum of rewards it receives.

Researchers do not know if this claim is actually true, because of the wide variety of possible goals. Therefore, it is often referred to as the reward hypothesis.

Sometimes it is easy to pick a reward signal corresponding to a goal. For a chess-playing agent, the reward can be +1 for a win, 0 for a draw, and -1 for a loss. It is less clear how to design a reward signal for a helpful household robotic assistant. Nevertheless, the list of applications where reinforcement learning researchers have been able to design good reward signals is growing.
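For readers who want to see what this looks like in practice, here is a minimal sketch of the agent-environment loop that reinforcement learning formalizes. It is not taken from Barto and Sutton's work directly: the tiny "game," its states, and its reward values are invented purely for illustration, and the learning rule shown (tabular Q-learning) is just one standard algorithm from the field. The agent repeatedly observes a state, chooses an action, receives a numerical reward, and updates its value estimates so that actions leading to higher total reward become more likely.

```python
import random

# A toy "game" with made-up states, actions, and rewards, echoing the
# chess-style reward of +1 for a win, 0 for a draw, and -1 for a loss.
# (state, action) -> (reward, next_state, episode_over)
GAME = {
    (0, 0): (0.0, 2, False),   # weak opening move: leads to a bad position
    (0, 1): (0.0, 1, False),   # strong opening move: leads to a good position
    (1, 0): (1.0, 0, True),    # from the good position, this move wins (+1)
    (1, 1): (0.0, 0, True),    # ... or the game ends in a draw (0)
    (2, 0): (0.0, 0, True),    # from the bad position, a draw is the best outcome
    (2, 1): (-1.0, 0, True),   # ... and this move loses (-1)
}

# Q[state][action] is the agent's estimate of the total reward that follows
# from taking that action in that state.
Q = [[0.0, 0.0] for _ in range(3)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

for episode in range(1000):
    state, done = 0, False
    while not done:
        # Mostly take the action that currently looks best; sometimes explore.
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = max((0, 1), key=lambda a: Q[state][a])
        reward, next_state, done = GAME[(state, action)]
        # Q-learning update: nudge the estimate toward the reward received
        # plus the discounted value of the best action in the next state.
        target = reward + (0.0 if done else gamma * max(Q[next_state]))
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state

# After training, the agent values the strong opening (Q[0][1]) and the
# winning move from the good position (Q[1][0]) most highly.
print(Q)
```

Real systems such as AlphaGo replace this small table of values with deep neural networks and face vastly larger state spaces, but the underlying loop of acting, receiving rewards, and updating estimates is the same.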

A big success of reinforcement learning was in the board game Go. Researchers thought that Go was much harder than chess for machines to master. The company DeepMind, now Google DeepMind, used reinforcement learning to create AlphaGo, which defeated top Go player Lee Sedol in a five-game match in 2016.

A more recent example is the use of reinforcement learning to make chatbots such as ChatGPT more helpful. Reinforcement learning is also being used to improve the reasoning capabilities of chatbots.

Reinforcement learning’s origins

However, none of these successes could have been foreseen in the 1980s. That is when Barto and his then-PhD student Sutton proposed reinforcement learning as a general problem-solving framework. They drew inspiration not only from animal psychology but also from control theory, which uses feedback to influence a system's behavior, and from optimization, a branch of mathematics that studies how to select the best choice among a range of available options. They provided the research community with mathematical foundations that have stood the test of time. They also created algorithms that have now become standard tools in the field.

It is a rare advantage for a field when pioneers take the time to write a textbook. Shining examples like The Nature of the Chemical Bond by Linus Pauling and The Art of Computer Programming by Donald E. Knuth are memorable because they are few and far between. Sutton and Barto’s Reinforcement Learning: An Introduction was first published in 1998. A second edition came out in 2018. Their book has influenced a generation of researchers and has been cited more than 75,000 times.

Reinforcement learning has also had an unexpected impact on neuroscience. The neurotransmitter dopamine plays a key role in reward-driven behaviors in humans and animals. Researchers have used specific algorithms developed in reinforcement learning to explain experimental findings about the dopamine system in people and animals.

Barto and Sutton’s foundational work, vision and advocacy have helped reinforcement learning grow. Their work has inspired a large body of research, made an impact on real-world applications, and attracted huge investments by tech companies. Reinforcement learning researchers, I’m sure, will continue to see further ahead by standing on their shoulders.

Ambuj Tewari is a professor of statistics at the University of Michigan.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

