Dubbing is terrible. Can AI fix it?

Just five years ago, when the movie Parasite won a Golden Globe for best foreign language film, Bong Joon Ho, its South Korean director, said in his acceptance speech that American audiences needed to get over their issue with the “one-inch-tall barrier of subtitles.” His point was that there’s a whole world of great cinema beyond English-language films, and we shouldn’t let subtitles be a deal-breaker. The alternative is audio dubbing, the technique that lays English dialogue over the moving lips of an actor speaking in another language.

Americans remain hesitant about dubbed movies. In a 2021 survey, 76% of Americans said they preferred subtitling over dubbing. Compare that to European countries such as France, Italy, and Germany, where the majority of moviegoers prefer dubbing. Even younger generations in the U.S. are leaning toward subtitles: according to a 2024 Preply survey, 96% of Gen Z Americans prefer subtitles to dubbing, compared to just 75% of baby boomers.

But now, AI could change all that. Amazon just made a big bet on dubbing, introducing AI-driven audio translation to some of its Prime Video entertainment. It’s still a pilot, so it’s too early to say how successful the AI audio-translation program will be. Meanwhile, video startups including ElevenLabs and InVideo are also dipping their toes into dubbing. Yet the question of quality remains: Will these efforts make dubbing more lifelike and artful, or will they simply make it more common?

The AI dubbing boom

Amazon is slowly introducing AI dubbing to its Prime Video content, having started with just 12 licensed movies and series, including the documentary El Cid: La Leyenda and the drama Long Lost, translating between English and Latin American Spanish. These translations aren’t performed exclusively by AI; Amazon still employs “localization professionals” for quality control.

From the outside, it looks like Amazon is employing AI to up the quantity of dubs, but not necessarily the quality. Amazon declined to comment, but pointed to a public blog post, which provides some clues. The blog notes that Amazon is only creating new dubs, not modifying preexisting ones. In his statement, Prime Video VP of technology Raf Soltanovich emphasized making international titles more “accessible and enjoyable.” 

Reactions to Amazon’s new tech have been mixed. Futurism called it an “assault on cinema.” On Saturday Night Live, Michael Che joked that the tool needed to translate Sylvester Stallone. Lifehacker’s Jake Peterson tried the tool himself. While Peterson maintained that there was “no way [he] would genuinely enjoy watching an entire movie or series with an AI dub,” he admitted that some of the tech was impressive, like when the AI muffled its own voice for the marshmallow-stuffing chubby bunny challenge. 

But Amazon isn’t the only company investing in AI dubbing tools. ElevenLabs, most known for its AI voice generator, has its own dub software. So do a handful of other startups, including InVideo, Dubbing AI, and Dubverse. But all these tools—including Amazon’s—are still nascent. Even if their voices are monotone and robotic now, that could change in the coming months. 

Will dubbed media ever be watchable?

In the world of anime, there’s a common saying: “Subs not dubs.” The argument goes that an actor’s (or voice actor’s) performance is tied to their intonation and speaking style. Severing the voice from the body, and inserting a whole new voice in a new language, destroys the artistry. That’s not a problem for Western European audiences, for whom dubbing is often more common than subtitles. But for American viewers, it can still be disconcerting.

The expectation is that AI can help here. Audio generators can replicate the sound of another actor’s voice. In some ways, that’s scary: Much of the 2023 SAG strike revolved around protections against AI duplication. But in the dubbing space, it offers promise. The viewer could hear the performance in the voice of the original actor, but in their own language. AI tools have also been able to detect emotion in a voice; they could replicate that emotion in the duplicated audio.

We’ve seen early-stage versions of this kind of voice-altering AI tool. Respeecher lets audio engineers tinker with accents and fix pronunciations. That’s the tool that caused a ruckus for The Brutalist and Emilia Pérez during awards season. But, at scale, this kind of audio manipulation and regeneration could have seismic industry effects. Voice actors would be out of work.

In their current form, subtitles still trump dubbing. But, with AI, that could all change sooner than we think. 


https://www.fastcompany.com/91294143/dubbing-is-terrible-can-ai-fix-it

Created: 11. 3. 2025 22:10:03
