For some in Hollywood, as Silicon Valley's AI models have become impossible to ignore, it's better to have a seat at the table as these new technologies emerge than to sit back and let the tech titans take full control.
This, at least, is the impetus behind Asteria, the generative AI studio cofounded by the filmmaking couple of Bryn Mooser and Natasha Lyonne, who promote their company as using “ethical” AI. Lyonne has justified her embrace of the technology by explaining: “It’s better to get your hands dirty than pretend it’s not happening.” The company has faced some backlash, both because Lyonne (tastelessly, her detractors would argue) claimed the late David Lynch had endorsed AI, and because its flagship model is proprietary—meaning we have no way to verify that it is indeed trained only on licensed material (as Lyonne and co. say it is).
Meanwhile, James Cameron is on Stability AI's board, and has expressed his hope for using AI to make blockbuster filmmaking cheaper. Jason Blum's Blumhouse Productions has partnered with Meta for AI testing and chatbots. Lionsgate signed a deal with Runway, an AI startup valued at $3 billion, to let the company train its model on the studio's 20,000+ films and TV series; Runway also signed a deal with AMC.
This embrace of AI, though, puts the James Camerons and Natasha Lyonnes of the world at odds with industry peers who are opting to push back on these would-be robot overlords before they take over.
Studios are understandably wary of copyright infringement, especially since generative AI models trained on publicly available data can reproduce intellectual property—for example, creating an image of Elsa from Frozen upon request.
That concern is at the heart of several ongoing lawsuits against AI companies, including one from Disney and Universal against Midjourney, which includes dozens of side-by-side pictures comparing the studios’ own IP to Midjourney’s outputs.
Meanwhile, last year Disney formed an Office of Technology Enablement to oversee how the company can "responsibly" use AI in postproduction and VFX, among other initiatives. It demonstrates the balance Hollywood is trying to strike: protecting what it owns while ensuring it is not left behind by these technological developments.
Tensions are running high. Are you on board, or standing in the way? Nuanced decisions being made by studios, producers, investors, and talent right now will determine whether Hollywood will look recognizable in a decade.
Similar tensions have sprung up before in Hollywood. Past introductions of television, cable, streaming, and more have sent shivers down the spines of studios and their labor forces alike. In each instance, “when you are fighting for the right to continue running your business, it’s understandable that you would leave no stone unturned,” says Brandon Katz, director of insights and content strategy at Greenlight Analytics.
The issue is finding the right approach, as the use of AI in the creative industries remains deeply contentious and presents a different sort of multitiered threat. As Katz understands it, the studios are "trying to preserve as much as they can before the machine takeover," as it seems clear that this technology is "inevitable." That means everything from licensing content to train AI, to cutting production costs by streamlining visual effects, to dubbing and subtitling more efficiently for global markets, to flirting with full generative production. It is that final one, of course, that tends to draw the most scrutiny.
“We are enduring a painful contraction of the entertainment industry,” Katz says, “because legacy media doesn’t have the same money to play with that they once did, so these companies need to figure out cost-cutting moves. It is unfortunate that the result is not only job cuts but reliance and embracement of technology that might otherwise replace some of the creative human labor force.”
With this in mind, Hollywood's unions are unsurprisingly deeply invested in how AI use in the industry develops. The technology was key to the writers' and actors' strikes in 2023, when the guilds won protections against the nonconsensual cloning of actors and against AI-written scripts. Duncan Crabtree-Ireland, executive director of the Screen Actors Guild–American Federation of Television and Radio Artists, tells Fast Company that since then, the industry has largely taken a more "cautious" approach to the technology. He notes that "people think they have a right to their own image, but they don't." He should know: he himself was the subject of a deepfake video during the strikes. The union is advocating for the No Fakes Act in Congress, which would protect individuals from the misuse of their likeness (some argue the act could actually do more harm than good). As efforts like Asteria and Runway have emerged, he adds, "we want to make sure to talk to every single company that wants to use this technology."
Meanwhile, various courts are regularly issuing (often contradictory) decisions, including a recent ruling in California in favor of AI companies, which found that training models on copyrighted material does not violate fair use. As the technology itself, and the discourse and legislation around it, rapidly develop, Crabtree-Ireland recognizes that the union represents "vast swaths of performers" with different attitudes: many want to find ways to use AI as a tool, while others (he estimates 10 to 15%) would prefer to prohibit it completely. From the union's perspective, though, "what we're able to do best is focus on the core principles of informed consent and intended use," so performers not only give their consent but are also told exactly how their likeness or image will be used by the technology.
The bigger question is whether audiences will accept what these companies intend to do with AI—and the way these backlashes have played out is not going unnoticed. Crabtree-Ireland says performers “have been protected by the uncanny valley for longer than we expected,” as even the best models today still look rather unnatural. Nevertheless, “it’d be irresponsible to keep counting on that, or to assume that audiences won’t [start to] respond to it.”
“We’re focused on making the right push, contractually and legally,” he adds. “We want to be able to channel how this technology is going to be used.”
Katz points out that AI is also a boon to the creator economy, and will almost certainly help to further close the gap between professional and amateur productions, as YouTubers like MrBeast have already proved. "Can they approximate 50 to 75% of what a Warner Bros. can do for a fraction of the cost?" Katz asks of how online creators will make use of AI. "What kind of bite does that take out of Hollywood?"
The only thing that is certain as workers of all kinds struggle to carve out their place in the future of entertainment is how little anyone seems to know.