Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.
In filmmaking circles, AI is an ever-present topic of conversation. While AI will change filmmaking economics and could greenlight more experimental projects by reducing production costs, it also threatens jobs, intellectual property, and creative integrity—potentially cheapening the art form.
Google, having developed cutting-edge AI tools spanning script development to text-to-video generation, is positioned as a key player in AI-assisted filmmaking. At the center of Google’s cinema ambitions is Mira Lane, the company’s vice president of tech and society and its point person on Hollywood studio partnerships. I spoke with Lane about Google’s role as a creative partner to the film industry, current Hollywood collaborations, and how artists are embracing tools like Google’s generative video editing suite Flow for preproduction, previsualization, and prototyping. This interview has been edited for length and clarity.
Can you tell me about the team you’re running and your approach to AI in film?
I run a team called the Envisioning Studio. It sits within this group called Technology and Society. The whole ambition around the team is to showcase possibilities. . . . We take the latest technologies, latest models, latest products and we co-create with society because there’s an ethos here that if you’re going to disrupt society, you need to co-create with them, collaborate with them, and have them have a real say in the shape of the way that technology unfolds. I think too often a lot of technology companies will make something in isolation and then toss it over the fence, and then various parts of society are the recipients of it and they’re reacting to it. I think we saw that with language models that came out three years ago or so where things just kind of went into the industry and into society and people struggled with engaging with them in a meaningful way.
My team is very multidisciplinary. There are philosophers on the team, researchers, developers, product thinkers, designers, and strategists. What we’ve been doing with the creative industry, mostly film this year—last year we worked on music as well—is we’ve been doing fairly large collaborations. We bring filmmakers in, we show them what’s possible, we make things with them, we embed with them sometimes, we hear their feedback. Then they get to shape things like Flow and Veo that have been launched. I think that we’re learning a tremendous amount in that space because anything in the creative and art space right now has a lot of tension, and we want to be active collaborators there.
Have you been able to engage directly with the writers’ and actors’ unions?
We kind of work through the filmmakers on some of those. Darren Aronofsky, when we brought him in, actually engaged with the writers’ unions and the actors’ unions to talk about how he was going to approach filmmaking with Google—the number of staff and actors and the way they were going to have those folks embedded in the teams, the types of projects that the AI tools would be focused on. We do that through the filmmakers, and we think it’s important to do it actually in partnership with the filmmakers because it’s in context of what we’re doing versus in some abstract way. That’s a very important relationship to nurture.
Tell me about one of the films you’ve helped create.
Four weeks ago at Tribeca we launched a short film called Ancestra, created in partnership with Darren’s production company, Primordial Soup. It’s a hybrid type of model where there were live-action shots and AI shots. It’s a story about a mother and a baby who’s about to be born and the baby has a hole in its heart. It’s a short about the universe coming together to help birth that baby and to make sure that it survives. It was based on a true story of the director being born with a hole in her heart.
There are some scenes that are just really hard to shoot, and babies—you can’t have infants younger than 6 months on set. So how do you show an accurate depiction of a baby? We took photos from when she was born and constructed an AI version of that baby, and then generated it being held within the arms of a live actress as well. When you watch that film, you’ll see these things where it’s an AI-generated baby. You can’t tell that it’s AI-generated, but the scene is actually composed of half of it being live action, the other half being AI-generated.
We had 150 people, maybe close to 200 working on that short film—the same number of people you would typically have working on a [feature-length] film. We saw some shifts in roles and new types of roles being created. There may even be an AI unit that’s part of these films. There’s usually a CGI unit, and we think there’s probably going to be an AI unit that’s created as well.
It sounds like you’re trying to play a responsible role in how this impacts creators. What are the fruits of that approach?
We want to listen and learn. It’s very rare for a technology company to develop the right thing from the very beginning. We want to co-create these tools, because if they’re co-created they’re useful and they’re additive and they’re an extension and augmentation, especially in the creative space. We don’t want people to have to contort around the technology. We want the technology to be situated relative to what they need and what people are trying to do.
There’s a huge aspect of advancing the science, advancing the latest and greatest model development, advancing tooling. We learn a lot from engaging with . . . filmmakers. For example, we launched Flow [a generative video editing suite] and as we were launching it and developing it, a lot of the feedback from our filmmakers was, “Hey, this tool is really helpful, but we work in teams.” So how can you extend this to be a team-based tool instead of a tool that’s for a single individual? We get a lot of really great feedback in terms of just core research and development, and then it becomes something that’s actually useful.
That’s what we want to do. We want something that is helpful and useful and additive. We’re having the conversations around roles and jobs at the same time.
How is this technology empowering filmmakers to tell stories they couldn’t before?
In the film industry, they’re struggling right now to get really innovative films out because a lot of the production studios want things that are guaranteed hits, and so you’re starting to see certain patterns of movies coming out. But filmmakers want to tell richer stories. With the one that we launched at Tribeca, the director was like, “I would never have been able to tell this story. No one would have funded it and it would have been incredibly hard to do. But now with these tools I can get that story out there.” We’re seeing a lot of that—people generating and developing things that they would not have been funded for in the past, but now that gets great storytelling out the door as well. It’s incredibly empowering.
These tools are incredibly powerful because they reduce the costs of some of the things that are really hard to do. Certain scenes are very expensive. You want to do a car chase, for example—that’s a really expensive scene. We’ve seen some people take these tools and create pitches that they can then take to a studio and say, “Hey, would you fund this? Here’s my concept.” They’re really good at the previsualization stage, and they can kind of get you in the door. Whereas in the past, maybe you brought storyboards in or it was more expensive to create that pitch, now you can do that pretty quickly.
Are we at the point where you can write a prompt and generate an entire film?
I don’t think the technology is there where you can write a prompt and generate an entire film and have it land in the right way. There is so much involved in filmmaking that is beyond writing a prompt. There’s character development and the right cinematography. . . . There’s a lot of nuance in filmmaking. We’re pretty far from that. If somebody’s selling that I think I would be really skeptical.
What I would say is you can generate segments of that film that are really helpful and [AI] is great for certain things. For short films it’s really good. For feature films, there’s still a lot of work in the process. I don’t think we’re in the stage where you’re going to automate out the artist in any way. Nobody wants that necessarily. Filmmaking and storytelling is actually pretty complex. You need good taste as well; there’s an art to storytelling that you can’t really automate.
Is there a disconnect between what Silicon Valley thinks is possible and what Hollywood actually wants?
I think everybody thinks the technology is further along than it is. There’s a perception that the technology is much more capable. I think that’s where some of the fear is actually, because they’re imagining what this can do because of the stories that have been told about these technologies. We just put it in the hands of people and they see the contours of it and the edges and what it’s good and bad at, and then they’re a little less worried. They’re like, “Oh, I understand this now.”
That said, I look at where the technology was two years ago for film and where it is now. The improvements have been remarkable. Two years ago every [generated] film had six fingers and everything was morphed and really not there—there was no photorealism. You couldn’t do live-action shots. And in two years we’ve made incredible progress. I think in another two years, we’re going to have another big step change. We have to recognize we’re not as advanced as we think we are, but also that the technology is moving really fast. These partnerships are important because if we’re going to have this sort of accelerated technology development, we need these parts of our society that are affected to be deeply involved and actively shaping it so that the thing we have in two years is what is actually useful and valuable in that industry.
What kinds of scenes or elements are becoming easier to create with AI?
Anything that is complex but that you tend to see a lot of starts to get easier, because we have a lot of training data around it. You’ve seen lots of movies with car chases in them, for example. There are scenes of the universe—we’ve got amazing photography from the Hubble telescope. We’ve got great microscopic photography. All of those types of things that are complicated and hard to do in real life, those you can generate a lot easier because we have lots of examples of those and it’s been done in the past.
The ones that are hard are ones where you want really strong eye contact between characters, and where the characters are showing a more complex range of emotions.
How would you describe where we’re at with the uptake of these tools in the industry?
I think that we’re in a state where there’s a lot of experimentation. It’s kind of that stage where there’s something new that’s been developed and what you tend to do when there’s something new is you tend to try to re-create the past—what you used to do with [older] tools. We’re in that stage where I think people are trying to use these new tools to re-create the same kinds of stories that they used to tell, but the real gem is when you jump past that and you do new types of things and new types of stories.
I’ll give you one example. Brian Eno did a set of generative films; every time you went to the theater you saw a different version of that film. It was generated, it was different, it was unique. It still had the same backbone but it was a different story every time you saw it. That’s a new type of storytelling. I think we’re going to see more types of things like that. But first we have to get through this phase of experimentation and understanding the tools, and then we’ll get to all the new things we can do with it.
More AI coverage from Fast Company:
- Google is indexing ChatGPT conversations, potentially exposing sensitive user data
- How Cloudflare declared war on AI scrapers
- The Vogue AI model backlash isn’t dying down anytime soon
- This AI startup lets you ask data questions in plain English—and gets you answers in seconds
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.