Adobe’s latest AI experiment generates music from text

This week, Adobe revealed an experimental audio AI tool to join its image-based ones in Photoshop. Described by the company as “an early-stage generative AI music generation and editing tool,” Adobe’s Project Music GenAI Control can create music (and other audio) from text prompts, which users can then fine-tune in the same interface.

Adobe frames the Firefly-based technology as a creative ally that — unlike generative audio experiments like Google’s MusicLM — goes a step further and skips the hassle of moving the output to external apps like Pro Tools, Logic Pro or GarageBand for editing. “Instead of manually cutting existing music to make intros, outros, and background audio, Project Music GenAI Control could help users to create exactly the pieces they need—solving workflow pain points end-to-end,” Adobe wrote in an announcement blog post.

The company suggests starting with text inputs like “powerful rock,” “happy dance” or “sad jazz” as a foundation. From there, you can enter more prompts to adjust its tempo, structure and repetition, increase its intensity, extend its length, remix entire sections or create loops. The company says it can even transform audio based on a reference melody.

Adobe says the resulting music is safe for commercial use. It’s also integrating its Content Credentials (“nutrition labels” for generated content), an attempt to be transparent about your masterpiece’s AI-assisted nature.

“One of the exciting things about these new tools is that they aren’t just about generating audio—they’re taking it to the level of Photoshop by giving creatives the same kind of deep control to shape, tweak, and edit their audio. It’s a kind of pixel-level control for music,” Adobe Research scientist Nicholas Bryan wrote.

The project is a collaboration with the University of California, San Diego and Carnegie Mellon University’s School of Computer Science. Adobe’s announcement emphasized Project Music GenAI Control’s experimental nature. (The company didn’t reveal much of its interface in its demo video, suggesting it may not have a consumer-facing UI yet.) So you may have to wait a while before the feature (presumably) makes its way into Adobe’s Creative Cloud suite.

This article originally appeared on Engadget at https://www.engadget.com/adobes-latest-ai-experiment-generates-music-from-text-184019169.html
Published 1 March 2024, 20:50:30