Nvidia on Monday showed a new artificial intelligence model for generating music and audio that can modify voices and generate novel sounds — technology aimed at the producers of music, films and video games.
Nvidia, the world’s biggest supplier of chips and software used to create AI systems, said it does not have immediate plans to publicly release the technology, which it calls Fugatto, short for Foundational Generative Audio Transformer Opus 1.
It joins other technologies shown by startups such as Runway and larger players such as Meta Platforms that can generate audio or video from a text prompt.
Santa Clara, California-based Nvidia’s version generates sound effects and music from a text description, including novel sounds such as making a trumpet bark like a dog.
What makes it different from other AI technologies is its ability to take in and modify existing audio, for example by taking a line played on a piano and transforming it into a line sung by a human voice, or by taking a spoken word recording and changing the accent used and the mood expressed.
“If we think about synthetic audio over the past 50 years, music sounds different now because of computers, because of synthesizers,” said Bryan Catanzaro, vice president of applied deep learning research at Nvidia. “I think that generative AI is going to bring new capabilities to music, to video games and to ordinary folks that want to create things.”
While companies such as OpenAI are negotiating with Hollywood studios over whether and how AI could be used in the entertainment industry, the relationship between tech and Hollywood has become tense, particularly after Hollywood star Scarlett Johansson accused OpenAI of imitating her voice.
Nvidia’s new model was trained on open-source data, and the company said it is still debating whether and how to release it publicly.
“Any generative technology always carries some risks, because people might use that to generate things that we would prefer they don’t,” Catanzaro said. “We need to be careful about that, which is why we don’t have immediate plans to release this.”
Creators of generative AI models have yet to determine how to prevent abuse of the technology, such as users generating misinformation or infringing copyrights by producing copyrighted characters.
OpenAI and Meta similarly have not said when they plan to publicly release their models that generate audio or video.
—Stephen Nellis, Reuters