The Scarlett Johansson deception is part of a pattern for OpenAI and Sam Altman

OpenAI CEO Sam Altman asked the actress Scarlett Johansson—who famously voiced an AI assistant in the 2013 film Her—to do the voice for ChatGPT. She said no. So OpenAI concocted a voice that sounds a lot like that of the actress, and used it without telling her. Now the actress has lawyered up and OpenAI has egg on its face. (It’s since removed the Her voice from its chatbot.) 

Altman’s treatment of Johansson is more than an isolated “self-own” or PR flub. Seen in the context of some other milestones in Altman’s leadership, it looks like part of a larger pattern. 

Only about six months ago, OpenAI’s board of directors fired Altman because he was “not consistently candid in his communications” with them. (Altman was soon reinstated at the insistence of OpenAI’s investors and employees.) A source who knows him tells me Altman often “says one thing and does another.”

Altman has talked about the importance of safety research, but he’s been accused of hurrying new AI products to market without spending enough time making sure they’re safe. The latest voicing of that accusation came from Jan Leike, who recently left the company (along with cofounder Ilya Sutskever) after the “super-alignment” safety group he led was disbanded. 

“[O]ver the past years, safety culture and processes have taken a backseat to shiny products,” Leike wrote on X (formerly Twitter). “I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Just last year, when OpenAI announced the super-alignment team, Altman said the company would commit 20% of its computing power to its alignment work. But, as Fortune reports, citing half a dozen sources with knowledge, that never happened.

OpenAI was the first to figure out that by dramatically scaling up model sizes, training data, and computing power, AI models could start demonstrating uncanny skill. To get enough training data, the company vacuumed up vast amounts of content from the web—without permission from, or compensation for, the publishers of that content. OpenAI says the practice is covered under “fair use” in copyright law. But now that its method of harvesting training data is better known, it regularly pays sites for their data—most recently Reddit—and is being sued by the New York Times for feeding its models verbatim content from the news site.

With the success of its “supersizing” approach, Altman and company began closing off access to their research, which the company once shared openly with the AI community. The investors who began pouring money into the startup insisted that the research be treated like valuable intellectual property and locked away.

Sutskever and Leike may have been among the last standard-bearers for the old OpenAI and its stated intent to “build artificial general intelligence that is safe and benefits all of humanity.” Since the leadership imbroglio last November, Altman, his allies, and OpenAI’s venture capital investors very likely now set the company’s agenda.

Investors may admire Altman, who is, after all, an investor himself. They may see his “better to ask forgiveness than permission” approach to Johansson’s voice, and publishers’ content, as examples of acting unilaterally to get something done. They may see the CEO’s job as putting a pleasing public face on a business that sometimes involves some less-than-savory practices in the background. 

Should we worry that the company that’s ushering in “super-intelligence as a service” doesn’t seem entirely honest or ethical? Can we trust this company to make chatbots that are honest? Can we be sure its products can’t be used to create bioweapons or subvert our elections?

https://www.fastcompany.com/91129670/scarlett-johansson-deception-openai-sam-altman?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 1y | May 22, 2024, 15:40:03

