How calls for AI safety could wind up helping heavyweights like OpenAI

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

More AI researchers speak out against OpenAI’s safety practices

For months, I’ve been writing about concerns that OpenAI, under Sam Altman’s leadership, is far more excited about pushing out new AI products than doing the hard work of making them safe. Well, those safety practices are now a national news story. Last week, Jan Leike, who led the superalignment team at OpenAI, left the company for that reason. This week, a group of five ex-OpenAI researchers signed an open letter saying OpenAI and other AI companies aren’t serious enough about safeguarding large AI models . . . As AI models have become a red-hot investment opportunity, research labs have stopped openly sharing their safety research with the broader community. “AI companies . . . currently have only weak obligations to share some of this information with governments, and none with civil society,” the letter reads.

The letter demands that AI companies act with more transparency about potential harms from their AI models and about the safety work meant to mitigate those risks. It also calls on the companies to stop using broad confidentiality agreements to prevent whistleblowers from speaking out. (Several other current OpenAI employees signed the letter anonymously, fearing reprisals, as did one current and one former Google DeepMind researcher.)

“We’re proud of our track record providing the most capable and safest AI systems,” OpenAI spokesperson Lindsey Held said in a statement. Easy to say now, when AI tools are still far from having the intuition, reasoning ability, and agency to be truly dangerous. We’ve not yet seen an AI system act autonomously to, say, shut down the power grid or generate a recipe for a deadly bioweapon that can be made in somebody’s kitchen. 

Ultimately, companies such as OpenAI aren’t harmed by any of this hand-wringing over safety worries. In fact, they’re helped by it. This news cycle feeds the hype that AI models are on the cusp of achieving “artificial general intelligence,” which would mean models are generally better than human beings at thinking tasks (still aspirational today). And besides, if governments are moved to put tight regulations on AI development, it’ll only entrench the well-moneyed tech companies that have already built the biggest models.

Elon Musk’s AI ambitions remain mysterious, but urgent

CNBC reported this week that Elon Musk diverted a Tesla order of thousands of Nvidia H100 AI chips to his X social media company (formerly Twitter). Musk is CEO of both companies, a time-splitting arrangement that has rankled some Tesla investors. Musk pushed back on the report, saying that Tesla was simply not ready to deploy the expensive and highly sought-after Nvidia chips, whereas X was.

The whole ordeal points to the interconnected nature of Musk’s tech empire. Musk has been telling Tesla investors that he’s bulking up Tesla’s AI power with many more Nvidia GPU chips this year. Musk said in April that Tesla would increase its number of Nvidia’s flagship H100 chips from 35,000 to 85,000 by year-end. Tesla uses the Nvidia chips to develop and support the navigation systems in its cars, and for robotics research. 

And Musk has ordered at least 12,000 H100s for X, which uses the Nvidia chips to serve content and ads. Musk’s AI company, xAI, currently uses some of X’s data center capacity for its research, so it’s likely to get access to at least some of the new H100s, too. Grok, xAI’s current AI product, is powered by a text-only AI model, but the second generation of the model will likely be capable of processing images and sounds.

Musk’s ambitions may go much further. He’s said he wants xAI, which has managed to attract some top AI talent, to focus on some very weighty science problems, such as modeling dark matter, black holes, or complex ecological systems now suffering from climate change. That requires lots of capital and computing power. Musk recently raised a new $6 billion funding round for xAI, and reports say he has plans to build a massive supercomputer, or “Gigafactory of compute,” possibly in partnership with Oracle. 

2024 ROIs could mean boom or bust for generative AI

If 2023 was the year that generative AI left the lab and went to work in the real world, then 2024 is when many C-suite types will be called on by their boards to start showing that those AI tools actually can result in real productivity gains and cost savings. The answer will most likely be a mixed bag.

“Interestingly, the rise of GenAI seems to have shaken many executives’ assessments of their company’s overall AI achievements,” said Boston Consulting CEO Christoph Schweizer in the summary of a new survey of client companies. “Between 2022 and 2024, the proportion of executives reporting their companies had implemented AI with impact declined from 37% to 10%.”

Part of the problem is the technology itself. While large language models (LLMs), and now multimodal models, can do some impressive things, they still fail in basic ways. Hallucinations continue. Many companies are still coming to grips with the infrastructure work needed to ground AI models in reliable corporate data. And AI models fail at something that humans are good at: We can continually learn new things and learn to apply new knowledge when we need to, whereas generative AI models’ training data goes up to a certain date past which they have no knowledge of the world. 

More AI coverage from Fast Company: 

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.

https://www.fastcompany.com/91136562/openai-safety-practices-sam-altman?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 1y | Jun 6, 2024, 15:50:02

