Is the letter calling for a pause on AI an impossible ask?

An open letter signed by Elon Musk, Steve Wozniak, Andrew Yang, and many others asks that companies like OpenAI (which Musk cofounded) stop releasing new AI models until the risks can be better understood and better managed. But the AI genie’s already well out of the bottle and expanding—and there may be no pausing that.

The concerns raised by the letter, in general, are valid. The core of the signatories’ argument is captured in this line: “[R]ecent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”

Throughout 2023, an AI arms race has ensued as large, well-monied technology companies rush to release new generative AI models and applications such as chatbots and image-creation tools. All the while, the outputs of these models are often highly creative, unpredictable, and impossible to fully explain, even for the people who created them (e.g., “Where in its training data did the chatbot get the idea for this response?”). With so much of their complicated inner workings hidden within an opaque black box, it’s difficult to detect when and how the models, which are trained on text and images created by humans, are injecting harmful learned biases into their output.

The signatories of the “AI pause” letter also fear that the models will create floods of misinformation, take away good jobs, and eventually “outnumber, outsmart, obsolete, and replace us.”

The open letter calls for a moratorium of at least six months on development of AI systems “more powerful than GPT-4.” If such a pause cannot be enacted quickly, the signatories write, governments should step in and institute one.

But, again, the genie’s out. OpenAI, for example, is already making its powerful GPT-4 model available to developers via an application programming interface (API), so they can embed it within their apps. It has also begun allowing developers to add plugins to its ChatGPT chatbot that call on both the developer’s proprietary knowledge (a travel company’s database of flights, for example) and OpenAI’s large language model (LLM) to generate specialized answers for users. And it’s within those public-facing apps and plugins that the danger may lie.
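To make the mechanics concrete, embedding GPT-4 in an app typically amounts to a single authenticated API call. Below is a minimal Python sketch using OpenAI’s chat completions endpoint; the travel-assistant framing is a hypothetical illustration echoing the article’s example, not any company’s actual integration.

# Minimal sketch: embedding GPT-4 in an app via OpenAI's API.
# Assumes the `openai` Python package (v1+) is installed and that
# OPENAI_API_KEY is set in the environment. The prompt is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a travel assistant."},
        {"role": "user", "content": "Suggest a weekend itinerary for Lisbon."},
    ],
)

# The model's reply, ready to surface inside the app's own interface.
print(response.choices[0].message.content)

A plugin works similarly, except the app first queries its own data (flight inventory, say) and passes the results to the model as context.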

“The moratorium does nothing to address this—that’s what makes this not just counterproductive but just plain puzzling,” wrote Princeton computer science professor Arvind Narayanan on Twitter Wednesday.

This open letter — ironically but unsurprisingly — further fuels AI hype and makes it harder to tackle real, already occurring AI harms. I suspect that it will benefit the companies that it is supposed to regulate, and not society. Let’s break it down. https://t.co/akQozgMCya

— Arvind Narayanan (@random_walker) March 29, 2023

More problematically, much of the research and development work happening in generative AI has already been made available through open-source models, which may further accelerate the spread of the technology without addressing safety concerns.

A moratorium also assumes that the models of the future will keep getting better at their core task (content generation) while getting no better at controlling the biases baked into their design and training data. Ex-Google CEO Eric Schmidt said during a recent Fast Company interview that releasing the models to the public and opening them to the scrutiny of the scientific community helps improve them and reduce their harms. That’s why Schmidt thinks companies like OpenAI and Microsoft acted ethically in releasing the Bing Chat chatbot, despite the fact that it performed in unpredictable ways.

“This is how progress is made in society, you know; you put these things out and people play with them,” Schmidt said. “And if you look at when Microsoft did their version of Bing, they didn’t test it enough, and after a few days they had to restrict the number of sequential queries to five.” OpenAI also has been busy erecting safeguards around ChatGPT since its launch.

What Schmidt worries about, he said, are AI models that are released outside the public’s view, and released without the kind of restrictions Microsoft was quick to apply.

A number of experts said on Twitter Wednesday (the day the letter came out) that, in the end, the real beneficiaries of the letter and, perhaps, its moratorium, might be the well-monied AI companies it seeks to regulate.

“The year is 1440 and the Catholic Church has called for a 6 months moratorium on the use of the printing press and the movable type,” AI pioneer Yann LeCun, who now leads AI research at Meta, tweeted sarcastically on Thursday. “Imagine what could happen if commoners get access to books! They could read the Bible for themselves and society would be destroyed.”


The biggest problem with the letter, perhaps, is that it relies on the U.S. government to define and enforce a moratorium. Members of Congress have heard the media chatter about ChatGPT and perhaps have played with the chatbot. But for most (not all), that is the limit of their knowledge of new generative AI technology. Asking Congress, which has struggled to even pass a broad privacy law for social media users, to jump into a hands-on regulatory role within a strange, new technology space may be wishful thinking.

https://www.fastcompany.com/90874040/is-the-letter-calling-for-a-pause-on-ai-an-impossible-ask?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss
