POV: Why pausing AI development is a bad idea

The gloves are coming off in the fight over the future of AI.

On Tuesday, the Future of Life Institute, a futurist nonprofit backed by the Musk Foundation, published an open letter calling for a six-month pause on training AI systems more powerful than OpenAI’s GPT-4.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” declares the letter, which has been signed by several thousand people, including Elon Musk himself, Apple cofounder Steve Wozniak, AI researchers Yoshua Bengio and Gary Marcus, historian Yuval Noah Harari, and Pinterest co-founder Evan Sharp. “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

While there’s no doubt that AI should be developed in a way that is safe, responsible and transparent, putting the most consequential technology of our age in a timeout is an unworkable solution that could weaken our country at a critical moment.

For starters, it would be an unprecedented move, coming just when AI is beginning to show incredible promise after decades of unfulfilled hype. It would also be nearly impossible to enforce and a gut punch to innovation—the engine of our economy.

While the letter has been signed by some notable AI experts, other AI researchers have criticized the approach, saying it overlooks the harms already posed by today’s AI and the more immediate need for transparency around training data and the decision-making of large language models. Computer scientist Andrew Ng, founder of Google Brain, called the moratorium “a terrible idea” on Twitter, arguing that the only realistic way to enforce it would be government intervention.

“I’m seeing many new applications in education, healthcare, food, … that’ll help many people. Improving GPT-4 will help,” he tweeted. “Let’s balance the huge value AI is creating vs. realistic risks. To advance AI safety, regulations around transparency and auditing would be more practical and make a bigger difference.”

Imagine asking Netscape and Microsoft to stop developing the web browser back in the mid-1990s. Would that have been the right move to address real concerns about online child pornography and indecent speech? Absolutely not. Those issues were more effectively addressed by industry, lawmakers, the courts, and regulators, and were ultimately resolved through a landmark Supreme Court decision, Reno v. ACLU, that enshrined the value of free speech on the Internet.

Second, the U.S. is engaged in a competition with China to lead the AI market. Thanks to recent innovations from U.S.-based OpenAI, from other U.S. multinationals like Microsoft, Google, and Meta, and from a bevy of startups, the U.S. may have retaken the lead in a race in which experts said China was ahead just a few years ago. And the pace of AI innovation is accelerating at a rate not seen since the boom of mobile computing. Consider that it took OpenAI just under four months to release GPT-4 after its groundbreaking release of ChatGPT.

If the U.S. and its leading corporations paused AI development for six months while China raced ahead, it would put our country at a disadvantage and create an opening for our primary global adversary. Imagine if China’s AI leapfrogged America’s during this pause, and the long-term harm that could do to democracy and geopolitical security.

Third, even if we somehow paused development, what could be accomplished during this “AI summer”? Probably not much, because the governance of AI is an incredibly complex topic that requires robust, inclusive discussion among multiple stakeholders to hammer out a new framework. It will take years to develop the foundation of this governance system, which will likely require changes in industry practices, regulatory policy, and global laws.

There’s no doubt that we need a more focused effort and investment to align on the design for an effective governance system for responsible, trusted and explainable AI. And the rapid evolution of these systems means we should make it a greater priority of society, much like climate change has become a global imperative.

Technology has always outpaced our ability to manage it. But history has shown that the wiser approach is to develop technology and its governance along parallel paths, without stifling progress.

It’s worth noting that the letter overlooks the fact that much of this work is already happening. Leading AI providers are taking safety and responsibility very seriously: developing risk-mitigation tools and best practices for responsible use, monitoring their platforms for misuse, and learning from human feedback. The rapid release of the last two iterations of GPT has already led to major improvements in value, safety, and responsibility.

Is it perfect? Of course not. Do these companies need more oversight? Of course they do. Innovation is messy. Mistakes are bound to happen, but they are also essential to the process of learning. As this debate swirls, Musk should remind himself of this quote he once shared: “If things are not failing, you are not innovating enough.”


Spencer Ante is the former head of insights at Meta, and author of Creative Capital: Georges Doriot and the Birth of Venture Capital.

https://www.fastcompany.com/90874897/pov-why-pausing-ai-development-is-a-bad-idea

Created 2y | 31. 3. 2023 18:21:09

