Backlash is growing against GPT-4, and one AI ethics group wants the FTC to step in

An ethics group devoted to artificial intelligence has declared GPT-4 to be “a risk to public safety,” and is urging the U.S. government to investigate its maker OpenAI for endangering consumers.

The Center for AI and Digital Policy (CAIDP) filed its complaint on Thursday with the Federal Trade Commission (FTC), on the heels of an open letter earlier this week calling more generally for a moratorium on all generative AI. Some 1,200 researchers, tech executives, and others in the field signed that letter—including Apple cofounder Steve Wozniak and (somewhat more head-scratchingly) OpenAI cofounder Elon Musk. It argued for at least a six-month pause on development, to give humans a chance to step back and do a cost-benefit analysis of a technology that is developing at breakneck pace and enjoying runaway success.

Marc Rotenberg, president of CAIDP, was among the letter’s signers. Now his own group has piled on, making the case that the FTC should take a hard look at OpenAI’s GPT-4—a product risky enough that OpenAI itself has acknowledged its potential for abuse in categories such as “disinformation,” “proliferation of conventional and unconventional weapons,” and “cybersecurity.”

“The Federal Trade Commission has declared that the use of AI should be ‘transparent, explainable, fair, and empirically sound while fostering accountability,’” the complaint says. “OpenAI’s product GPT-4 satisfies none of those requirements,” it adds, before essentially calling the government to arms: “It is time for the FTC to act.”

GPT-4’s alleged risks in CAIDP’s complaint include the potential to produce malicious code, reinforce everything from racial stereotypes to gender discrimination, and expose users’ ChatGPT histories (which has happened once already) and even payment details. It argues that OpenAI has violated the FTC Act’s unfair and deceptive trade practices rules, and that the FTC should also look into GPT-4’s so-called hallucinations—when it falsely and often repeatedly insists a made-up fact is real—because they amount to “deceptive commercial statements and advertising.” CAIDP argues OpenAI released GPT-4 for commercial use “with full knowledge of these risks,” which is why a regulatory response is needed.

To resolve these issues, CAIDP asks the FTC to ban additional commercial deployment of the GPT model, and demand an independent assessment. It also wants the government to create a public reporting tool like the one consumers can use to file fraud complaints.

GPT-4 has attracted a near-messianic following in certain tech circles—a fervor that has likely sharpened critics’ urge to sound the alarm as generative AI spreads through the culture. But OpenAI’s own conduct has given those critics ammunition. Its models aren’t open source, so they’re a black box, some complain. Others note that the company is copying tech’s worst impulses where its operations are visible, such as using Kenyan laborers paid less than $2 per hour to make ChatGPT less toxic, or seemingly hiding behind a “research lab” halo to ward off calls for greater scrutiny.

OpenAI seems to have understood these stakes, and even predicted this day would come. CEO Sam Altman has for a while been addressing broader fears of AI essentially being let off the leash, admitting that while “current generation AI tools aren’t very scary,” we’re “not that far away from potentially scary ones.” He has also acknowledged that “regulation will be critical.”

Meanwhile, Mira Murati, who as CTO leads the strategy behind how to test OpenAI’s tools in public, told Fast Company when asked about GPT-4 right before its launch: “I think less hype would be good.”

https://www.fastcompany.com/90873896/gpt4-generative-ai-ethics-backlash-ftc-center-for-ai-digital-policy?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 2y | Mar 30, 2023, 22:20:58

