Backlash is growing against GPT-4, and one AI ethics group wants the FTC to step in

An ethics group devoted to artificial intelligence has declared GPT-4 to be “a risk to public safety,” and is urging the U.S. government to investigate its maker OpenAI for endangering consumers.

The Center for AI and Digital Policy (CAIDP) filed its complaint on Thursday with the Federal Trade Commission (FTC), on the heels of an open letter earlier this week calling more broadly for a moratorium on training AI systems more powerful than GPT-4. Some 1,200 researchers, tech executives, and others in the field signed that letter, including Apple cofounder Steve Wozniak and (somewhat more head-scratchingly) OpenAI cofounder Elon Musk. It argued for a pause of at least six months to give humans a chance to step back and weigh the costs and benefits of a technology that is developing at breakneck pace and enjoying runaway success.

Marc Rotenberg, president of CAIDP, was among the letter’s signers. Now his own group has piled on, making the case that the FTC should take a hard look at OpenAI’s GPT-4, a product whose liability is serious enough that OpenAI itself has recognized its potential for abuse in categories such as “disinformation,” “proliferation of conventional and unconventional weapons,” and “cybersecurity.”

“The Federal Trade Commission has declared that the use of AI should be ‘transparent, explainable, fair, and empirically sound while fostering accountability,’” the complaint says. “OpenAI’s product GPT-4 satisfies none of those requirements,” it adds, before essentially calling the government to arms: “It is time for the FTC to act.”

GPT-4’s alleged risks, per CAIDP’s complaint, include the potential to produce malicious code, reinforce biases ranging from racial stereotypes to gender discrimination, and expose users’ ChatGPT histories (which has already happened once) and even payment details. The complaint argues that OpenAI has violated the FTC Act’s prohibition on unfair and deceptive trade practices, and that the FTC should also look into GPT-4’s so-called hallucinations, in which the model falsely, and often repeatedly, insists a made-up fact is real, because they amount to “deceptive commercial statements and advertising.” CAIDP contends that OpenAI released GPT-4 for commercial use “with full knowledge of these risks,” which is why a regulatory response is needed.

To resolve these issues, CAIDP asks the FTC to halt further commercial deployment of GPT models and to require an independent assessment. It also wants the agency to create a public incident-reporting tool, akin to the one consumers can already use to file fraud complaints.

GPT-4 has attracted a near-messianic following in certain tech circles, a fervor that has probably amplified critics’ sense that they need to sound the alarm as generative AI spreads through the culture. But OpenAI’s own conduct has given critics ammunition, too. Its models aren’t open source, so they’re effectively a black box, some complain. Others note that, where its practices are visible, the company is copying some of tech’s worst impulses, such as relying on Kenyan laborers who earn less than $2 per hour to make ChatGPT less toxic, or seemingly hiding behind a “research lab” halo to ward off calls for greater scrutiny.

OpenAI seems to have understood these stakes, and even predicted this day would come. CEO Sam Altman has for a while been addressing broader fears of AI essentially being let off the leash, admitting that “current generation AI tools aren’t very scary” but that we’re “not that far away from potentially scary ones.” He has acknowledged that “regulation will be critical.”

Meanwhile, Mira Murati, the CTO who leads OpenAI’s strategy for testing its tools in public, told Fast Company when asked about GPT-4 shortly before its launch: “I think less hype would be good.”

https://www.fastcompany.com/90873896/gpt4-generative-ai-ethics-backlash-ftc-center-for-ai-digital-policy?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 2y | Mar 30, 2023, 22:20:58

