OpenAI hit with another privacy complaint over ChatGPT’s love of making stuff up

OpenAI has been hit with a privacy complaint in Austria by the advocacy group NOYB, which stands for None Of Your Business. According to Reuters, the complaint alleges that the company’s ChatGPT bot repeatedly provided incorrect information about a real individual, who for privacy reasons is not named in the complaint. This may breach EU privacy rules.

The chatbot allegedly spat out incorrect birthdate information for the individual, instead of just saying it didn’t know the answer to the query. Like politicians, AI chatbots like to confidently make stuff up and hope we don’t notice. This phenomenon is called a hallucination. However, it’s one thing when these bots make up ingredients for a recipe and another thing entirely when they invent stuff about real people.

The complaint also indicates that OpenAI refused to help delete the false information, responding that it was technically impossible to make that kind of change. The company did offer to filter or block the data on certain prompts. OpenAI’s privacy policy says that if users notice the AI chatbot has generated “factually inaccurate information” about them, they can submit a “correction request,” but the company says it “may not be able to correct the inaccuracy in every instance,” as reported by TechCrunch.

This is bigger than just one complaint, as the chatbot’s tendency toward making stuff up could run afoul of the region’s General Data Protection Regulation (GDPR), which governs how personal data can be used and processed. EU residents have rights regarding their personal information, including the right to have false data corrected. Failure to comply with these regulations can bring serious financial penalties, up to four percent of global annual turnover in some cases. Regulators can also order changes to how information is processed.

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals,” Maartje de Graaf, NOYB data protection lawyer, said in a statement. “If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

The complaint also brought up concerns regarding transparency on the part of OpenAI, suggesting that the company doesn’t offer information regarding where the data it generates on individuals comes from or if this data is stored indefinitely. This is of particular importance when considering data pertaining to private individuals.

Again, this is a complaint by an advocacy group and EU regulators have yet to comment one way or the other. However, OpenAI has acknowledged in the past that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers.” NOYB has approached the Austrian Data Protection Authority and asked the organization to investigate the issue.

The company is facing a similar complaint in Poland, in which the local data protection authority began investigating ChatGPT after a researcher was unable to get OpenAI’s help with correcting false personal information. That complaint accuses OpenAI of several breaches of the EU’s GDPR, with regard to transparency, data access rights and privacy.

There’s also Italy. The Italian data protection authority conducted an investigation into ChatGPT and OpenAI, concluding that it believes the company has violated the GDPR in various ways, including through ChatGPT’s tendency to make up fake stuff about people. The chatbot was actually banned in Italy before OpenAI made certain changes to the software, like new warnings for users and the option to opt out of having chats used to train the algorithms. Despite no longer being banned, the Italian investigation into ChatGPT continues.

OpenAI hasn’t responded to this latest complaint, but did respond to the regulatory salvo issued by Italy’s DPA. “We want our AI to learn about the world, not about private individuals,” the company wrote. “We actively work to reduce personal data in training our systems like ChatGPT, which also rejects requests for private or sensitive information about people.”

This article originally appeared on Engadget at https://www.engadget.com/openai-hit-with-another-privacy-complaint-over-chatgpts-love-of-making-stuff-up-162250335.html?src=rss
Created 17d | Apr 29, 2024, 6:10:27 PM

