OpenAI has removed a controversial opt-in feature that had led to some private chats appearing in Google search results, following reporting by Fast Company that found sensitive conversations were becoming publicly accessible.
Earlier this week, Fast Company revealed that private ChatGPT conversations—some involving highly sensitive topics like drug use and sexual health—were unexpectedly showing up in Google search results. The issue appeared to stem from vague language in the app’s “Share” feature, which included an option that may have misled users into making their chats publicly searchable.
When users clicked “Share,” they were presented with an option to tick a box labeled “Make this chat discoverable.” Beneath that, in smaller, lighter text, was a caveat explaining that the chat could then appear in search engine results.
Within hours of the backlash spreading on social media, OpenAI pulled the feature and began working to scrub exposed conversations from search results.
“Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option,” said Dane Stuckey, OpenAI’s chief information security officer, in a post on X. “We’re also working to remove indexed content from the relevant search engines.”
We just removed a feature from @ChatGPTapp that allowed users to make their conversations discoverable by search engines, such as Google. This was a short-lived experiment to help people discover useful conversations. This feature required users to opt-in, first by picking a chat…
— DANΞ (@cryps1s) July 31, 2025
Stuckey’s comments mark a reversal from the company’s stance earlier this week, when it maintained that the feature’s labeling was sufficiently clear.
Rachel Tobac, a cybersecurity analyst and CEO of SocialProof Security, commended OpenAI for its prompt response once it became clear that users were unintentionally sharing sensitive content. “We know that companies will make mistakes sometimes; they may implement a feature on a website that users don’t understand, and impact their privacy or security,” she says. “It’s great to see swift and decisive action from the ChatGPT team here to shut that feature down and keep users’ privacy a top priority.”
In his post, OpenAI’s Stuckey characterized the feature as a “short-lived experiment.” But Carissa Véliz, an AI ethicist at the University of Oxford, says the implications of such experiments are troubling.
“Tech companies use the general population as guinea pigs,” she says. “They do something, they try it out on the population, and see if somebody complains.”