Google is indexing ChatGPT conversations, potentially exposing sensitive user data

Google is indexing ChatGPT conversations that users have sent to friends, family, or colleagues—turning private exchanges intended for small groups into search results visible to millions.

A basic Google site search using part of the URL generated when someone proactively clicks “Share” on ChatGPT can uncover conversations in which people reveal deeply personal details: struggles with addiction, experiences of physical abuse, serious mental health issues, and in some cases even fears that AI models are spying on them. While ChatGPT doesn’t display users’ identities, some potentially identify themselves by sharing highly specific personal information during the chats.

A user might click “Share” to send their conversation to a close friend over WhatsApp, or simply to save the URL for future reference. They would be unlikely to expect that doing so could make the conversation appear in Google search results, accessible to anyone. It’s unclear whether those affected realize their chats with the bot are now publicly accessible; presumably they believed they were sharing with a small audience when they clicked the Share button.

Nearly 4,500 conversations come up in results for the Google site search, though many don’t include personal details or identifying information. This is likely not the full count, as Google may not index all conversations. (Because of the personal nature of the conversations, some of which divulge highly personal information including users’ names, locations, and personal circumstances, Fast Company is choosing not to link to, or describe in significant detail, the conversations with the chatbot.)

The finding is particularly concerning given that nearly half of Americans in a survey say they’ve used large language models for psychological support in the last year. Three-quarters of respondents sought help with anxiety, two in three looked for advice on personal issues, and nearly six in 10 wanted help with depression. But unlike conversations with a real-life therapist, transcripts of conversations with the likes of ChatGPT can turn up in a simple Google search.

Google indexes any content available on the open web, and site owners are able to remove pages from search results. ChatGPT’s shared links are not intended to appear in search by default and must be manually made discoverable by users, who are also warned not to share sensitive information and can delete shared links at any time. (Both Google and OpenAI declined Fast Company’s requests for comment.)

One user described in detail their sex life and unhappiness living in a foreign country, saying they were suffering from post-traumatic stress disorder (PTSD) and seeking support. They went into precise detail about their family history and their relationships with friends and family members.

Another conversation discusses the prevalence of psychopathic behaviors in children and the age at which they can first appear, while another user discloses that they are a survivor of psychological programming and hopes to deprogram themselves to mitigate the trauma.

“I’m just shocked,” says Carissa Véliz, an AI ethicist at the University of Oxford. “As a privacy scholar, I’m very aware that that data is not private, but of course, ‘not private’ can mean many things, and that Google is logging in these extremely sensitive conversations is just astonishing.”

Similar concerns have been raised about competing chatbots, including those run by Meta, which began sharing user queries with its AI systems in a public feed visible within its AI apps. Critics said at the time that user literacy was not high enough to recognize the dangers of sharing private information publicly—a warning that later proved correct as personal details surfaced on the social feed. Online safety experts highlighted the gap between how users believe app features work and how the companies running the apps actually make them work.

Véliz fears this is an indication of the approach big tech will take when it comes to privacy. “It’s also further confirmation that this company, OpenAI, is not trustworthy, that they don’t take privacy seriously, no matter what they say,” she says. “What matters is what they do.”

OpenAI CEO Sam Altman warned earlier this month that users shouldn’t share their most personal details with ChatGPT because OpenAI “could be required to produce that” if legally compelled to do so by a court. “I think that’s very screwed up,” he added. That conversation, with podcaster Theo Von, didn’t address users willingly opening up their conversations for indexing.

“People expect they can use tools like ChatGPT completely privately,” says Rachel Tobac, a cybersecurity analyst and CEO of SocialProof Security, “but the reality is that many users aren’t fully grasping that these platforms have features that could unintentionally leak their most private questions, stories, and fears.”

https://www.fastcompany.com/91376687/google-indexing-chatgpt-conversations?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 2d | Jul 30, 2025, 20:30:04

