Companies are struggling to keep private data safe from generative AI, Cisco says

Shortly after ChatGPT’s public launch, a slew of corporate giants, from Apple to Verizon, made headlines when they announced bans on the use of the technology at work. But a new survey finds that those companies were far from outliers.

According to a new report from Cisco, which polled 2,600 privacy and security professionals last summer, more than one in four companies have banned the use of generative AI tools at work at some point. Meanwhile, 63% of respondents said they’ve limited what data their employees can enter into those systems, and 61% said they’ve restricted which generative AI tools employees at their companies can use.

At the heart of these restrictions is the concern that employees may inadvertently leak private company data to a third party like OpenAI, which can then turn around and use that data to further train its AI models. In fact, 68% of respondents said they’re concerned about that kind of data sharing. OpenAI does offer companies a paid enterprise product, which promises to keep business data private. But the free, public-facing version of ChatGPT and other generative AI tools like Google Bard offer far fewer guardrails.

That, says Cisco chief legal officer Dev Stahlkopf, can leave companies’ internal information vulnerable. “With the influx of AI use cases, an enterprise has to consider the implications before opening a tool for employee use,” Stahlkopf says, noting that Cisco conducts AI impact assessments for all new third-party AI products. “Every company needs to make their own assessments of their risk and risk tolerance, and for some companies prohibiting use of these tools may make sense.”

Companies like Salesforce have attempted to turn this uncertainty into a market opportunity, rolling out products that promise to keep sensitive data from being stored by the system and to screen model responses for toxicity. And yet, it’s clear the popularity of off-the-shelf tools like ChatGPT is already causing headaches for corporate privacy professionals. Despite the restrictions the majority of companies have enacted, the survey found that 62% of respondents have entered information about internal processes into generative AI tools. Another 42% say they’ve entered non-public company information into these tools, and 38% say they’ve put customer information into them as well.

But it’s not just employees leaking private data that businesses are worried about. According to the survey, the biggest concern among security and privacy professionals when it comes to generative AI is that the AI companies are using public data to train their models in ways that infringe on their businesses’ intellectual property. (In addition, 58% see job displacement as a risk.)

Already, the IP issue is bubbling up in the courts. Last month, The New York Times sued OpenAI over allegations that the AI giant used the Times’ news articles to train the models that run its chatbot. OpenAI has said the suit is “without merit” and that training on those articles is, legally speaking, fair use. The suit joins a mounting number of cases, brought by the likes of comedian Sarah Silverman, that make similar infringement claims against companies including Meta and Stability AI.

The survey results suggest that, for the vast majority of companies, addressing these privacy risks — both to their own data and their clients’ data — is a top priority, and many seem to welcome legislation that would enshrine privacy protections into law. While the U.S. has yet to pass long-promised federal privacy legislation, Cisco’s global survey found that some 80% of respondents said privacy legislation in their region had actually helped their companies, despite the increased investment required.

“Organizations believe the return on privacy investment exceeds spending,” Stahlkopf says. “Organizations that treat privacy as a business imperative, and not just as a compliance exercise, will benefit in this era of AI.”

https://www.fastcompany.com/91016367/companies-are-struggling-to-keep-private-data-safe-from-generative-ai-cisco-says

Created Jan 25, 2024, 13:40:02

