Companies are struggling to keep private data safe from generative AI, Cisco says

Shortly after ChatGPT’s public launch, a slew of corporate giants, from Apple to Verizon, made headlines when they announced bans on the use of the technology at work. But a new survey finds that those companies were far from outliers.

According to a new report from Cisco, which polled 2,600 privacy and security professionals last summer, more than one in four companies have banned the use of generative AI tools at work at some point. Meanwhile, 63% of respondents said they’ve limited what data their employees can enter into those systems, and 61% have restricted which generative AI tools employees at their companies can use.

At the heart of these restrictions is the concern that employees may inadvertently leak private company data to a third party like OpenAI, which can then turn around and use that data to further train its AI models. In fact, 68% of respondents said they’re concerned about that kind of data sharing. OpenAI does offer companies access to a paid enterprise product, which promises to keep business data private. But the free, public-facing version of ChatGPT and other generative AI tools like Google Bard offer far fewer guardrails.

That, says Cisco chief legal officer Dev Stahlkopf, can leave companies’ internal information vulnerable. “With the influx of AI use cases, an enterprise has to consider the implications before opening a tool for employee use,” Stahlkopf says, noting that Cisco conducts AI impact assessments for all new AI products from third parties. “Every company needs to make their own assessments of their risk and risk tolerance, and for some companies prohibiting use of these tools may make sense.”

Companies like Salesforce have attempted to turn this uncertainty into a market opportunity, rolling out products that promise to keep sensitive data from being stored by the system and to screen model responses for toxicity. And yet, it’s clear the popularity of off-the-shelf tools like ChatGPT is already causing headaches for corporate privacy professionals. Despite the restrictions the majority of companies have enacted, the survey found that 62% of respondents have entered information about internal processes into generative AI tools. Another 42% say they’ve entered non-public company information into these tools, and 38% say they’ve put customer information into them, as well.

But it’s not just employees leaking private data that businesses are worried about. According to the survey, the biggest concern among security and privacy professionals when it comes to generative AI is that the AI companies are using public data to train their models in ways that infringe on their businesses’ intellectual property. (In addition, 58% see job displacement as a risk.)

Already, the IP issue is bubbling up in the courts. Last month, The New York Times sued OpenAI over allegations that the AI giant used the Times’ news articles to train the models that run its chatbot. OpenAI has said the suit is “without merit” and that training using those articles is fair use, legally speaking. The suit joins a mounting number of cases, brought by the likes of comedian Sarah Silverman and others, which make similar infringement claims against companies including Meta and Stability AI.

The survey results suggest that, for the vast majority of companies, addressing these privacy risks — both to their own data and their clients’ data — is a top priority, and many seem to welcome legislation that would enshrine privacy protections into law. While the U.S. has yet to pass long-promised federal privacy legislation, Cisco’s global survey found that some 80% of respondents said privacy legislation in their region had actually helped their companies, despite the increased investment required.

“Organizations believe the return on privacy investment exceeds spending,” Stahlkopf says. “Organizations that treat privacy as a business imperative, and not just as a compliance exercise, will benefit in this era of AI.”

https://www.fastcompany.com/91016367/companies-are-struggling-to-keep-private-data-safe-from-generative-ai-cisco-says
