What managers should know about the security threat of employees using ‘shadow AI’

ChatGPT became the poster child for generative AI earlier this year. From writing up business plans to explaining complex topics in layman’s terms, ChatGPT has been drafted in to help with just about anything and everything. And companies small and large have been scrambling to explore and reap the benefits of generative AI ever since.

But as this new chapter of AI innovation progresses at a dizzying pace, CEOs and leaders are at risk of overlooking a form of the technology that’s been slowly creeping in through the back door: Shadow AI.

Shadow AI is dangerously overlooked

Put simply, Shadow AI is the unsanctioned use of AI tools that staff bolt onto their work systems to make life easier, unbeknownst to management. This quest for efficiency is, in most cases, well intentioned, but it’s opening companies up to a new realm of cyber security and data privacy issues.

Staff typically embrace Shadow AI to improve efficiency and productivity, particularly when navigating monotonous tasks or laborious processes. That might mean asking AI to scan through hundreds of PowerPoint decks to find key information, or to synthesize the key points from meeting minutes.

As a rule, employees aren’t purposefully making your organization vulnerable. Quite the opposite: they’re simply streamlining tasks so they can tick more off their to-do list. But with over 1 million UK adults having already used generative AI at work, the danger is that more and more workers will turn to models their employer hasn’t authorized for safe use, compromising data security in the process.

Two major risks

The risk of Shadow AI is twofold.

First, employees may feed sensitive company information into such tools, or leave it open to being scraped while the technology runs in the background. An employee using ChatGPT or Google Bard to streamline their work or clarify information, for example, could be inputting sensitive or confidential company data in the process. Sharing data is not always an issue in itself – companies routinely entrust their information to third-party tools and service providers – but problems can occur when the tool in question and its data-handling policies haven’t been assessed and approved by the business.

When that’s the case, there’s no guarantee where company information will end up once it has been fed into an ‘insecure’ AI tool. Often, it will be used to train the model and help shape answers for other users, and it could even be exposed in a cyber attack or leak. In March, for example, OpenAI confirmed that a bug in the chatbot’s source code had caused a data leak, with users’ chat histories potentially exposed.

The second risk of Shadow AI is that, because companies are typically unaware these tools are being used, they’re unable to gauge the dangers or take steps to mitigate them. (Those dangers include employees sourcing inaccurate information and then using it in their work.) By definition, this is something that happens in the shadows – out of business leaders’ sight. According to Gartner research, 41% of employees acquired, modified, or created technology outside of IT’s visibility in 2022, and that number is expected to climb to 75% by 2027.

Shadow AI presents data and cyber security risks

And therein lies the crux of the problem: how can organizations monitor and assess the risks of something they don’t know about?

Some organizations have gone as far as banning the tools outright. Tech giant Samsung blocked ChatGPT from its offices after employees uploaded proprietary source code and leaked confidential company information through the public platform, and companies like Apple and JP Morgan have also limited employee use of ChatGPT. Others are burying their heads in the sand, or failing to spot the existence of the issue entirely.

What, then, should business leaders do to combat the risks of Shadow AI while ensuring that they and their teams can still benefit from the efficiencies and insights that artificial intelligence can offer?

First, leaders should educate teams on what safe AI practice looks like and on the risks that come with Shadow AI, and provide clear guidance on when ChatGPT can and can’t be used safely at work.

For cases that fall into the latter camp, companies should consider offering staff private, in-house generative AI tools instead. Llama 2 and Falcon are both examples of models that can be downloaded and run securely to power generative AI tools. Azure OpenAI offers a halfway house, where data remains within the company’s own Microsoft ‘tenancy’. These options avoid the data and IP risks that come with public large language models like ChatGPT – whose handling of user data still isn’t fully transparent – while enabling employees to reap the benefits of generative AI.
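To make that concrete, here is a minimal sketch of what routing staff prompts through a company-approved, in-tenancy deployment might look like, using the Azure client from OpenAI’s Python SDK. The endpoint, deployment name, and environment variable below are hypothetical placeholders, not a prescription:

import os
from openai import AzureOpenAI

# Point the client at the company's own Azure OpenAI resource, so prompts
# and responses stay inside the organization's Microsoft tenancy.
# The endpoint and deployment name here are hypothetical placeholders.
client = AzureOpenAI(
    azure_endpoint="https://your-company.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],  # keep keys out of source code
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="company-gpt-4",  # name of the private deployment, not a public model
    messages=[
        {"role": "system", "content": "You are an internal company assistant."},
        {"role": "user", "content": "Summarize the key points from these meeting minutes: ..."},
    ],
)
print(response.choices[0].message.content)

Because the endpoint and deployment belong to the company’s own Azure resource, usage is governed by the organization’s agreement with Microsoft rather than the terms of a public consumer chatbot.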

Leaders must take control of the AI agenda in their organizations—and they must do so before staff do it for them. This way, business leaders can leverage generative AI in a way which alleviates pain points for employees, improves productivity and performance and, crucially, puts data protection above all else.


Steve Salvin is the founder and CEO of Aiimi.

