Security analysts may balk at Microsoft’s latest ‘copilot.’ Here’s why.

Microsoft first brought generative AI to bear in search, then in its productivity apps, and now it is bringing the new technology to its security practice with Security Copilot.

The new offering follows Microsoft’s general strategy of bringing an AI natural language assistant to its main user interfaces. But security may be a dangerous place to deploy AI technology that “hallucinates.”

Security Copilot is powered by OpenAI’s GPT-4 large language model and Microsoft’s own security-focused model, which contains its proprietary knowledge about security threats. Microsoft says its security model ingests 65 trillion signals from the threat environment daily. The Security Copilot service runs within Microsoft’s Azure cloud.

A security pro might encounter a suspicious-looking signal within the company’s systems, then call on the assistant for help in analyzing it and communicating a potential threat. They can quickly call up support materials, URLs, or code snippets about past exploits and ongoing vulnerabilities and feed them to the assistant, or request information about incidents and alerts from other security tools. Any new information or analysis generated is stored for future investigations.
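The investigate-and-store loop described above can be sketched in a few lines. This is a hypothetical illustration, not Microsoft's implementation: the `Investigation` class and its methods are invented names showing how evidence an analyst feeds in might be accumulated into context for the assistant and retained for later.

```python
from dataclasses import dataclass, field

@dataclass
class Investigation:
    """Hypothetical sketch of the workflow the article describes:
    an analyst feeds evidence to the assistant, and the accumulated
    context is kept for future investigations."""
    evidence: list[str] = field(default_factory=list)

    def add_evidence(self, item: str) -> None:
        # Signals, URLs, code snippets, or alerts from other tools.
        self.evidence.append(item)

    def build_prompt(self, question: str) -> str:
        # Bundle everything gathered so far into one query context.
        context = "\n".join(f"- {e}" for e in self.evidence)
        return f"Evidence:\n{context}\n\nQuestion: {question}"
```

In a real deployment the prompt would go to the model and the response would be stored alongside the evidence; the point here is only that each query carries the full investigation history.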

Microsoft says the security assistant can learn as it encounters more threat information, developing new skills. This, the company says, might help a security analyst detect and respond to threats faster.

Microsoft says high up in its blog post that Security Copilot “doesn’t always get everything right” and that it can generate mistakes. As you might expect, dropping an unpredictable generative AI technology into the exacting environment of a security team could be problematic. Generative AI models are notorious for “hallucinating” and generating fiction in the guise of facts. When a security analyst is responding to a perceived threat such as a DDoS or ransomware attack, every second counts, and they might not have time to sift through an AI-generated threat summary to see if it contains fictional information, says Gartner distinguished VP analyst Avivah Litan.

“I was just on the phone with a major security operator and they said they’re going to push back on using these products until they can be assured that the models are generating accurate information,” she says.

Litan adds that security pros may now need a new class of tools to police the accuracy of the content generated by tools like Security Copilot.

Microsoft says it built into the Copilot’s user interface a way for users to give feedback on the assistant’s responses, so that the company can continue working to make the tool more coherent and useful. But security environments may make bad sandboxes, and security people may not have time to help Microsoft conduct R&D on its products. “Microsoft is just using the security domain to advance its plan to put generative AI into all its products,” Litan says.

Microsoft adds that the customer’s proprietary knowledge base of security threats and responses remains with the customer and is not used to train the Microsoft AI models. The company says Copilot is also able to integrate with other Microsoft security products, and that in the future, it will connect with third-party security products, too.

As AI chatbots evolve, they will be given more access to the “ground truth” information contained in proprietary company databases and AI models. The large language models will likely be used to wrap this kind of data in an easily digestible natural language wrapper, but the LLMs will always defer to the proprietary knowledge bases for factual information. As long as they’re allowed to hallucinate within serious business applications, their reliability and usefulness may be limited.
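The grounding pattern described here, where the model defers to a trusted knowledge base rather than generating facts on its own, can be sketched minimally. This is an assumption-laden toy: the `KNOWLEDGE_BASE` contents and the keyword retrieval are invented for illustration, and a production system would use vector search and hand the retrieved facts to an LLM purely for phrasing.

```python
# Toy knowledge base standing in for a company's proprietary threat data.
KNOWLEDGE_BASE = {
    "CVE-2023-0001": "Buffer overflow in ExampleApp 1.2; patched in 1.3.",
    "CVE-2023-0002": "Privilege escalation via misconfigured service ACLs.",
}

def retrieve_facts(query: str) -> list[str]:
    """Naive keyword retrieval; real systems use embedding search."""
    return [fact for cve, fact in KNOWLEDGE_BASE.items() if cve in query]

def answer(query: str) -> str:
    facts = retrieve_facts(query)
    if not facts:
        # Refuse rather than let the model hallucinate an answer.
        return "No verified information found for this query."
    # A production system would pass `facts` to an LLM for natural-language
    # phrasing; joining them directly keeps this sketch self-contained.
    return " ".join(facts)
```

The design point is the refusal branch: factual claims only ever come from the retrieved records, which is exactly the deference to proprietary knowledge bases the paragraph anticipates.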

https://www.fastcompany.com/90872319/security-analysts-may-balk-at-microsofts-latest-copilot-heres-why?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Mar 28, 2023

