Security analysts may balk at Microsoft’s latest ‘copilot.’ Here’s why.

Microsoft first brought generative AI to bear in search, then in its productivity apps, and now it is bringing the technology to its security practice with Security Copilot.

The new offering follows Microsoft’s general strategy of bringing an AI natural language assistant to its main user interfaces. But security may be a dangerous place to deploy AI technology that “hallucinates.”

Security Copilot is powered by OpenAI’s GPT-4 large language model and by Microsoft’s own security-focused model, which encodes the company’s proprietary knowledge about security threats. Microsoft says the security model ingests 65 trillion signals from the threat environment daily. The Security Copilot service runs within Microsoft’s Azure cloud.

A security pro might encounter a suspicious-looking signal within the company’s systems, then call on the assistant for help in analyzing it and communicating a potential threat. They can quickly pull up support materials, URLs, or code snippets about past exploits and ongoing vulnerabilities and feed them to the assistant, or request information about incidents and alerts from other security tools. Any new information or analysis the assistant generates is stored for future investigations.
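
Security Copilot’s interface isn’t public, but the interaction pattern described here, where an analyst packages a suspicious signal with supporting context and asks a model to analyze it, can be sketched using OpenAI’s public Python client as a stand-in. The log line, prior-incident note, prompt wording, and model choice below are illustrative assumptions, not Microsoft’s actual API:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical context an analyst might gather: a raw log line plus a
    # note about a past incident. Both values are invented for illustration.
    suspicious_log = '203.0.113.7 - - "POST /admin/config.php" 200 (12,041 requests in 60s)'
    prior_incident = "Similar burst traffic preceded a credential-stuffing attack last quarter."

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a security analysis assistant. Flag uncertainty explicitly."},
            {"role": "user",
             "content": f"Analyze this signal and summarize the potential threat.\n"
                        f"Log: {suspicious_log}\nContext: {prior_incident}"},
        ],
    )
    print(response.choices[0].message.content)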

Microsoft says the security assistant can learn as it encounters more threat information, developing new skills. This, the company says, could help security analysts detect and respond to threats faster.

Microsoft acknowledges high up in its blog post that Security Copilot “doesn’t always get everything right” and can make mistakes. As you might expect, dropping an unpredictable generative AI technology into the exacting environment of a security team could be problematic. Generative AI models are notorious for “hallucinating,” generating fiction in the guise of fact. When a security analyst is responding to a perceived threat such as a DDoS or ransomware attack, every second counts, and they may not have time to sift through an AI-generated threat summary to check whether it contains fictional information, says Gartner distinguished VP analyst Avivah Litan.

“I was just on the phone with a major security operator and they said they’re going to push back on using these products until they can be assured that the models are generating accurate information,” she says.

Litan adds that security pros may now need a new class of tools to police the accuracy of the content generated by tools like Security Copilot.
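
Litan doesn’t name specific tools, but one simple form such policing could take is a post-hoc check that verifiable identifiers in a model’s output, such as CVE numbers, actually exist in a trusted catalog. This is a minimal sketch assuming a locally maintained set of known CVE IDs; the IDs shown are placeholders:

    import re

    # Trusted catalog of vulnerability IDs, e.g. synced from the NVD.
    # These entries are placeholders for illustration.
    KNOWN_CVES = {"CVE-2021-44228", "CVE-2023-23397"}

    def flag_unverified_cves(ai_summary: str) -> list[str]:
        """Return CVE IDs cited in the AI output that aren't in the catalog."""
        cited = set(re.findall(r"CVE-\d{4}-\d{4,7}", ai_summary))
        return sorted(cited - KNOWN_CVES)

    summary = "This matches CVE-2021-44228 (Log4Shell) and CVE-2099-99999."
    print(flag_unverified_cves(summary))  # ['CVE-2099-99999'] -> needs human review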

Microsoft says it built a feedback mechanism into Copilot’s user interface so users can rate the assistant’s responses, letting the company keep working to make the tool more coherent and useful. But security environments make poor sandboxes, and security teams may not have time to help Microsoft conduct R&D on its products. “Microsoft is just using the security domain to advance its plan to put generative AI into all its products,” Litan says.

Microsoft adds that a customer’s proprietary knowledge base of security threats and responses remains with the customer and is not used to train Microsoft’s AI models. The company says Copilot can also integrate with other Microsoft security products, and that in the future it will connect with third-party security tools, too.

As AI chatbots evolve, they will be given more access to the “ground truth” information held in proprietary company databases and AI models. Large language models will likely be used to wrap that data in an easily digestible natural-language layer, while deferring to the proprietary knowledge bases for factual information. As long as they’re allowed to hallucinate within serious business applications, their reliability and usefulness may be limited.
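
That deference pattern is essentially retrieval-augmented generation: the model answers only from documents pulled out of the proprietary knowledge base, rather than from its own parametric memory. Here is a minimal sketch, with a toy in-memory knowledge base and naive keyword retrieval standing in for a real vector store; the playbook text and function names are invented for illustration:

    from openai import OpenAI

    client = OpenAI()

    # Toy stand-in for a proprietary threat knowledge base.
    KNOWLEDGE_BASE = {
        "ransomware-playbook": "Isolate affected hosts, preserve memory images, rotate credentials.",
        "ddos-playbook": "Enable rate limiting, engage upstream scrubbing, notify the ISP.",
    }

    def retrieve(query: str) -> str:
        """Naive keyword retrieval; a real system would use a vector index."""
        hits = [text for key, text in KNOWLEDGE_BASE.items()
                if any(word in key for word in query.lower().split())]
        return "\n".join(hits) or "NO MATCHING DOCUMENTS"

    def grounded_answer(query: str) -> str:
        """Constrain the model to the retrieved documents, not its own memory."""
        context = retrieve(query)
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "Answer ONLY from the provided documents. "
                            "If they don't cover the question, say so."},
                {"role": "user", "content": f"Documents:\n{context}\n\nQuestion: {query}"},
            ],
        )
        return response.choices[0].message.content

    print(grounded_answer("What is our ransomware response?"))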

https://www.fastcompany.com/90872319/security-analysts-may-balk-at-microsofts-latest-copilot-heres-why?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss
