Nobody’s talking about this hidden threat in generative AI

Companies today are grappling with a monumental challenge: the relentless accumulation of data. According to the latest estimates, 328.77 million terabytes of data are created each day. Around 120 zettabytes of data will be generated this year, an almost incomprehensible amount.

Every sector, from industry giants to small businesses, confronts the daunting task of managing this deluge of text, audio, and video content, to name a few formats.

Managing internal and external data helps companies glean market insights, drive innovation and, importantly, protect against business risk. For instance, it allows them to monitor brand conversations to stay ahead of negative sentiment, whether directly from customers or indirectly from partners. Concerns about brand safety and suitability are serious enough that marketers, media agencies, and their respective industry associations created the Global Alliance for Responsible Media (GARM) to tackle the issue.

Discovering and monitoring brand mentions for suitability has recently become one of the primary use cases for AI technology. With data creation accelerating at an unprecedented pace and showing no signs of slowing down, more and more companies are leaning on AI to detect brand suitability red flags and ultimately prevent reputational risk.

While companies in all industries face the challenge of managing this ever-increasing amount of data, the level of potential risk varies. It’s one thing to use technology to glean that customers are making fun of an advertising campaign, but quite another to pick up a conversation around product safety, or to discover that the host of the podcast you’ve partnered with to promote your brand is openly sharing ideas that go against company values.

Most companies want to avoid these problems for fear of losing customers they have spent so much time and money to acquire. But for highly regulated industries in the U.S. such as financial services, insurance, and pharmaceuticals, unsavory brand impressions can have even more devastating and long-lasting effects on a company's reputation and bottom line, and can lead to prolonged regulatory scrutiny.

That helps to explain why only a few months ago biopharmaceutical company Gilead Sciences and New York University Langone Hospital immediately took action to suspend advertising on X when the nonprofit Media Matters for America flagged that their ads were appearing next to content celebrating Hitler and the Nazi party. Financial services company RBS took similar action in 2017 when The Times found its ads also appeared next to extremist content.

In addition to these examples of inappropriate placement on high-profile social media networks and search sites, the vast and growing volume of user-generated content makes it infeasible for humans to manually review every piece of content an ad may appear alongside. AI is often touted as the hero that can quickly sift through oceans of information, identifying patterns, trends, and anomalies that might otherwise go unnoticed.

Within a programmatic buying environment in which transactions and placements happen in milliseconds, AI is required to ensure brands can manage reputational risk on a daily basis. There are dozens of companies that analyze a variety of content types (display, CTV, social, audio, and so on), and each is looking for a different set of features that may be relevant to advertisers. Companies including DoubleVerify, IAS, Barometer, Channel Factory, and Zefr specifically measure for the GARM Brand Safety and Suitability framework across content types so advertisers can successfully target content that meets their brand guidelines and ensure their standards are being upheld throughout the campaign. Here, AI is the main reason large brands can comfortably buy programmatically at scale in mature channels like display, and with growing confidence in emerging channels like social and digital audio.
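To make the idea concrete, here is a minimal, purely illustrative sketch of a pre-bid suitability check. The category names, keyword lists, and matching logic are hypothetical stand-ins; real measurement vendors use trained classifiers and the full GARM taxonomy rather than keyword lookups.

```python
# Purely illustrative sketch of a pre-bid brand suitability filter.
# The categories, keywords, and blocklist below are hypothetical; real systems
# use ML classifiers scored against the GARM Brand Safety and Suitability framework.

from dataclasses import dataclass

# A few GARM-style risk categories, reduced to toy keyword lists.
RISK_KEYWORDS = {
    "hate_speech": {"nazi", "supremacist"},
    "adult_content": {"explicit"},
    "arms_and_ammunition": {"firearm", "ammunition"},
}


@dataclass
class BidOpportunity:
    url: str
    page_text: str


def suitability_flags(page_text: str) -> set[str]:
    """Return the set of risk categories triggered by the page text."""
    words = set(page_text.lower().split())
    return {category for category, keywords in RISK_KEYWORDS.items() if words & keywords}


def should_bid(opportunity: BidOpportunity, blocked_categories: set[str]) -> bool:
    """Skip the impression if the page triggers any category the brand blocks."""
    return not (suitability_flags(opportunity.page_text) & blocked_categories)


# Example: a brand that blocks hate speech and adult content.
brand_blocklist = {"hate_speech", "adult_content"}
opportunity = BidOpportunity(
    url="https://example.com/article",
    page_text="A review of new firearm laws",
)
print(should_bid(opportunity, brand_blocklist))  # True: only arms_and_ammunition triggers, and it isn't blocked
```

The design constraint that matters here is latency: because the bid window is measured in milliseconds, a check like this has to be fully automated, which is precisely why AI, rather than human review, sits in the decision path.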

However, just as AI has the potential to help companies prevent risk, generative AI, its most recent and arguably most transformative form, produces machine-generated content that itself contributes to brand suitability and reputational risk. Therein lies the paradox.

The secret sauce of generative AI is web scraping: the collection of data from unknown, decentralized internet sources. In its current state, the machine-generated content that generative AI produces falls short of being dependable, verified data for those seeking a source of truth. It is subject to almost no data quality control, posing immense risks to brand reputation.

One of many open lawsuits around generative AI even claims OpenAI's ChatGPT and DALL·E collect people's personal data from across the internet in violation of privacy laws. As data curveballs like deepfakes (the manipulation of facial appearances through deep generative methods) pile up, it will become trickier and trickier to understand what natural language prompts an AI was fed, and therefore nearly impossible to get to the root of the reputational risk attached to it: a brand nightmare.

How will companies manage the paradox of AI as it relates to reputational risk? We don't yet have industry standards or regulation around the engineering of AI prompts, or around web scraping for AI use, though such regulation is approaching.

Today, brands already leverage AI to ideate, draft sketches or summaries, and assist with other tasks. However, for the majority of use cases, an additional AI or manual review pass should follow to ensure the output aligns with brand standards.
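As a minimal sketch of that second pass, assuming a hypothetical list of banned phrases standing in for real brand guidelines, a simple gate like the one below could route risky drafts to human review. In practice the check might be another model call or a compliance approval queue rather than keyword matching.

```python
# Illustrative sketch of a "second pass" over generated copy before publication.
# The banned-phrase list and the check itself are hypothetical stand-ins for a
# brand's real standards and review process.

BANNED_PHRASES = ["guaranteed returns", "miracle cure"]  # hypothetical brand standards


def needs_human_review(draft: str) -> bool:
    """Flag generated copy that conflicts with the brand's standards."""
    lowered = draft.lower()
    return any(phrase in lowered for phrase in BANNED_PHRASES)


draft = "Our new fund offers guaranteed returns for every investor."
if needs_human_review(draft):
    print("Hold for review: draft violates brand standards")
else:
    print("Draft cleared the automated check")
```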

In the future, if generative AI environments are to become ad-supported, this will be predicated on the availability of brand safety and contextual targeting tools akin to those available in existing channels. In the meantime, the best thing for all parties to do is to start testing and getting familiar with new AI approaches for risk management.

As with any groundbreaking technology, AI doesn't just solve problems—it also creates them. Reputational risk takes many forms, and, while it is a concern for all companies, those operating in highly regulated, trust-based industries face even more serious consequences if they are unable to manage it.

Given AI's dual role as hero and villain when it comes to reputational risk, businesses should develop a brand management strategy that accounts for both. Doing so as soon as possible is key to keeping up with the explosion of data and to sustainable enterprise risk management.

Anna Garcia is the founder and general partner of Altari Ventures.

https://www.fastcompany.com/90983147/hidden-threat-generative-ai-brand-reputation?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Published: November 17, 2023
