Nobody’s talking about this hidden threat in generative AI

Companies today are grappling with a monumental challenge: the relentless accumulation of data. According to the latest estimates, 328.77 million terabytes of data are created each day. Around 120 zettabytes of data will be generated this year, an almost incomprehensible amount.

Every sector, from industry giants to small businesses, confronts the daunting task of managing this deluge of text, audio, and video content, to name a few formats.

Managing internal and external data helps companies glean market insights, drive innovation and, importantly, protect against business risk. For instance, it allows them to monitor brand conversations to stay ahead of negative sentiment, whether directly from customers or indirectly from partners. Concerns about brand safety and suitability are serious enough that marketers, media agencies, and their respective industry associations created the Global Alliance for Responsible Media (GARM) to tackle the issue.

Discovering and monitoring brand mentions for suitability has recently become one of the primary use cases for AI technology. With data creation accelerating at an unprecedented pace and showing no signs of slowing down, more and more companies are leaning on AI to detect brand suitability red flags and ultimately prevent reputational risk.

While companies in all industries face the challenge of managing this ever-increasing amount of data, the level of potential risk varies. It’s one thing to use technology to glean that customers are making fun of an advertising campaign, but quite another to pick up a conversation around product safety, or to discover that the host of the podcast you’ve partnered with to promote your brand is openly sharing ideas that go against company values.

Most companies want to avoid these problems for fear of losing customers they have spent so much time and money to acquire. But for highly regulated industries in the U.S. such as financial services, insurance, and pharmaceuticals, unsavory brand impressions can have even more devastating and long-lasting effects on a company's reputation and bottom line, and can lead to prolonged regulatory scrutiny.

That helps to explain why only a few months ago biopharmaceutical company Gilead Sciences and New York University Langone Hospital immediately took action to suspend advertising on X when the nonprofit Media Matters for America flagged that their ads were appearing next to content celebrating Hitler and the Nazi party. Financial services company RBS took similar action in 2017 when The Times found its ads also appeared next to extremist content.

In addition to these examples of inappropriate placement on high-profile social media networks and search sites, the vast and growing body of user-generated content makes it simply unfeasible for humans to manually review every piece of content an ad might appear alongside. AI is often touted as the hero that can quickly sift through oceans of information, identifying patterns, trends, and anomalies that might otherwise go unnoticed.

Within a programmatic buying environment in which transactions and placements happen in milliseconds, AI is required to ensure brands can manage reputational risk on a daily basis. There are dozens of companies that analyze a variety of content types (display, CTV, social, audio, and so on), and each is looking for a different set of features that may be relevant to advertisers. Companies including DoubleVerify, IAS, Barometer, Channel Factory, and Zefr specifically measure for the GARM Brand Safety and Suitability framework across content types so advertisers can successfully target content that meets their brand guidelines and ensure their standards are being upheld throughout the campaign. Here, AI is the main reason large brands can comfortably buy programmatically at scale in mature channels like display, and with growing confidence in emerging channels like social and digital audio.

However, just as AI has the potential to help companies prevent risk, generative AI, its most recent and arguably most transformative form, produces machine-generated data that actually contributes to brand suitability and reputational risk. Therein lies the paradox.

The secret sauce of generative AI is web scraping: the collection of data from unknown, decentralized internet sources. In its current state, the machine-generated content created by generative AI falls short of being dependable, verified data for those seeking a source of truth. It has almost no data quality control, posing immense risks when it comes to brand reputation.

One of many open lawsuits around generative AI even claims OpenAI's ChatGPT and DALL·E collect people's personal data from across the internet in violation of privacy laws. As data curveballs, like deepfakes—the manipulation of facial appearances through deep generative methods—pile up, it will be trickier and trickier to understand what natural language prompts an AI was fed, and therefore nearly impossible to get to the root of the reputational risk attached to it—a brand nightmare.

How will companies manage the paradox of AI as it relates to reputational risk? We don't yet have industry standards or regulation around the engineering of AI prompts, or web scraping for AI use, though such regulation is approaching.

Today, brands already leverage AI to ideate, draft sketches or summaries, and assist with other tasks. However, for the majority of use cases, an additional AI or manual pass should be taken afterward to ensure alignment with brand standards.
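That second pass can be partially automated. The sketch below illustrates the idea with a rule-based screen that flags generated copy against suitability categories before it is published; the categories and terms are illustrative placeholders, not the actual GARM taxonomy, and a production system would use a vetted taxonomy and likely an ML classifier rather than keyword matching.

```python
# Minimal sketch of an automated "second pass" that screens AI-generated
# copy against brand-suitability rules before publication. The blocklist
# below is a hypothetical stand-in for a real suitability taxonomy.

BLOCKLIST = {
    "violence": {"attack", "assault"},
    "hate_speech": {"slur", "supremacist"},
    "adult": {"explicit"},
}


def screen_copy(text: str) -> list[str]:
    """Return the suitability categories the text trips, if any."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return sorted(cat for cat, terms in BLOCKLIST.items() if words & terms)


def is_brand_safe(text: str) -> bool:
    """True when no suitability category is triggered."""
    return not screen_copy(text)
```

In practice, copy that trips any category would be routed to manual review rather than rejected outright, keeping a human in the loop for borderline cases.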

In the future, if generative AI environments are to become ad-supported, this will be predicated on the availability of brand safety and contextual targeting tools akin to those available in existing channels. In the meantime, the best thing for all parties to do is to start testing and getting familiar with new AI approaches for risk management.

As with any groundbreaking technology, AI doesn't just solve problems—it also creates them. Reputational risk takes many forms, and, while it is a concern for all companies, those operating in highly regulated, trust-based industries face even more serious consequences if they are unable to manage it.

Given AI's dual role as hero and villain when it comes to reputational risk, businesses should develop a brand management strategy that accounts for both. Doing so as soon as possible is key to keeping up with the explosion in data and to sustainable enterprise risk management.

Anna Garcia is the founder and general partner of Altari Ventures.

https://www.fastcompany.com/90983147/hidden-threat-generative-ai-brand-reputation

Published: November 17, 2023

