How AI is steering the media toward a ‘close enough’ standard

The nonstop cavalcade of announcements in the AI world has created a kind of reality distortion field. There is so much buzz, and even more money, circulating in the industry that it feels almost sacrilegious to doubt that AI will make good on its promises to change the world. Deep research can do 1% of all knowledge work! Soon the internet will be designed for agents! Infinite Ghibli!

And then you remember that AI screws things up. All. The. Time.

Hallucinations—when a large language model essentially spits out information created out of whole cloth—have been an issue for generative AI since its inception. And they are doggedly persistent: Despite advances in model size and sophistication, serious errors still occur, even in so-called advanced reasoning or thinking models. Hallucinations appear to be inherent to generative technology, a by-product of AI's seemingly magical quality of creating new content out of thin air. They're a feature and a bug at the same time.

In journalism, accuracy isn't optional—and that's exactly where AI stumbles. Just ask Bloomberg, which has hit turbulence with its AI-generated summaries. The outlet began publishing AI-generated bullet points for some news stories in January of this year, and it has already had to correct more than 30 of them, according to The New York Times.

The intern who just doesn't get it

AI is occasionally described as an incredibly productive intern, since it knows pretty much everything and has a superhuman ability to create content. But if you had to issue 30-plus corrections for an intern's work in three months, you'd probably tell that intern to start looking into a different career path.

Bloomberg is hardly the first publication to run headfirst into hallucinations. But the fact that the problem persists, more than two years after ChatGPT debuted, points to a primary tension in applying AI to media: To create novel audience experiences at scale, you need to let the generative technology create content on the fly. But because AI often gets things wrong, you also need "humans in the loop" to check its output. At scale, you can't do both.

The typical approach thus far is to slap a disclaimer onto the content. The Washington Post's Ask the Post AI is a good example, warning that the feature is an "experiment" and urging users to "Please verify by consulting the provided articles." Many other publications carry similar disclaimers.

It's a strange world where a media company introduces a new feature with a label that effectively says, "You can't rely on this." Providing accurate information isn't a secondary feature of journalism—it's the whole point. That contradiction is among the oddest manifestations of AI's application in media.

Moving to a “close enough” world

How did this happen? Arguably, media companies were forced into it. When ChatGPT and other large language models first began summarizing content, we were so blown away by their mastery of language that we paid little attention to the fine print: "ChatGPT can make mistakes. Check important info." And for most users, it turns out, that was good enough. Even though generative AI often gets facts wrong, chatbots have seen explosive user growth. "Close enough" appears to be the standard the world is settling on.

It’s not a standard anyone sought out, but the media is slowly adopting it as more publications launch generative experiences with similar disclaimers. There’s an “If you can’t beat ’em, join ’em” aspect to this, certainly: As more people turn to AI search engines and chatbots for information, media companies feel pressure to either sign licensing deals to have their content included, or match those AI experiences with their own chatbots. Accuracy? There’s a disclaimer for that. 

One notable holdout, however, is the BBC. So far, the broadcaster hasn't signed any deals with AI companies, and it has been a leader in pointing out the inaccuracies that AI portals create, publishing its own research on the topic earlier this year. It was also the BBC that ultimately convinced Apple to dial back its shoddy iPhone notification summaries, which were garbling news to the point of inventing entirely false narratives.

In a world where it's looking increasingly fashionable for media companies to take licensing money, the BBC is charting a more proactive course. Somewhere along the way—whether out of financial self-interest or because they fell into Big Tech's reality distortion field—many media companies bought into the idea that hallucinations were either not that big a problem or something that would inevitably be solved. After all, "Today is the worst this technology will ever be."

Think of the pollution from coal plants. It's an ugly side effect, but one that doesn't stop the business from thriving. That's how hallucinations function in AI: clearly flawed, occasionally harmful, yet tolerated—because the growth and money keep coming.

But those false outputs are deadly to an industry whose primary product is accurate information. Journalists should not sit back and expect Silicon Valley to simply solve hallucinations on its own, and the BBC is showing there’s a path to being part of the solution without evangelizing or ignoring the problem. After all, “Check important info” is supposed to be the media’s job.

https://www.fastcompany.com/91310978/ai-steers-the-media-toward-a-close-enough-standard
