How AI is steering the media toward a ‘close enough’ standard

The nonstop cavalcade of announcements in the AI world has created a kind of reality distortion field. There is so much buzz, and even more money, circulating in the industry that it feels almost sacrilegious to doubt that AI will make good on its promises to change the world. Deep research can do 1% of all knowledge work! Soon the internet will be designed for agents! Infinite Ghibli!

And then you remember AI screws things up. All. The. Time.

Hallucinations, instances in which a large language model essentially invents information out of whole cloth, have been an issue for generative AI since its inception. And they are doggedly persistent: Despite advances in model size and sophistication, serious errors still occur, even in so-called advanced reasoning or thinking models. Hallucinations appear to be inherent to generative technology, a by-product of AI’s seemingly magical ability to create new content out of thin air. They’re a feature and a bug at once.

In journalism, accuracy isn’t optional, and that’s exactly where AI stumbles. Just ask Bloomberg, which has already hit turbulence with its AI-generated summaries. The outlet began publishing AI-generated bullet points for some news stories in January of this year, and it has had to correct more than 30 of them since, according to The New York Times.

The intern that just doesn’t get it

AI is occasionally described as an incredibly productive intern, since it knows pretty much everything and has a superhuman ability to create content. But if you had to issue 30-plus corrections for an intern’s work in three months, you’d probably tell that intern to start looking at a different career path.

Bloomberg is hardly the first publication to run headfirst into hallucinations. But the fact that the problem is still happening, more than two years after ChatGPT debuted, pinpoints a central tension in applying AI to media: To create novel audience experiences at scale, you need to let generative technology create content on the fly. But because AI often gets things wrong, you also need humans in the loop to check its output. At scale, you can’t do both.

The typical approach thus far has been to slap a disclaimer onto the content. The Washington Post’s Ask the Post AI is a good example: It warns users that the feature is an “experiment” and encourages them to “Please verify by consulting the provided articles.” Many other publications carry similar disclaimers.

It’s a strange world where a media company introduces a new feature with a label that effectively says, “You can’t rely on this.” Providing accurate information isn’t a secondary feature of journalism; it’s the whole point. This contradiction is one of the oddest manifestations of AI’s application in media.

Moving to a “close enough” world

How did this happen? Arguably, media companies were forced into it. When ChatGPT and other large language models first began summarizing content, we were so blown away by their mastery of language that we paid little attention to the fine print: “ChatGPT can make mistakes. Check important info.” And for most users, it turns out, that was good enough. Even though generative AI often gets facts wrong, chatbots have seen explosive user growth. “Close enough” appears to be the standard the world is settling on.

It’s not a standard anyone sought out, but the media is slowly adopting it as more publications launch generative experiences with similar disclaimers. There’s certainly an “if you can’t beat ’em, join ’em” aspect to this: As more people turn to AI search engines and chatbots for information, media companies feel pressure either to sign licensing deals to have their content included or to match those AI experiences with chatbots of their own. Accuracy? There’s a disclaimer for that.

One notable holdout, however, is the BBC. So far, the BBC hasn’t signed any deals with AI companies, and it’s been a leader in pointing out the inaccuracies that AI portals create, publishing its own research on the topic earlier this year. It was also the BBC that ultimately convinced Apple to dial back its shoddy notification summaries on the iPhone, which were garbling news to the point of making up entirely false narratives.

In a world where taking licensing money looks increasingly fashionable for media companies, the BBC is charting a more proactive course. Somewhere along the way, whether out of financial self-interest or by falling into Big Tech’s reality distortion field, many media companies bought into the idea that hallucinations were either not that big a problem or something that would inevitably be solved. After all, “Today is the worst this technology will ever be.”

Think of the pollution from coal plants: an ugly side effect, but one that doesn’t stop the business from thriving. That’s how hallucinations function in AI: an obvious flaw, occasionally harmful, yet tolerated because the growth and money keep coming.

But those false outputs are deadly to an industry whose primary product is accurate information. Journalists should not sit back and expect Silicon Valley to solve hallucinations on its own, and the BBC is showing there’s a path to being part of the solution without either evangelizing the technology or ignoring the problem. After all, “Check important info” is supposed to be the media’s job.

https://www.fastcompany.com/91310978/ai-steers-the-media-toward-a-close-enough-standard
