Before generative AI, if you wanted an inexpensive way to build out lots of content, you launched a wiki. You’d spin up a site—broad or niche—and throw the doors open for anyone to edit. Be an early mover (like Wikipedia) or cultivate a loyal community (think the fan wiki for any popular sci-fi or fantasy show), and before long, you’d have a vast trove of strategically useful pages.
The catch with wikis is that when you hand the reins to the crowd, keeping quality consistent becomes a serious challenge. In Wikipedia’s case, that’s meant taking stewardship to heart, relying on a small army of editors—mostly volunteers—to manage millions of community-driven pages.
Given how many people lean on Wikipedia daily, those editors wield a remarkable amount of influence. Recently, they reminded everyone just how much, vigorously pushing back on an internal experiment, first reported by 404 Media, to add machine-generated summaries at the top of some articles. The backlash was swift enough that Wikipedia pulled the plug on the pilot just a day after it launched. It’s a textbook example of how not to roll out AI to a devoted and discerning editorial team.
The revolt against the machines
What’s fascinating about this particular stumble is that the AI didn’t actually get anything egregiously wrong. Take a cursory look at the AI summary about dopamine, and it appears to break down the topic quite well, and certainly less densely than the page’s introduction. There are no outright hallucinations, like a made-up scientific paper or a recommendation to add dopamine to your pizza.
No, what triggered the reversal wasn’t faulty output but an outright revolt from within. Wikipedia’s editorial process runs on a kind of radical openness: Reasons for edits and objections are usually aired in public. A peek at the discussion page for the test reveals a massive backlash—a pile-on that would make even a Twitter outrage mob blush.
Sure, the scale of the reaction might feel over the top, but the instinct behind it is easy to grasp. For people whose livelihoods revolve around words, AI’s creeping into their turf feels like an existential threat. That goes double for Wikipedia’s editors, who are notorious for debating even single syllables (just look at this rhetorical battle over the usage of “aluminum” vs. “aluminium”).
That said, it’s not as if the editors’ complaints were purely histrionic. Some pointed out that the dopamine summary included phrasing that doesn’t align with Wikipedia style, using pronouns like “we” when the site broadly adheres to a more arm’s-length, objective voice. And a few words in the summary, like “emotion,” appear to have been inferred by the AI rather than drawn strictly from the article’s facts.
Those concerns are all worth addressing, but remember: This was a test. Wikipedia appears to have been deliberate about the AI technology it used, choosing an open-source Cohere model to maintain a high level of customization and control. It would likely have been straightforward to take the editors’ feedback, use it to iterate on the prompting and tuning, and then produce better summaries in Wikipedia style.
That obviously didn’t happen. Wikipedia’s editors reacted swiftly and harshly, and it’s fair to say the conversation was not constructive. Rather than trying to improve a product that readers were initially responding well to—for the short duration of the test, 75% of readers who clicked on the summary found it useful—the vast majority of editors seemed hell-bent on halting the project entirely. (A typical comment: “There is no amount of process or bureaucracy that can make this bad idea good.”)
Lessons for the media
Versions of this same drama are playing out across media as executives hunt for AI strategies that boost the bottom line without crushing newsroom morale. In a move with strong echoes of the Wikipedia debacle, Politico‘s union recently took legal action against the company for introducing unvetted AI-generated summaries based on the newsroom’s reporting. The whole industry is tense now that AI summary tools are starting to nibble away at search traffic, and layoffs—like the recent cuts at Business Insider—have journalist unions drawing battle lines to shield jobs from automation.
Yet, AI can be an invaluable asset for reporters, too. Investigations at The Associated Press, The Wall Street Journal, and other outlets have been able to tackle massive datasets with AI’s help. These tools can parse dense legal filings in record time, spark ideas as a brainstorming partner, or plow through endless pitches to spotlight the ones worth your attention.
For editors and product leads hoping to fold AI into their newsrooms, there are lessons to be gleaned from Wikipedia’s misstep. The main one: Don’t force an AI rollout from the top down. Sure, this was a test, but it was not that contained—the pages targeted for summaries weren’t confined to any clear test area.
The newsrooms getting this right—Reuters, The New York Times, The Washington Post—deploy AI thoughtfully and deliberately: team by team, sometimes even user by user, doing the hard work of winning people over before introducing new experiences. Of course, user-facing content isn’t the same as internal tools, but managers need to remember that journalists feel deeply invested in how their work appears. Rolling out a tool that changes what that looks like can’t be as simple as: “This is what we’re doing now.”
In journalism, how you introduce artificial intelligence is just as critical as what the AI does. Even the best system will spark resistance if it’s sprung without trust, transparency, and genuine respect for the craft. AI can be a powerful ally for newsrooms—if it’s brought in with care, buy-in, and a clear sense of partnership.