How AI could sabotage the next generation of journalists

A question I often get when I train editorial teams on the use of AI is, “Is using AI cheating?”

Although it’s phrased as a yes-or-no question, it doesn’t have a yes-or-no answer. The short answer is “sometimes,” but the key to figuring out the long answer is using the tools with an open mind. If you’re a professional in a field like journalism, you’ll generally be able to tell when AI is speeding up drudgery and when your judgment and expertise are most needed.

However, the recent viral story in New York magazine about how colleges and universities are struggling with rampant, unauthorized AI use by students got me thinking about what’s happening much earlier in the pipeline. After all, the college students who use AI to cheat on essays and admissions interviews eventually join the workforce. How will entry-level reporters, editors, and interns regard the use of AI, and how can newsrooms guide them so they develop the critical skills good journalists need?

Prioritize people, not just output

This highlights an area of AI policymaking that often gets short shrift. Newsroom AI policies are rightly concerned first and foremost with the integrity of the information a publication puts out and with transparency toward audiences. What AI might be doing to the skill-building of junior staffers is a tertiary concern, at best. Left unchecked, however, this problem has the potential to be existential: How do you produce competent senior staff when junior staffers are either replaced by AI or—as the New York piece suggests—replacing themselves with it?

You start with first principles. Most AI policies begin with some kind of affirmation that humans remain at the center of what journalism is about. That lens needs to turn inward in a real way, with a commitment to balance innovation and efficiency gains with professional development. In a newsroom, a healthy AI policy also ensures that staff in entry-level or junior roles have opportunities to build core journalistic competencies.

The policy should be clear to those workers even before they walk in the door. These days a lot of interviews happen over video conference, and many newsrooms that aren’t explicitly local have gone fully remote over the last few years. The fact is, if a candidate is on the other end of a video interview, hiring managers should be assuming they have some kind of AI helping them, even if there aren’t telltale signs like delayed answers and rote wording.

And there are still ways to adapt the hiring process to this reality. Where possible, newsrooms should incorporate in-person interviews and testing. For remote workers, real-time teamwork exercises will reveal a lot more than “take home” ones like memos and writing tests.

Why junior staff need their “reps”

A good AI policy spells out exactly which tasks are allowed to be partially or totally done by AI, while still leaving room to experiment in noncritical areas. (The New York Times’s policy is a good example.) In selecting those tasks, however, efficiency and productivity shouldn’t be the only factors. How that mix of tasks changes between junior and senior staff should be taken into account.

A good way to think about this: Just because AI can do a task doesn’t mean it should do that task in every instance. Yes, an AI tool can now competently turn a three-hour school board meeting into a news story, but reporting and writing “rote” stories like this is a fundamental part of learning the ropes of journalism: taking good notes, finding the story in a sea of information, checking facts, and getting the right quotes. Newsrooms need to ensure this kind of foundational exercise, essentially “getting your reps in,” is still a priority for reporters just starting out.

This approach runs the risk of emphasizing newsroom hierarchy and increasing frustration among junior staffers who know that AI could speed up their work. That’s why it’s important to have a clear path out. For instance, new hires might need to complete training modules that emphasize foundational journalistic skills before they gain broader access to AI tools. That would send the message that using AI is a privilege—one earned through demonstrating competence.

How to future-proof journalism

So, using AI might be cheating in some cases and not cheating in others—even for the same task. That might be confusing, but it also might be a sign of a thoughtful AI policy that doesn’t see increased output as the be-all and end-all of success.

Because in the end, an AI policy isn’t just a rule book that allows or forbids offloading certain tasks to robots in the name of efficiency. It should be a map for how a newsroom preserves the integrity of its journalism and the trust of its audience as it navigates one of the most impactful technological changes in history. If you try to sail into the future without thinking about the long-term health of your staff, you risk arriving at the destination with a crew of nothing but robots.


https://www.fastcompany.com/91335432/ai-could-sabotage-next-generation-journalists
