Discord is implementing a more flexible approach to content moderation

Discord doesn’t want to count strikes when users run afoul of its rules.

As part of a slew of fall product updates, the online chat platform announced a more flexible approach to moderation. Instead of handing out strikes for every policy violation, Discord will tailor its warnings or punishments to fit the crime, while providing steps users can take to improve their standing.

“We think we’ve built the most nuanced, comprehensive, and proportionate warning system of any platform,” Savannah Badalich, Discord’s senior director of policy, told reporters.

Alongside the new warning system, Discord is also launching new safety features for teens: It will auto-blur potentially sensitive images from teens’ friends by default, and it will show a “safety check” when teens are messaging someone new, asking if they want to proceed and linking to additional safety tips.

In both cases, Discord wants to show that it’s taking safety seriously after years of controversy and criticism. A report by NBC News in June documented how child predators used the platform to groom and abduct teens, while other reports have found pockets of extremism thriving there.

Discord likes to point out that more than 15% of its employees work on trust and safety. As the company expands beyond its roots in gaming, it hopes to build a system that’s more effective at moderating itself.

The no-strike system

Discord’s moderation rules have always been a bit tricky to pin down, perhaps by design.

While individual servers can set their own rules, Discord itself has not laid out a specific number of strikes or infractions that leads to suspension across its platform. Users, in turn, have had no way to know where they stood, even if Discord was quietly keeping a tally of their infractions.

[Photo: Discord]

The new system tries to be more transparent while still stopping short of a distinct strike count. When users violate a rule, they’ll get a detailed pop-up describing what they did wrong along with any temporary restrictions that might apply. They can then head to Discord’s privacy and safety menu to see how the violation affects their account standing and what they can do to improve it.

Discord says there will be four levels of account standing before users reach a platform-wide suspension: “All Good,” “Limited,” “Very Limited,” and “At Risk.” Serious offenses such as violent extremism and sexualizing children are still grounds for an immediate ban, but otherwise Discord isn’t assigning scores to each violation or setting a specific number of violations for each level.
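For readers who think in code, a rough sketch of how a tiered standing model like this differs from a simple strike counter might look like the Python below. The class names, severity labels, and mapping rules are assumptions for illustration only, not Discord’s actual implementation.

```python
# Hypothetical sketch of a tiered account-standing model: violations map to
# a standing level and a restriction, rather than incrementing a strike
# counter. Severity labels and mapping rules are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Standing(Enum):
    ALL_GOOD = "All Good"
    LIMITED = "Limited"
    VERY_LIMITED = "Very Limited"
    AT_RISK = "At Risk"
    SUSPENDED = "Suspended"


@dataclass
class Violation:
    policy: str                        # e.g. "harassment"
    severity: str                      # "low", "medium", "high", or "critical"
    restriction: Optional[str] = None  # e.g. "DMs disabled for 7 days"


@dataclass
class Account:
    violations: list = field(default_factory=list)

    def record(self, violation: Violation) -> Standing:
        """Log a violation and return the account's new standing."""
        self.violations.append(violation)
        return self.standing()

    def standing(self) -> Standing:
        # The most serious offenses (e.g. violent extremism, sexualizing
        # children) skip the tiers and lead straight to suspension.
        if any(v.severity == "critical" for v in self.violations):
            return Standing.SUSPENDED
        # Otherwise standing reflects the worst violation on record rather
        # than a running strike total (an assumed heuristic for this sketch).
        if any(v.severity == "high" for v in self.violations):
            return Standing.AT_RISK
        if any(v.severity == "medium" for v in self.violations):
            return Standing.VERY_LIMITED
        if self.violations:
            return Standing.LIMITED
        return Standing.ALL_GOOD
```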

It’s a different approach than what some of its peers are doing. Facebook, for instance, has a 10-strike system with increasing penalties at each level, while Microsoft recently launched an eight-strike system for Xbox users, with some violations counting for more than one strike.

Ben Shanken, Discord’s vice president of product, says the company will treat each type of violation differently, but ultimately it wants to leave more room for subjectivity.

“If your friend is just trying to report a message to troll you a little bit, we don’t want that to result in your account getting banned,” he says. “We’ve built from the ground up to try and be more bespoke about it.”

Early warnings for teens

As for Discord’s new teen safety features, the company says it will use image recognition algorithms to detect and blur potentially sensitive images from friends, and will block those images in DMs from strangers. Teens can then click the image to reveal its contents or head to Discord’s settings to disable the feature. While image blurring will be on by default for teens, adults will have an option to enable it as well.
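As a rough illustration of those rules, the sketch below assumes a hypothetical upstream classifier has already flagged an image as sensitive and reduces the decision to a few fields; none of the names here come from Discord’s actual code.

```python
# Illustrative sketch of the image-handling rules described above. Function
# and field names are assumptions; the sensitivity judgment is treated as a
# boolean supplied by some upstream image-recognition step.
from dataclasses import dataclass


@dataclass
class Recipient:
    is_teen: bool
    blur_enabled: bool  # on by default for teens, opt-in for adults


def display_mode(recipient: Recipient, sender_is_friend: bool,
                 looks_sensitive: bool) -> str:
    """Decide how the client should present an incoming image."""
    if not looks_sensitive or not recipient.blur_enabled:
        return "show"
    if recipient.is_teen and not sender_is_friend:
        # Sensitive images in DMs from strangers are blocked outright for teens.
        return "block"
    # Otherwise the image is blurred, with a click-through to reveal it.
    return "blur_with_click_to_reveal"
```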

Meanwhile, Discord will begin sending safety alerts to teens when they get messages from strangers. The alerts will ask whether the teen is sure they want to reply, and will include links to safety tips and instructions on how to block the user.
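Under the same assumptions, the alert logic itself is simple to sketch; the prompt text and action names below are placeholders rather than Discord’s actual strings.

```python
# Illustrative sketch of the teen safety alert described above; prompt text
# and action names are placeholders, not Discord's actual implementation.
from typing import Optional


def safety_alert_for_dm(recipient_is_teen: bool,
                        sender_is_new_contact: bool) -> Optional[dict]:
    """Return an alert payload to show before the teen replies, or None."""
    if not (recipient_is_teen and sender_is_new_contact):
        return None
    return {
        "prompt": "You haven't messaged this person before. Do you want to reply?",
        "actions": ["reply", "block_user", "view_safety_tips"],
    }
```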

Discord says the new warnings are part of a broader initiative to make its platform safer for teens. In June, NBC News reported on dozens of kidnapping, grooming or sexual assault cases over the past six years in which communications allegedly happened on Discord. It also cited data from the National Center for Missing & Exploited Children showing that reports of child sexual abuse material on Discord increased by 474% from 2021 to 2022, with the group claiming slower average response times from Discord over that period.

Shanken says Discord started working on the new safety features about nine months ago, and that the company will build on those features over the coming year. The plan is to give teens more control over their communications, while also getting smarter at detecting potential safety issues and flagging them for users.

“We’d much rather have a teenager receive an alert and block a user than just send a report to us, and us having to go figure that out,” he says.

Like other big tech companies, Discord dreams of using AI and automation to build self-moderating systems. But while other tech companies have been cutting their moderation efforts, Shanken says that hasn’t been the case at Discord. He notes that the team working on safety is Discord’s second-largest technology group.

“It’s true that these parts of the business are pressured in tougher economic times, and that’s not been the case at Discord,” he says. “We’ve only continued to expand our investment over the past couple of years.”

https://www.fastcompany.com/90969656/discord-is-implementing-a-flexible-approach-to-content-moderation?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss
