Discord is implementing a more flexible approach to content moderation

Discord doesn’t want to count strikes when users run afoul of its rules.

As part of a slew of fall product updates, the online chat platform announced a more flexible approach to moderation. Instead of handing out strikes for every policy violation, Discord will tailor its warnings or punishments to fit the crime, while providing steps users can take to improve their standing.

“We think we’ve built the most nuanced, comprehensive, and proportionate warning system of any platform,” Savannah Badalich, Discord’s senior director of policy, told reporters.

Alongside the new warning system, Discord is also launching new safety features for teens: It will auto-blur potentially sensitive images from teens’ friends by default, and it will show a “safety check” when teens are messaging with someone new, asking if they want to proceed and linking to additional safety tips.

In both cases, Discord wants to show that it’s taking safety seriously after years of controversy and criticism. A report by NBC News in June documented how child predators used the platform to groom and abduct teens, while other reports have found pockets of extremism thriving there.

Discord likes to point out that more than 15% of its employees work on trust and safety. As the company expands beyond its roots in gaming, it hopes to build a system that’s more effective at moderating itself.

The no-strike system

Discord’s moderation rules have always been a bit tricky to pin down, perhaps by design.

While individual servers can set their own rules, Discord itself has not laid out a specific number of strikes or infractions that lead to suspension across its platform. Users in turn had no way to know where they stood, even if Discord was quietly keeping a tally of their infractions.

[Photo: Discord]

The new system tries to be more transparent while still stopping short of a distinct strike count. When users violate a rule, they’ll get a detailed pop-up describing what they did wrong along with any temporary restrictions that might apply. They can then head to Discord’s privacy and safety menu to see how the violation affects their account standing and what they can do to improve it.

Discord says it will have four levels of account status before users reach a platform-wide suspension: “All Good,” “Limited,” “Very Limited,” and “At Risk.” Serious offenses such as violent extremism and sexualizing children are still grounds for an immediate ban, but otherwise Discord isn’t assigning scores to each violation or setting a specific number of violations for each level.

It’s a different approach than what some of its peers are doing. Facebook, for instance, has a 10-strike system with increasing penalties at each level, while Microsoft recently launched an eight-strike system for Xbox users, with some violations counting for more than one strike.

Ben Shanken, Discord’s vice president of product, says the company will treat each type of violation differently, but ultimately it wants to leave more room for subjectivity.

“If your friend is just trying to report a message to troll you a little bit, we don’t want that to result in your account getting banned,” he says. “We’ve built from the ground up to try and be more bespoke about it.”

Early warnings for teens

As for Discord’s new teen safety features, the company says it will use image recognition algorithms to detect and blur potentially sensitive images from friends, and will block those images in DMs from strangers. Teens can then click the image to reveal its contents or head to Discord’s settings to disable the feature. While image blurring will be on by default for teens, adults will have an option to enable it as well.

Meanwhile, Discord will begin sending safety alerts to teens when they get messages from strangers. The alerts will ask the teen to confirm that they want to reply, and will include links with safety tips and instructions on how to block the user.

Discord says the new warnings are part of a broader initiative to make its platform safer for teens. In June, NBC News reported on dozens of kidnapping, grooming or sexual assault cases over the past six years in which communications allegedly happened on Discord. It also cited data from the National Center for Missing & Exploited Children showing that reports of child sexual abuse material on Discord increased by 474% from 2021 to 2022, with the group claiming slower average response times from Discord over that period.

Shanken says Discord started working on the new safety features about nine months ago, and that the company will build on those features over the coming year. The plan is to give teens more control over their communications, while also getting smarter at detecting potential safety issues and flagging them for users.

“We’d much rather have a teenager receive an alert and block a user than just send a report to us, and us having to go figure that out,” he says.

Like other big tech companies, Discord dreams of being able to use AI and automation to build self-moderating systems. But while other technology companies have been cutting their moderation teams, Shanken says that hasn’t been the case at Discord. He notes that the team working on safety is Discord’s second-largest technology group.

“It’s true that these parts of the business are pressured in tougher economic times, and that’s not been the case at Discord,” he says. “We’ve only continued to expand our investment over the past couple of years.”
