Substack is back in the news lately, though this time it’s not for looming money problems. It’s for worse problems: Nazi problems. At the end of November, The Atlantic published a piece by writer Jonathan Katz titled simply, “Substack has a Nazi problem.” His argument was essentially that the newsletter-publishing platform hosts, and therefore profits from, a larger number of white-supremacist newsletters than he, and for that matter most regular readers, probably ever expected.
At least 16 of the newsletters that I reviewed have overt Nazi symbols, including the swastika and the Sonnenrad, in their logos or in prominent graphics. Andkon’s Reich Press, for example, calls itself “a National Socialist newsletter”; its logo shows Nazi banners on Berlin’s Brandenburg Gate . . . A Substack called White-Papers, bearing the tagline “Your pro-White policy destination,” is one of several that openly promote the “Great Replacement” conspiracy theory . . . Other newsletters make prominent references to the “Jewish Question.”
Some of those newsletters have tens of thousands of subscribers, Katz wrote. And many accept paid subscriptions, meaning that Substack, which pockets 10% of writers’ subscription revenues, earns money when readers pay for Nazi newsletters.
Calling themselves “Substackers Against Nazis,” more than 100 of the platform’s publishers today released an open letter calling for Substack’s leaders to be direct about where, exactly, they stand on the question of Nazis.
“We’re asking a very simple question that has somehow been made complicated: Why are you platforming and monetizing Nazis?” the group writes. “We, your publishers, want to hear from you on the official Substack newsletter. Is platforming Nazis part of your vision of success? Let us know—from there we can each decide if this is still where we want to be.”
Since its inception six years ago, Substack has staked out an intentionally hands-off approach to content moderation, a process that is undeniably burdensome. That laxness attracted canceled journalists like Glenn Greenwald, Matt Taibbi, and Andrew Sullivan. But it has also been fodder for critics who argue that Substack is permitting more than simply ideas they disagree with: some of its content, they argue, crosses a line into being hateful, dangerous, possibly even illegal, and likely also in violation of Substack’s own terms of service.
Substack declined to respond to Substackers Against Nazis’ letter, but directed Fast Company to a separate open letter, written by journalist Elle Griffin, that, as of Thursday, had been signed by more than 50 publishers on the platform to signal their support of Substack’s current moderation policies. The list of signers included Matt Taibbi, Free Press founders Bari Weiss and Nellie Bowles, and the biologist Richard Dawkins.
Substack also gave Fast Company a statement echoing what it told The Atlantic: “Substack is a platform that is built on freedom of expression, and helping writers publish what they want to write. Some of that writing is going to be objectionable or offensive. Substack has a content moderation policy that protects against extremes—like incitements to violence—but we do not subjectively censor writers outside of those policies.”
Critics contend, though, that the platform has been loath to give its stated policies any teeth. That’s despite Substack’s content-moderation standards sounding very similar to the hate-speech policies set by other platforms:
We do not allow hate, defined as publishing content or funding initiatives that call for violence, exclusion, or segregation based on protected classes. This does include serious attacks on people based on race, ethnicity, national origin, religion, sex, gender, sexual orientation, age, disability, or medical condition. It does not include attacks on ideas, ideologies, organizations, or individuals for other reasons, even if those attacks are cruel or unfair.
On this point, Substackers Against Nazis again echo Katz’s Atlantic piece, asking two questions they say are intertwined: What sort of community is Substack actively hoping to cultivate? And how long does it believe writers will continue staking their reputations to and sharing revenue with a company that won’t classify swastikas as hate speech?
Substack’s creators are on record stating, unambiguously, that they do not want newsletter writers to be sucky hateful racists. But the problem is, they just aren’t sure it’s their job to stop them. The clearest example of the creators grappling with this conundrum occurred earlier this year, when CEO Chris Best went on Verge editor-in-chief Nilay Patel’s podcast.
Patel asked what Substack would do if a newsletter called on America to kick out “brown people” like him. It took Best an uncomfortable length of time to respond; and even then, he didn’t exactly answer Patel’s question (“we caught some heat” for that, Substack said). Best objected that it was a “gotcha” question, but Patel noted how making such decisions seems to be very much “the thing that you have to do” in order to create an online publishing platform.
Later, Best’s cofounder Hamish McKenzie tried to walk back that objection: “We wish that interview had gone better and that Chris had more clearly represented our position in that moment . . . Just in case anyone is ever in any doubt: We don’t like or condone bigotry in any form.”
But since McKenzie’s statement, Substack hasn’t laid out clear new steps for moderating bigoted speech or other harmful conduct. As Substackers Against Nazis sees it, the platform may not “like” or “condone” hate speech, but Substack does appear to be getting richer off it. The group quotes a colleague, Casey Newton of the popular tech newsletter Platformer, to make the point plainly: “The correct number of newsletters using Nazi symbols that you host and profit from on your platform is zero.”
Their letter ends by saying that if more serious efforts aren’t taken to de-platform and de-monetize content that appears to violate Substack’s terms of service, they’re ready to follow others who have taken their subscribers elsewhere in the past couple of years.