More than 200 Substack authors asked the platform to explain why it’s “platforming and monetizing Nazis,” and now they have an answer straight from co-founder Hamish McKenzie:

I just want to make it clear that we don’t like Nazis either—we wish no-one held those views. But some people do hold those and other extreme views. Given that, we don’t think that censorship (including through demonetizing publications) makes the problem go away—in fact, it makes it worse.

While McKenzie offers no evidence to back these ideas, the statement tracks with the company’s previous hands-off approach to moderation. In April, Substack CEO Chris Best appeared on the Decoder podcast and refused to answer moderation questions. “We’re not going to get into specific ‘would you or won’t you’ content moderation questions” over the issue of overt racism being published on the platform, Best said. McKenzie followed up later with a statement similar to today’s, saying “we don’t like or condone bigotry in any form.”

  • user91@lemmy.world · 1 year ago

    If all the content on those instances were AI-generated, then your hot take could be taken seriously. We all know it’s not.

    • Sanyanov@lemmy.world · 1 year ago · edited

      I’m talking specifically about instances with strong rules, either prohibiting any child imagery or allowing only drawings (which covers just about any anti-contact place). Both types are heavily defederated from, and barely anyone distinguishes between them and literal child porn instances (which should not just be defederated, but seized by authorities, with their admins brought to justice).

      I’ve updated the third bullet point in accordance with your comment; thank you.