Building on an anti-spam cybersecurity tactic known as tarpitting, he created Nepenthes, malicious software named after a carnivorous plant that will “eat just about anything that finds its way inside.”

Aaron clearly warns users that Nepenthes is aggressive malware. It’s not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an “infinite maze” of static files with no exit links, where they “get stuck” and “thrash around” for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That’s likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.
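
Nepenthes itself is a separate piece of software, but the core trick described above (an endless maze of pages whose only links lead deeper into the maze, each stuffed with machine-generated babble) is simple enough to sketch. The snippet below is a minimal illustration, not Aaron’s code; the word list, link count, and response delay are arbitrary placeholders.

```python
# Toy tarpit: every URL under this server returns deterministic gibberish plus
# links that only point further into the maze. Not Nepenthes, just the idea.
import hashlib
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = "the of and model data crawler page link lorem ipsum".split()

def babble(seed: str, n: int = 200) -> str:
    # Stand-in for real Markov babble: deterministic, so the same URL always
    # serves the same junk and the maze looks like static files.
    rng = random.Random(hashlib.sha256(seed.encode()).digest())
    return " ".join(rng.choice(WORDS) for _ in range(n))

class Tarpit(BaseHTTPRequestHandler):
    def do_GET(self):
        rng = random.Random(hashlib.sha256(self.path.encode()).digest())
        base = self.path.rstrip("/")
        links = " ".join(
            f'<a href="{base}/{rng.getrandbits(32):08x}.html">more</a>'
            for _ in range(10)
        )
        body = f"<html><body><p>{babble(self.path)}</p>{links}</body></html>"
        time.sleep(2)  # drip-feed responses to waste the crawler's time
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Tarpit).serve_forever()
```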

  • _cryptagion@lemmy.dbzer0.com · +6 / -33 · 4 hours ago

    So instead of the AI wasting your resources and money by ignoring your robots.txt, you’re going to waste your own resources and money by inviting them to increase their load on your server, but make it permanent and nonstop. Brilliant. Hey, even better, you should host your site on something that charges you based on usage, that’ll really show the AI makers who is boss. 🤣

    • Appoxo@lemmy.dbzer0.com · +6 · 1 hour ago

      It’s not like you can’t load balance requests for the malicious subdirectories onto non-prod hardware. It can even be decommissioned hardware.
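
      In practice that routing would usually be a one-line rule in a reverse proxy such as nginx or HAProxy rather than application code; purely to illustrate the idea (the hostnames and ports below are made up), a naive pass-through router might look like:

      ```python
      # Illustration only: send anything under the trap prefix to a spare or
      # decommissioned box so the tarpit never competes with production.
      from http.server import BaseHTTPRequestHandler, HTTPServer
      from urllib.request import urlopen

      TRAP_PREFIX = "/maze/"
      TRAP_BACKEND = "http://decommissioned-box:8080"  # hypothetical old hardware
      REAL_BACKEND = "http://prod-app:8000"            # hypothetical production app

      class Router(BaseHTTPRequestHandler):
          def do_GET(self):
              backend = TRAP_BACKEND if self.path.startswith(TRAP_PREFIX) else REAL_BACKEND
              with urlopen(backend + self.path) as upstream:  # naive, GET-only proxy
                  body = upstream.read()
              self.send_response(200)
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          HTTPServer(("0.0.0.0", 80), Router).serve_forever()
      ```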

      • _cryptagion@lemmy.dbzer0.com · +1 / -2 · 1 hour ago

        How many hobby website admins have load balancing for their small sites? How many have decommissioned hardware? Because if you find me a corporation willing to accept the liability that doing something like this could open them up to, I’ll pay you a million dollars.

    • cley_faye@lemmy.world · +25 · 2 hours ago

      It’s already permanent and nonstop. They’re known to ignore robots.txt, and to remove their user agent string once they’re detected.

      And the goal is not only to prevent resource abuse, but to break a predatory model.

      But feel free to continue gracefully doing nothing while others take action; it’s bound to help eventually.

      • _cryptagion@lemmy.dbzer0.com · +1 / -12 · 2 hours ago

        Hey, you don’t need to convince me, you’ve clearly already committed to bravely sacrificing your own time and money in this valiant fight. Go get ‘em, tiger! I look forward to the articles about AI being stopped coming out any day now.

    • flying_sheep@lemmy.ml · +23 · 3 hours ago

      There are different kinds of AI scraper defenses.

      This one is an active strategy. No shit people know that this costs them resources. The point is that they want to punish the owners of bad-behaved scrapers.

      There is also another kind which just blocks anything that tries to follow an invisible link that goes to a resource forbidden by robots.txt
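
      As a rough sketch of that passive variant (paths and markup here are illustrative, not taken from any particular project): robots.txt disallows a trap path, every page carries an invisible link to it, and any client that follows the link anyway gets its IP banned.

      ```python
      # Honeypot-link sketch: polite crawlers never request /trap/ because
      # robots.txt forbids it; anything that does gets banned from then on.
      from http.server import BaseHTTPRequestHandler, HTTPServer

      BANNED = set()
      ROBOTS = "User-agent: *\nDisallow: /trap/\n"
      PAGE = (
          "<html><body><h1>Hello</h1>"
          '<a href="/trap/" style="display:none">do not follow</a>'
          "</body></html>"
      )

      class Honeypot(BaseHTTPRequestHandler):
          def do_GET(self):
              ip = self.client_address[0]
              if self.path.startswith("/trap/"):
                  BANNED.add(ip)  # only a robots.txt-ignoring crawler ends up here
              if ip in BANNED:
                  self.send_error(403, "robots.txt violation")
                  return
              body = ROBOTS if self.path == "/robots.txt" else PAGE
              ctype = "text/plain" if self.path == "/robots.txt" else "text/html"
              self.send_response(200)
              self.send_header("Content-Type", ctype)
              self.end_headers()
              self.wfile.write(body.encode())

      if __name__ == "__main__":
          HTTPServer(("0.0.0.0", 8080), Honeypot).serve_forever()
      ```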

      • _cryptagion@lemmy.dbzer0.com · +1 / -12 · 3 hours ago

        One or two people using this isn’t going to punish anything, or make enough of a difference to poison the AI. That’s the same claim all these anti-AI projects for sites and images make, and they forget that, like a vaccine, you need the majority of sites using your method for it to be effective. And the majority of sysadmins are not going to install what’s basically ICE from Cyberpunk on a production server.

        Once again, it’s lofty claims from the anti-AI crowd, and once again it’s much ado about nothing. But I’m sure that won’t stop people from believing that they’re making a difference by costing themselves money out of spite. 😂

        • theparadox@lemmy.world · +4 · edited · 1 hour ago

          “The only AI company that responded to Ars’ request to comment was OpenAI, whose spokesperson confirmed that OpenAI is already working on a way to fight tarpitting.”

          Ah yes. It’s extremely common for one of the top companies in an industry to spitefully expend resources fighting the irrelevant efforts of…

          “One or two people”

          Please, continue to grace us with your unbiased wisdom. Clearly you’ve read the article and aren’t just trying to simp for AI or start flame wars like a petulant child.

          • _cryptagion@lemmy.dbzer0.com · +1 / -1 · 1 hour ago

            Well, luckily for them, it’s a pretty simple fix. Congrats on being a part of making them jot down a note to prevent tarpitting when they get around to it. You’ve saved the internet!

            And stop pretending you’re unbiased either. We both have our preconceived notions, and you’re no more likely to change yours than I am. In fact, given the hysterical hyperventilating anti-AI “activists” work themselves into, we both know you’re never going to change your mind on AI, and as such you’ll glom onto any small action you think is gonna stick it to the man, whether or not that action has any practical effect on the push for AI.

    • LandedGentry@lemmy.zip · +7 · edited · 3 hours ago

      The point is that they are being punished too, and will hopefully stop ignoring robots.txt as a result. If your model keeps hitting these things over and over again, you’re going to have to change your behavior.

      • _cryptagion@lemmy.dbzer0.com · +3 / -7 · 3 hours ago

        One or two sysadmins using this isn’t going to be noticeable, and even if it were, the solution would be a one-line edit adding a depth limit to the links a crawler follows (sketched below). It wouldn’t even take thirty seconds to change your algorithm to completely defeat this.

        Not to mention, OpenAI or whatever company gets caught in one of these could sue the site. They might not win, but how many people running hobby sites who are stupid enough to do this are going to have thousands of dollars on hand to fight a lawsuit from a company worth billions with a whole team of lawyers? You gonna start a GoFundMe for them or something?
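
        For what it’s worth, the depth-limit counter-measure mentioned above is easy to picture. A crude sketch of a crawler that simply refuses to descend more than a few links deep into any one site (the limit and the link parsing are chosen arbitrarily for illustration):

        ```python
        # Crawler-side depth cap: an infinite maze of unique URLs defeats a
        # "seen" set, but a per-site depth limit abandons it after a few hops.
        import re
        from collections import deque
        from urllib.parse import urljoin, urlparse
        from urllib.request import urlopen

        MAX_DEPTH = 5  # arbitrary illustrative limit

        def crawl(start_url: str) -> None:
            site = urlparse(start_url).netloc
            queue = deque([(start_url, 0)])
            seen = {start_url}
            while queue:
                url, depth = queue.popleft()
                if depth > MAX_DEPTH:
                    continue  # the "fix": stop following links past this depth
                try:
                    html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
                except OSError:
                    continue
                for href in re.findall(r'href="([^"]+)"', html):
                    link = urljoin(url, href)
                    if urlparse(link).netloc == site and link not in seen:
                        seen.add(link)
                        queue.append((link, depth + 1))
        ```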

        • LandedGentry@lemmy.zip · +3 · edited · 2 hours ago

          Clearly more than one or two admins are interested in these options; I don’t know why you’re assuming that’s the whole list of interested people. Not everyone is as eager as you to roll over and take it without protest.

          I’d also like to see OpenAI try and sue Admins in other countries over this. That’d be hilarious.

          • _cryptagion@lemmy.dbzer0.com · +1 / -3 · 2 hours ago

            Hey, you keep fighting the good fight, you’ve got them on the ropes! You and all your many, many friends!

                • LandedGentry@lemmy.zip · +2 · 1 hour ago

                  No you’re being a petulant, naysaying child. Leave us alone and go play with your duplos. Adults are talking.

                  • _cryptagion@lemmy.dbzer0.com · +1 / -2 · 59 minutes ago

                    Bigotry? From a lemmy user? Never seen it before!

                    If you don’t like what I’m saying, block me and move along. Or report my comments, if you think they’re offensive enough. If I’m breaking a rule or the mods don’t like what I have to say, maybe they’ll remove them, or even ban me from the comm! That’s the limit of your options for getting rid of me though.

    • ubergeek@lemmy.today · +1 · edited · 3 hours ago

      Serving a pipe from ChatGPT into an AI scraping your site uses very little in the way of server resources.

      • _cryptagion@lemmy.dbzer0.com · +1 / -3 · 3 hours ago
        3 hours ago

        If you’re piping ChatGPT into AI scrapers, you’re paying ChatGPT for the privilege. So to defeat the AI… you’re joining the AI. It all sounds like the plot of a bad sci-fi movie.

        • ubergeek@lemmy.today · +1 · 3 hours ago

          Nah, you just scrape chatgpt.

          I don’t pay anything right now for their chat app, so I’d just integrate with that.

          Not very hard to do, tbh, with curl or a library like libcurl.
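
          To be concrete about the general shape of that idea (and only the shape): fetch generated filler from whatever text source you actually have access to and hand it to the trapped scraper. The endpoint and JSON payload below are placeholders, not ChatGPT’s actual chat interface, which is not a documented public API.

          ```python
          # Placeholder for "pipe an LLM at the scraper": pull filler text from a
          # generation endpoint and serve that instead of real content.
          import json
          from urllib.request import Request, urlopen

          LLM_ENDPOINT = "http://localhost:9000/generate"  # hypothetical local service

          def filler_text(prompt: str) -> str:
              req = Request(
                  LLM_ENDPOINT,
                  data=json.dumps({"prompt": prompt}).encode(),
                  headers={"Content-Type": "application/json"},
              )
              with urlopen(req, timeout=30) as resp:
                  return json.load(resp).get("text", "")
          ```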