Building on an anti-spam cybersecurity tactic known as tarpitting, he created Nepenthes, malicious software named after a carnivorous plant that will “eat just about anything that finds its way inside.”

Aaron clearly warns users that Nepenthes is aggressive malware. It’s not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an “infinite maze” of static files with no exit links, where they “get stuck” and “thrash around” for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That’s likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.
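The article's "Markov babble" refers to gibberish generated by a word-level Markov chain: text that looks statistically plausible but carries no meaning. A minimal sketch of the idea (the corpus, chain order, and function names are illustrative assumptions, not Nepenthes' actual implementation):

```python
# Minimal "Markov babble" sketch: build a first-order word chain from a
# corpus, then random-walk it to emit plausible-looking nonsense.
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, length=20, seed=None):
    """Random-walk the chain to produce `length` words of gibberish."""
    rng = random.Random(seed)
    word = rng.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:                  # dead end: restart anywhere
            word = rng.choice(list(chain))
        else:
            word = rng.choice(followers)
        out.append(word)
    return " ".join(out)
```

Because each transition is drawn from real text, the output passes shallow statistical checks while being worthless as training data.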

    • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 2 days ago

      AI crawlers and sending them down an “infinite maze” of static files with no exit links, where they “get stuck”

      Maybe against bad crawlers. If you know what you’re looking for and aren’t just trying to grab anything and everything, this shouldn’t be very effective. Any good web crawler has limits. This seems to be targeted at Facebook’s apparently very dumb web crawler.

      • micka190@lemmy.world · 3 hours ago

        Any good web crawler has limits.

        Yeah. Like, literally just:

        • Keep track of which URLs you’ve been to
        • Avoid going back to the same URL
        • Set a soft limit; once you’ve hit it, start comparing the contents of each page with the previous one (to avoid things like dynamic URLs taking you to the same content)
        • Set a hard limit; once you hit it, leave the domain altogether

        What kind of lazy-ass crawler doesn’t even do that?
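Those four rules fit in a few dozen lines. A minimal sketch in Python over a toy in-memory "web" (the limit values, the `fetch`/`extract_links` interfaces, and the hash-based content comparison are illustrative assumptions, not any real crawler's internals):

```python
# Sketch of the four limits above: visited-URL tracking, no revisits,
# a soft limit that triggers content deduplication, and a hard limit
# that abandons the domain entirely.
import hashlib
from collections import deque

SOFT_LIMIT = 50    # past this, start deduplicating by content hash
HARD_LIMIT = 200   # past this, leave the domain altogether

def crawl(start_url, fetch, extract_links):
    """fetch(url) -> page text; extract_links(text) -> iterable of URLs."""
    visited = set()                       # rule 1: track where we've been
    seen_hashes = set()
    queue = deque([start_url])
    pages = []
    while queue:
        if len(visited) >= HARD_LIMIT:    # rule 4: hard limit hit, bail out
            break
        url = queue.popleft()
        if url in visited:                # rule 2: never revisit a URL
            continue
        visited.add(url)
        text = fetch(url)
        digest = hashlib.sha256(text.encode()).hexdigest()
        # rule 3: past the soft limit, skip pages whose content we've
        # already seen (dynamic URLs serving identical pages)
        if len(visited) > SOFT_LIMIT and digest in seen_hashes:
            continue
        seen_hashes.add(digest)
        pages.append((url, text))
        queue.extend(extract_links(text))
    return pages
```

Against a tarpit like Nepenthes, which serves endless distinct URLs with distinct gibberish, only the hard limit actually terminates the crawl, which matches the observation in the reply below.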

        • skulblaka@sh.itjust.works · 2 hours ago

          The way I understand it, the hard limit to leave the domain is actually the only one of these rules that would trigger on Nepenthes. The tar pit keeps generating new linked pages full of trash.

    • cm0002@lemmy.world · 2 days ago

      It might be initially, but they’ll figure out a way around it soon enough.

      Remember those articles about “poisoning” images? Didn’t get very far on that either

      • ubergeek@lemmy.today · 3 hours ago

        The poisoned images work very well. We just haven’t hit the problem yet, because a) not many people are poisoning their images yet and b) training data sets were cut off at 2021, before poison pills were created.

        But, the easy way to get around this is to respect web standards, like robots.txt

      • EldritchFeminity@lemmy.blahaj.zone · 2 days ago

        This kind of stuff has always been an endless war of escalation, the same as any kind of security. There was a period of time where all it took to mess with Gen AI was artists uploading images of large circles or something with random tags to their social media accounts. People ended up with random bits of stop signs and stuff in their generated images for like a week. Now, artists are moving to sites that treat AI scrapers like malware attacks and degrading the quality of the images that they upload.

    • rumba@lemmy.zip · 2 days ago

      It’s not. If it was, every search engine out there would be belly up at the first nested link.

      Google/Bing just consume their own crawling traffic. You don’t want to NOT show up in search queries right?

      • ubergeek@lemmy.today · 3 hours ago

        You don’t want to NOT show up in search queries right?

        At this point?

        I am fully ok NOT being in search engines for any of my sites. Organic traffic has always been much more valuable than inorganic traffic.

        • rumba@lemmy.zip · 3 hours ago

          Your definition of organic traffic is off-standard. When people say organic, they generally mean non-paid, including returns on web search.

          The VAST majority of the web would have almost no traffic without web searches. It’s not like people flock to sites from talking about it around the water cooler.

          • ubergeek@lemmy.today · 2 hours ago

            Your definition of organic traffic is off-standard.

            Fair.

            The VAST majority of the web would have almost no traffic without web searches. It’s not like people flock to sites from talking about it around the water cooler.

            Which is a shame, tbh. We had far better content when people had to work to create good content that others wanted, and it got passed around.

            ie, in school, before search engines, we all knew about Whitehouse.com… We all knew the sites that had the info we wanted/needed at the time.

            In fact, I’d argue the downfall of the web as an actual useful tool came about once search engines automatically started indexing, rather than submitting site maps to a page like OpenDirectory to have your site cataloged, indexed, and sorted into appropriate categories by a human.

            Because once people started working on “gaming algos” rather than “Making super good content”, the internet just became the new “Malls” where you weren’t expected to learn, you were just expected to buy.

            • rumba@lemmy.zip · 2 hours ago

              I liked it back when link aggregators were the go-to for discovery. You could have sites that were real gems that were just tucked away.

              I think the indexing started out OK. Counting backlinks and using that as a ranking was pretty genius, right up until people realized they could game the system; then Google realized that artificially screwing with their own system was worth money, and they used ads to modify ranking.

              Using ads to modify discoverability was the death of the free internet.

        • rumba@lemmy.zip · 3 hours ago

          “Web Scrapers: Many web scrapers and bots do not respect robots.txt at all, as they are often designed to extract data regardless of the site’s crawling policies. This can include malicious bots or those used for data mining.”

      • pelespirit@sh.itjust.worksOP · 2 days ago

        It’s unclear how much damage tarpits or other AI attacks can ultimately do. Last May, Laxmi Korada, Microsoft’s director of partner technology, published a report detailing how leading AI companies were coping with poisoning, one of the earliest AI defense tactics deployed. He noted that all companies have developed poisoning countermeasures, while OpenAI “has been quite vigilant” and excels at detecting the “first signs of data poisoning attempts.”

        Despite these efforts, he concluded that data poisoning was “a serious threat to machine learning models.” And in 2025, tarpitting represents a new threat, potentially increasing the costs of fresh data at a moment when AI companies are heavily investing and competing to innovate quickly while rarely turning significant profits.

        “A link to a Nepenthes location from your site will flood out valid URLs within your site’s domain name, making it unlikely the crawler will access real content,” a Nepenthes explainer reads.

        • rumba@lemmy.zip · 2 days ago

          Same problem with tarpitting. The search engines are doing the crawling for their own companies’ AI, so you don’t want to poison your own search results.

          Conceptually, search crawls will stop altogether, and if you expect to get any traffic it’ll come from AI crawls :/

          • umami_wasabi@lemmy.ml · 2 days ago

            I think to use it defensively, you should put the path into robots.txt, so only crawlers that don’t follow the rules will be greeted with the maze. For a proper search engine crawler, that should be the standard behavior.
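That defensive setup is a couple of lines of robots.txt. A sketch, where the `/maze/` path is a hypothetical example rather than anything Nepenthes prescribes:

```
# Well-behaved crawlers read this and never enter the tarpit.
User-agent: *
Disallow: /maze/
```

Links to `/maze/` placed in the site's pages then only ever get followed by crawlers that ignore the Robots Exclusion Protocol, which are exactly the ones you want trapped.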

            • rumba@lemmy.zip · 2 days ago

              Spiders already detect link bombs and recursion bombs, and they’re capable of rendering the page out in memory to see what’s truly visible.

              It’s a great idea but it’s a really old trick and it’s already been covered.