Building on an anti-spam cybersecurity tactic known as tarpitting, he created Nepenthes, malicious software named after a carnivorous plant that will “eat just about anything that finds its way inside.”

Aaron clearly warns users that Nepenthes is aggressive malware. It’s not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an “infinite maze” of static files with no exit links, where they “get stuck” and “thrash around” for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That’s likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.
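
A minimal sketch of the general tarpitting idea (not Nepenthes' actual code, just an illustration assuming a Python standard-library HTTP server): every path returns a deliberately slow, procedurally generated page of babble plus links that lead only deeper into the maze, so a crawler that keeps following them never runs out of pages to fetch.

```python
# Illustrative tarpit sketch, not Nepenthes itself: every URL returns a slow
# page of generated babble plus links that only point deeper into the maze.
import hashlib
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["nectar", "pitcher", "lure", "drift", "spiral", "murmur", "hollow", "glass"]

def babble(seed: str, n_words: int = 120) -> str:
    """Deterministic pseudo-babble derived from the request path."""
    rng = random.Random(hashlib.sha256(seed.encode()).hexdigest())
    return " ".join(rng.choice(WORDS) for _ in range(n_words))

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(2)  # respond slowly on purpose, to waste the crawler's time
        rng = random.Random(hashlib.sha256(self.path.encode()).hexdigest())
        # Links only point deeper into the maze; there is no way back out.
        links = "".join(
            f'<a href="{self.path.rstrip("/")}/{rng.randrange(10**8):08d}">more</a> '
            for _ in range(5)
        )
        body = f"<html><body><p>{babble(self.path)}</p>{links}</body></html>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *_):  # keep the request log quiet
        pass

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), TarpitHandler).serve_forever()
```

Nepenthes itself is more elaborate (its Markov-generated gibberish is built to look plausible enough to poison a model's training data), but the core mechanism is the same: endless links and deliberately slow responses.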

  • _cryptagion@lemmy.dbzer0.com · 6 points · 3 days ago

    So instead of the AI companies wasting your resources and money by ignoring your robots.txt, you’re going to waste your own resources and money by inviting them to increase the load on your server, permanently and nonstop. Brilliant. Hey, even better, you should host your site on something that charges by usage; that’ll really show the AI makers who’s boss. 🤣

    • cley_faye@lemmy.world · 30 points · 3 days ago

      It’s already permanent and nonstop. They’re known to ignore robots.txt and to drop their user agent string once they’re detected.

      And the goal isn’t only to prevent resource abuse, but to break a predatory model.

      But feel free to keep gracefully doing nothing while others take action; that’s bound to help eventually.

      • _cryptagion@lemmy.dbzer0.com · 1 point · 3 days ago

        Hey, you don’t need to convince me; you’ve clearly already committed to bravely sacrificing your own time and money in this valiant fight. Go get ‘em, tiger! I look forward to the articles about AI being stopped showing up any day now.

    • Appoxo@lemmy.dbzer0.com · 9 points · 3 days ago

      It’s not like you can’t load-balance requests for the malicious subdirectories onto non-prod hardware. It could even be decommissioned hardware.

      • _cryptagion@lemmy.dbzer0.com · 1 point · 3 days ago

        How many hobby website admins have load balancing for their small sites? How many have decommissioned hardware? Because if you find me a corporation willing to accept the liability doing something like this could open them up to, I’ll pay you a million dollars.

      • _cryptagion@lemmy.dbzer0.com · 3 points · 3 days ago

        One or two sysadmins using this isn’t going to be noticeable, and even if it were, the solution would be an inline edit adding a depth limit to how far links are followed (roughly the kind of guard sketched below). Editing your algorithm to completely defeat this wouldn’t even take thirty seconds.

        Not to mention, OpenAI or whatever company got caught in one of these could sue the site. They might not win, but how many people running hobby sites who are stupid enough to do this are going to have thousands of dollars on hand to fight a lawsuit from a company worth billions with a whole team of lawyers? You gonna start a GoFundMe for them or something?
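
        A depth cap really is a tiny change; here is a hypothetical sketch of that kind of guard in a made-up crawl loop (the names are just for illustration, not any real crawler’s code):

        ```python
        # Hypothetical crawl loop with a depth cap; fetch_links() stands in for
        # whatever link extraction a real crawler does.
        from collections import deque

        MAX_DEPTH = 10  # stop following link chains past this depth

        def crawl(start_url, fetch_links):
            seen = {start_url}
            queue = deque([(start_url, 0)])
            while queue:
                url, depth = queue.popleft()
                if depth >= MAX_DEPTH:
                    continue  # a tarpit's endless links never get queued past the cap
                for link in fetch_links(url):
                    if link not in seen:
                        seen.add(link)
                        queue.append((link, depth + 1))
            return seen
        ```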

    • ubergeek@lemmy.today · 2 points · 3 days ago (edited)

      Serving a pipe from ChatGPT into an AI scraping your site uses very little in the way of server resources.

      • _cryptagion@lemmy.dbzer0.com · 1 point · 3 days ago

        If you’re piping ChatGPT into AI scrapers, you’re paying ChatGPT for the privilege. So to defeat the AI… you’re joining the AI. It all sounds like the plot of a bad sci-fi movie.

        • ubergeek@lemmy.today · 2 points · 3 days ago

          Nah, you just scrape ChatGPT.

          I don’t pay right now for their chat app, so I’d just integrate with that.

          Not very hard to do, tbh, with curl or a library like libcurl.