this post was submitted on 29 Jan 2025
714 points (96.7% liked)

Technology


Building on an anti-spam cybersecurity tactic known as tarpitting, he created Nepenthes, malicious software named after a carnivorous plant that will "eat just about anything that finds its way inside."

Aaron clearly warns users that Nepenthes is aggressive malware. It's not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an "infinite maze" of static files with no exit links, where they "get stuck" and "thrash around" for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That's likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.
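The "Markov babble" idea is simple to sketch: train a word-level Markov chain on any corpus, then emit text that is statistically plausible but meaningless. A minimal illustration of the technique (the corpus, chain order, and function names here are arbitrary choices for demonstration, not Nepenthes' actual implementation):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def babble(chain, length=30, seed=0):
    """Walk the chain to emit plausible-looking but meaningless text."""
    rng = random.Random(seed)
    prefix = rng.choice(list(chain))
    out = list(prefix)
    for _ in range(length - len(prefix)):
        followers = chain.get(tuple(out[-len(prefix):]))
        if not followers:          # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the crawler follows the link and the link leads the crawler back"
print(babble(build_chain(corpus), length=12))
```

Because every emitted word really did follow its predecessors somewhere in the corpus, the output looks locally coherent to a statistical model while carrying no information worth training on.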

top 50 comments
[–] bizarroland@fedia.io 348 points 6 months ago (3 children)

They're framing it as "AI haters" instead of what it actually is, which is people who do not like that robots have been programmed to completely ignore the robots.txt files on a website.

No AI system in the world would get stuck in this if it simply obeyed the robots.txt files.
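The check being skipped here is trivial; Python's standard library even ships a robots.txt parser. A sketch (the user-agent string, rules, and URLs are placeholders):

```python
from urllib.robotparser import RobotFileParser

# A polite crawler consults robots.txt before fetching anything.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /nepenthes/",   # the tarpit entrance, declared off-limits
])

print(rp.can_fetch("MyCrawler", "https://example.com/articles/1"))   # allowed
print(rp.can_fetch("MyCrawler", "https://example.com/nepenthes/a"))  # disallowed
```

A crawler that gates every request on `can_fetch` never sees the maze at all.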

[–] deur@feddit.nl 165 points 6 months ago

The disingenuous phrasing is like "pro life" instead of what it is, "anti-choice"

[–] Semi_Hemi_Demigod@lemmy.world 24 points 6 months ago

Waiting for Apache or Nginx to import a robots.txt and send crawlers down a rabbit hole instead of trusting them.

[–] AwesomeLowlander@sh.itjust.works 11 points 6 months ago

The internet being what it is, I'd be more surprised if there wasn't already a website set up somewhere with a malicious robots.txt file to screw over ANY crawler regardless of provenance.

[–] fuckwit_mcbumcrumble@lemmy.dbzer0.com 45 points 6 months ago (2 children)

AI crawlers and sending them down an "infinite maze" of static files with no exit links, where they "get stuck"

Maybe against bad crawlers. If you know what you're looking for and aren't just trying to grab anything and everything, this shouldn't be very effective. Any good web crawler has limits. This seems targeted at Facebook's apparently very dumb web crawler.

[–] magnus919@lemmy.brandyapple.com 9 points 6 months ago

Yeah I was just thinking... this is not at all how the tools work.

[–] micka190@lemmy.world 4 points 6 months ago* (last edited 6 months ago) (1 children)

Any good web crawler has limits.

Yeah. Like, literally just:

  • Keep track of which URLs you've been to
  • Avoid going back to the same URL
  • Set a soft limit, once you've hit it, start comparing the contents of the page with the previous one (to avoid things like dynamic URLs taking you to the same content)
  • Set a hard limit, once you hit it, leave the domain altogether

What kind of lazy-ass crawler doesn't even do that?
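The four rules above amount to a visited-set plus two counters. A sketch under those assumptions, with the fetch logic stubbed out as a hypothetical `fetch(url)` that returns the page text and its outbound links:

```python
import hashlib

def crawl(start_url, fetch, soft_limit=1000, hard_limit=5000):
    """Crawl one domain while honoring the limits described above.

    `fetch(url)` is assumed to return (page_text, list_of_links).
    """
    visited, seen_hashes = set(), set()
    queue = [start_url]
    pages = 0
    while queue and pages < hard_limit:   # hard limit: leave the domain
        url = queue.pop()
        if url in visited:                # never revisit a URL
            continue
        visited.add(url)
        text, links = fetch(url)
        pages += 1
        if pages > soft_limit:            # soft limit: start comparing content
            digest = hashlib.sha256(text.encode()).hexdigest()
            if digest in seen_hashes:     # dynamic URL, same content: drop its links
                continue
            seen_hashes.add(digest)
        queue.extend(links)
    return visited
```

Against a tarpit serving identical filler pages, the content-hash check starves the queue and the crawler escapes long before the hard limit.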

[–] cm0002@lemmy.world 34 points 6 months ago (3 children)

It might be initially, but they'll figure out a way around it soon enough.

Remember those articles about "poisoning" images? Didn't get very far on that either

[–] Traister101@lemmy.today 31 points 6 months ago (1 children)

The way to get around it is respecting robots.txt lol

[–] cm0002@lemmy.world 23 points 6 months ago

But that's not respecting the shareholders 😤

[–] EldritchFeminity@lemmy.blahaj.zone 12 points 6 months ago

This kind of stuff has always been an endless war of escalation, the same as any kind of security. There was a period of time where all it took to mess with Gen AI was artists uploading images of large circles or something with random tags to their social media accounts. People ended up with random bits of stop signs and stuff in their generated images for like a week. Now, artists are moving to sites that treat AI scrapers like malware attacks and degrading the quality of the images that they upload.

[–] ubergeek@lemmy.today 4 points 6 months ago

The poisoned images work very well. We just haven't hit the problem yet, because a) not many people are poisoning their images yet and b) training data sets were cut off at 2021, before poison pills were created.

But, the easy way to get around this is to respect web standards, like robots.txt

[–] rumba@lemmy.zip 5 points 6 months ago (2 children)

It's not. If it was, every search engine out there would be belly up at the first nested link.

Google/Bing just consume their own crawling traffic. You don't want to NOT show up in search queries right?

[–] pelespirit@sh.itjust.works 17 points 6 months ago (1 children)

It's unclear how much damage tarpits or other AI attacks can ultimately do. Last May, Laxmi Korada, Microsoft's director of partner technology, published a report detailing how leading AI companies were coping with poisoning, one of the earliest AI defense tactics deployed. He noted that all companies have developed poisoning countermeasures, while OpenAI "has been quite vigilant" and excels at detecting the "first signs of data poisoning attempts."

Despite these efforts, he concluded that data poisoning was "a serious threat to machine learning models." And in 2025, tarpitting represents a new threat, potentially increasing the costs of fresh data at a moment when AI companies are heavily investing and competing to innovate quickly while rarely turning significant profits.

"A link to a Nepenthes location from your site will flood out valid URLs within your site's domain name, making it unlikely the crawler will access real content," a Nepenthes explainer reads.
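The flooding effect is easy to picture: every generated page links to more generated pages, all derived deterministically from the URL itself, so the "site" appears endless while costing almost nothing to serve. A toy sketch of that idea (illustrative only, not Nepenthes' actual code):

```python
import hashlib

def maze_page(path):
    """Deterministically derive an endless page of links from its own URL."""
    digest = hashlib.sha256(path.encode()).hexdigest()
    # Each page links to three children that exist only when requested.
    links = [f"{path.rstrip('/')}/{digest[i:i + 8]}" for i in range(0, 24, 8)]
    body = "\n".join(f'<a href="{link}">{link}</a>' for link in links)
    return f"<html><body>{body}</body></html>"

print(maze_page("/maze"))
```

Hashing the path means the same URL always renders the same page (so the maze looks like a real, stable site), while every child URL fans out into three more, with no state stored server-side.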

[–] rumba@lemmy.zip 8 points 6 months ago

Same problem with tarpitting. The search engines are doing the crawling for their own companies; you don't want to poison your own search results.

Conceptually, they'll stop doing search crawls altogether, and if you expect to get any traffic it'll come from AI crawls :/

[–] ubergeek@lemmy.today 4 points 6 months ago (3 children)

You don’t want to NOT show up in search queries right?

At this point?

I am fully ok NOT being in search engines for any of my sites. Organic traffic has always been much more valuable than inorganic traffic.

[–] pHr34kY@lemmy.world 49 points 6 months ago* (last edited 6 months ago) (1 children)

I am so gonna deploy this. I want the crawlers to index the entire Mandelbrot set.

I'll train it with lyrics from Beck Hansen and Smash Mouth so that none of it makes sense.

[–] masterofn001@lemmy.ca 23 points 6 months ago (1 children)

This is the song that never ends.
It goes on and on my friends.

[–] Pollo_Jack@lemmy.world 9 points 6 months ago

Well the hits start coming and they don't start ending

[–] aesthelete@lemmy.world 49 points 6 months ago

Notice how it's "AI haters" and not "people trying to protect their IP" as it would be if it were say...China instead of AI companies stealing the IP.

[–] nullPointer@programming.dev 22 points 6 months ago* (last edited 6 months ago) (3 children)

why bother wasting resources with the infinite maze instead of just doing what the old-school .htaccess bot-traps do: ban any IP that hits the no-go zone defined in robots.txt?
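The classic bot-trap being described is simple: robots.txt disallows a path no human would ever visit, and any client that requests it anyway gets its IP banned. A minimal sketch of that logic (the trap path, function name, and status codes are illustrative, not any particular server's API):

```python
TRAP_PATH = "/secret-trap/"   # listed under Disallow: in robots.txt
banned_ips = set()

def handle_request(ip, path):
    """Ban any client that enters the trap; serve everyone else normally."""
    if ip in banned_ips:
        return 403
    if path.startswith(TRAP_PATH):
        banned_ips.add(ip)    # it ignored robots.txt, so ban it
        return 403
    return 200

print(handle_request("203.0.113.9", "/index.html"))     # 200
print(handle_request("203.0.113.9", "/secret-trap/x"))  # 403, now banned
print(handle_request("203.0.113.9", "/index.html"))     # 403
```

In practice the same effect is usually wired up with server rewrite rules or a log-watching ban tool rather than application code.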

[–] IllNess 55 points 6 months ago (1 children)

That's the reason for the maze. These companies have multiple IP addresses and bots that communicate with each other.

They can go through multiple entries in the robots.txt file. Once they learn they are banned, they go scrape the old-fashioned way with another IP address.

But if you create a maze, they just continually scrape useless data, rather than scraping data you don't want them to get.

[–] nullPointer@programming.dev 5 points 6 months ago (2 children)

if they are stupid and scrape serially, the AI can have one "thread" caught in the tar while other "threads" continue to steal your content.

with a ban they would have to keep track of what banned them to avoid hitting it again and getting yet another of their IP ranges banned.

[–] IllNess 17 points 6 months ago

Banning IP ranges isn't going to work. A lot of these companies rent out home IP addresses.

Also the point isn't just protecting content, it's data poisoning.

[–] partial_accumen@lemmy.world 7 points 6 months ago (1 children)

if they are stupid and scrape serially, the AI can have one “thread” caught in the tar while other “threads” continue to steal your content.

Why would it be only one thread stuck in the tarpit? If the tarpit maze has more than one choice (like a forked road) then the AI would have to spawn another thread to follow that path, yes? Then another thread would be spawned at the next fork in the road. Ad infinitum until the AI stops spawning threads or exhausts the resources of the web server (a DOS).

[–] rustyfish@lemmy.world 6 points 6 months ago

Oh I love this!

[–] Docus@lemmy.world 3 points 6 months ago (3 children)

Does it also trap search engine crawlers? That would be a problem

[–] independantiste@sh.itjust.works 33 points 6 months ago

The big search engine crawlers like Google's or Microsoft's should respect your robots.txt file. This trick affects those that don't honor the file and just scrape your website even if you told them not to.

[–] Soup@lemmy.world 17 points 6 months ago (1 children)

I imagine if those obey the robots.txt thing that it’s not a problem.

[–] draughtcyclist@lemmy.world 7 points 6 months ago

Don't make me tap the sign
