this post was submitted on 09 Jan 2025
30 points (96.9% liked)

Selfhosted


Now that we know AI bots will ignore robots.txt and churn residential IP addresses to scrape websites, does anyone know of a method to block them that doesn't entail handing over your website to Cloudflare?

top 28 comments
[–] [email protected] 3 points 3 hours ago (1 children)

It's not AI-specific, but take a look at nG-firewall; it blocks most known unwanted stuff and gets regular updates:

https://perishablepress.com/ng-firewall/

[–] [email protected] 1 points 3 hours ago

Will check this out. Thanks!

[–] [email protected] 9 points 5 hours ago (2 children)

I am currently watching several malicious crawlers be stuck in a 404 hole I created. Check it out yourself at https://drkt.eu/asdfasd

I respond to all 404s with a 200 and serve that page, which is full of juicy bot targets. A lot of bots can't get out of it, and I'm hoping that the drive-by bots looking for login pages mark it as a hit (because it responded with 200 instead of 404), so a real human has to go check and waste their time.
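
For anyone wanting to replicate the idea, a minimal nginx sketch of such a 404-to-200 trap might look like the following (the paths and filenames are illustrative, not the commenter's actual setup):

# Any URI that would 404 is rewritten to a 200 response serving a
# static honeypot page full of links and fake login forms.
error_page 404 =200 /trap.html;

location = /trap.html {
    internal;                  # only reachable via the error_page redirect
    root /var/www/honeypot;    # honeypot page lives at /var/www/honeypot/trap.html
}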

[–] [email protected] 2 points 2 hours ago

This is pretty slick, but doesn't this just mean the bots hammer your server in an endless loop? How much processing do you do on those forms, for example?

[–] [email protected] 2 points 2 hours ago

That's pretty neat. Thanks!

[–] [email protected] 13 points 9 hours ago (2 children)

If you're running nginx, this is what I'm using:

if ($http_user_agent ~* "SemrushBot|Semrush|AhrefsBot|MJ12bot|YandexBot|YandexImages|MegaIndex.ru|BLEXbot|BLEXBot|ZoominfoBot|YaK|VelenPublicWebCrawler|SentiBot|Vagabondo|SEOkicks|SEOkicks-Robot|mtbot/1.1.0i|SeznamBot|DotBot|Cliqzbot|coccocbot|python|Scrap|SiteCheck-sitecrawl|MauiBot|Java|GumGum|Clickagy|AspiegelBot|Yandex|TkBot|CCBot|Qwantify|MBCrawler|serpstatbot|AwarioSmartBot|Semantici|ScholarBot|proximic|MojeekBot|GrapeshotCrawler|IAScrawler|linkdexbot|contxbot|PlurkBot|PaperLiBot|BomboraBot|Leikibot|weborama-fetcher|NTENTbot|Screaming Frog SEO Spider|admantx-usaspb|Eyeotabot|VoluumDSP-content-bot|SirdataBot|adbeat_bot|TTD-Content|admantx|Nimbostratus-Bot|Mail.RU_Bot|Quantcastboti|Onespot-ScraperBot|Taboolabot|Baidu|Jobboerse|VoilaBot|Sogou|Jyxobot|Exabot|ZGrab|Proximi|Sosospider|Accoona|aiHitBot|Genieo|BecomeBot|ConveraCrawler|NerdyBot|OutclicksBot|findlinks|JikeSpider|Gigabot|CatchBot|Huaweisymantecspider|Offline Explorer|SiteSnagger|TeleportPro|WebCopier|WebReaper|WebStripper|WebZIP|Xaldon_WebSpider|BackDoorBot|AITCSRoboti|Arachnophilia|BackRub|BlowFishi|perl|CherryPicker|CyberSpyder|EmailCollector|Foobot|GetURL|httplib|HTTrack|LinkScan|Openbot|Snooper|SuperBot|URLSpiderPro|MAZBot|EchoboxBot|SerendeputyBot|LivelapBot|linkfluence.com|TweetmemeBot|LinkisBot|CrowdTanglebot|ClaudeBot|Bytespider|ImagesiftBot|Barkrowler|DataForSeoBo|Amazonbot|facebookexternalhit|meta-externalagent|FriendlyCrawler|GoogleOther|PetalBot|Applebot") { return 403; }

That will block those that actually use recognisable user agents. I add any I find as I go on. It will catch a lot!

I also have a huuuuuge IP based block list (generated by adding all ranges returned from looking up the following AS numbers):

AS45102 (Alibaba Cloud)
AS136907 (Huawei SG)
AS132203 (Tencent)
AS32934 (Facebook)

These companies run, or have run, bots that impersonate real browser user agents.

There are various tools online to return prefix/ip lists for an autonomous system number.

I put both into a single file and include it in my website config files.
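
As a rough illustration, that include file can just be a list of nginx deny directives, one per prefix announced by those AS numbers. The prefixes below are documentation placeholders, not real ranges; a real list can be generated from a route-registry lookup such as the whois command shown in the comments:

# /etc/nginx/asn-blocklist.conf -- included from each server block.
# Generated with something like:
#   whois -h whois.radb.net -- '-i origin AS32934' | awk '/^route:/ {print "deny " $2 ";"}'
# (prefixes below are illustrative placeholders, not a complete list)
deny 203.0.113.0/24;     # AS32934 example prefix
deny 198.51.100.0/24;    # AS45102 example prefix
deny 192.0.2.0/24;       # AS132203 example prefix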

EDIT: Just to add, keeping on top of this is a full time job!

[–] [email protected] 1 points 3 hours ago

See my other comment; nG-firewall does exactly this and more.

https://perishablepress.com/ng-firewall/

[–] [email protected] 4 points 6 hours ago (1 children)

Thank you for the detailed reply.

keeping on top of this is a full time job!

I guess that's why I'm interested in a tooling based solution. My selfhosting is small-fry junk, but a lot of others like me are hosting entire fedi communities or larger websites.

[–] [email protected] 4 points 6 hours ago (1 children)

Yeah, I probably should look to see if there are any good plugins that do this on a community-submission basis. Because yes, it's a pain to keep up with whatever trick they're doing next.

And unlike web crawlers that generally check a URL here and there, AI bots absolutely rip through your sites like something rabid.

[–] [email protected] 2 points 6 hours ago (1 children)

AI bots absolutely rip through your sites like something rabid.

SemrushBot is the most rabid in my experience. It just will not take "fuck off" as an answer.

That looks pretty much like how I'm doing it, also as an include for each virtual host. The only difference is I don't even bother with a 403. I just use Nginx's 444 "response" to immediately close the connection.
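
For reference, the 444 variant is the same user-agent check as the config earlier in the thread, just with a different return code (444 is nginx's non-standard "close the connection without sending a response"); the UA list here is trimmed for brevity:

if ($http_user_agent ~* "SemrushBot|AhrefsBot|Bytespider|ClaudeBot|Amazonbot") {
    return 444;    # close the connection immediately; no status line or headers are sent
}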

Are you doing the IP blocks also in Nginx or lower at the firewall level? Currently I'm doing it at firewall level since many of those will also attempt SSH brute forces (good luck since I only use keys, but still....)

[–] [email protected] 3 points 6 hours ago

My mbin instance is behind Cloudflare, so I filter the AS numbers there; they don't even reach my server.

On the sites that aren't behind Cloudflare, yep, it's at the nginx level. I did consider the firewall level (maybe a dedicated chain just for this), but since I was already blocking at the nginx level I just did it there for now. It keeps them off the content, but yes, it does tell them there's a website there to leech from if they change tactics.

You need to block the whole ASN too. The ones using Chrome/Firefox UAs change IP every 5 minutes, pulling a random one from their huge pools.

[–] [email protected] 7 points 11 hours ago

Maybe CrowdSec could add a blocklist for LLM scrapers.

https://app.crowdsec.net/blocklists/search?page=1

[–] [email protected] 8 points 12 hours ago (1 children)

I run [email protected], and bypassing Cloudflare, paywalls, anti-bot filters, etc. is way easier than anyone thinks.

There is no escape from web scrapers. The best you can do is poison your images and obfuscate the page source.

[–] [email protected] 2 points 6 hours ago

In that case I'm interested in tools to automate doing that.

[–] [email protected] 6 points 11 hours ago* (last edited 1 hour ago) (1 children)

The only way I can think of is blacklisting everything by default, directing visitors to a proper, challenging captcha (which can be self-hosted), and temporarily whitelisting proven-human IPs.

When you try to "enumerate badness" and block all AI user agents and IP ranges, you'll always let some new ones through, and you'll never be done adding to the list.

Only allow proven humans.


A captcha will inconvenience the users. If you just want to make it worse for the crawlers, let them spend compute resources through something like https://altcha.org/ (which would still allow them to crawl your site, but make DDoSing very expensive) or AI honeypots.
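
A rough nginx sketch of the "deny by default, let verified humans through" idea, using the auth_request module; the challenge service on 127.0.0.1:8081 is hypothetical and stands in for whatever self-hosted CAPTCHA or proof-of-work verifier (an Altcha-backed one, for example) you actually run:

location / {
    auth_request /_challenge_check;    # subrequest must return 2xx for the request to proceed
    error_page 401 403 = @challenge;   # anyone not yet verified gets the challenge page
    try_files $uri $uri/ =404;
}

location = /_challenge_check {
    internal;
    proxy_pass http://127.0.0.1:8081/check;   # returns 204 if this visitor is already whitelisted
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-IP $remote_addr;
}

location @challenge {
    # hand the original request to the challenge service, which serves the
    # CAPTCHA / proof-of-work page and whitelists the visitor once it is solved
    proxy_set_header X-Original-URI $request_uri;
    proxy_pass http://127.0.0.1:8081;
}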

[–] [email protected] 4 points 6 hours ago* (last edited 6 hours ago)

I hadn't heard of that before, thanks for the link.

I haven't read through the docs yet... But PoW makes me wonder what the work is and if it's cryptocurrency related.

Edit: Found it: https://altcha.org/docs/proof-of-work/

[–] [email protected] 6 points 12 hours ago (1 children)

Perhaps feed them convincing fake data so they don't realize they've been IP-banned/user-agent filtered.
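
One way to sketch that in nginx is to swap the document root based on user agent, so known scrapers silently get a pre-generated decoy site with normal 200 responses instead of an error (the paths and UA list here are illustrative):

map $http_user_agent $site_root {
    default                                      /var/www/real-site;
    "~*(Bytespider|ClaudeBot|Amazonbot|GPTBot)"  /var/www/decoy;    # junk / Markov-generated mirror
}

server {
    listen 80;
    root $site_root;    # scrapers never see an error, just plausible-looking fake content
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}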

[–] [email protected] 5 points 6 hours ago

A commenter in the Hacker News post created this: https://marcusb.org/hacks/quixotic.html

I'm interested, but it seems like an easy way for bots to exhaust your own server resources before they give up crawling.

[–] [email protected] 6 points 13 hours ago (2 children)

If I'm reading your link right, they are using user agents; granted, there are a lot of them. Maybe you could whitelist user agents you approve of? Or one of the commenters had a list that you could block. Nginx would be able to handle that.

[–] [email protected] 7 points 13 hours ago

They just fake user agents if you block them.

[–] [email protected] 1 points 13 hours ago (1 children)

Thank you for the reply, but at least one commenter claims they'll impersonate Chrome UAs.

[–] [email protected] 10 points 12 hours ago* (last edited 12 hours ago) (1 children)

You can read more here:

If you try to rate-limit them, they'll just switch to other IPs all the time. If you try to block them by User Agent string, they'll just switch to a non-bot UA string (no, really). This is literally a DDoS on the entire internet.

https://pod.geraspora.de/posts/17342163

[–] [email protected] 2 points 10 hours ago (1 children)

Except it's not denying service, so it's just a D.

[–] [email protected] 6 points 6 hours ago

In the Hacker News comments for that Geraspora link, people discussed websites shutting down due to hosting costs, which may be attributable in part to the overly aggressive crawling. So maybe it's just a different form of DDoS than we're used to.

[–] [email protected] 4 points 13 hours ago (2 children)

The only way I can think of is to require users to authenticate themselves, but that isn't much of a hurdle.

To get into the details of it, what do you define as an AI bot? Are you worried about scrapers grabbing the contents of your website? What are the activities of an "AI bot"? Are you worried about AI bots registering and using your platform?

The real answer is that not even Cloudflare will fully defend you from this. If anything, Cloudflare is just making sure it gets paid for the access AI scrapers have to your website. As someone who has worked around bot protections (albeit in a different context than web scraping), it's a game of cat and mouse. If you, or some company you hire, are not actively working against automated access, you lose, because the other side is.

Just think about your point that they're using residential IP addresses. How do they get these addresses? They provide browser add-ons/extensions that offer some service (generally free VPNs) in exchange for access to your PC, and therefore your internet connection, under the contract you agree to. The same trick can be used by any add-on: if it has permission to read any website, it can scrape those websites through legitimate users for whatever purpose it wants. The recent exposure of the Honey scam highlights this; it's very easy to get users to install add-ons by telling them they might save a small amount of money (or earn money for other programs). There will always be users compromised by add-ons/extensions, or even just viruses, that can extract the data you're trying to protect.

[–] [email protected] 2 points 2 hours ago (1 children)

Just think of your point that they are using residential IP addresses. How do they get these addresses?

You can ping all of the IPv4 addresses in under an hour. If all you're looking for is publicly available words written by people, you only have to poke port 80, and suddenly you have practically every small self-hosted website out there.

[–] [email protected] 1 points 12 minutes ago* (last edited 11 minutes ago)

When I say residential IP addresses, I mostly mean proxies using residential IPs, which allow scrapers to mask themselves as organic traffic.

Edit: your point stands that there are a lot of services without these protections in place, but a lot of services do protect against scraping.

[–] [email protected] 1 points 6 hours ago

Thank you for the detailed response. It's disheartening to consider the traffic is coming from 'real' browsers/IPs, but that actually makes a lot of sense.

I'm coming at this from the angle of AI bots ingesting a website over and over to obsessively look for new content.

My understanding is there are two reasons to try blocking this: to protect bandwidth from aggressive crawling, or to protect the page contents from AI ingestion. I think the former is doable and the latter is an unwinnable task. My personal reason is that I'm an AI curmudgeon: I'd rather spend CPU resources blocking bots than serving them any content.