339
submitted 1 month ago by [email protected] to c/[email protected]
[-] [email protected] 73 points 1 month ago

Meanwhile, the POV bots should be getting:

(I have to set one up for my Fediverse stuff one of these days as well)

[-] [email protected] 38 points 1 month ago

I keep seeing this on serious sites and it makes me happy

[-] [email protected] 10 points 1 month ago

Such a weird thing that it essentially discriminates against Mozilla-based browsers, though; I'd expect bots to follow the most-used approach. So yeah, this does not make me happy... although the anime girl kinda does

[-] [email protected] 16 points 1 month ago

It doesn't discriminate against Mozilla-based browsers. It checks whether the User-Agent string contains "Mozilla".

For historical reasons, every browser (and every piece of software pretending to be a browser) has "Mozilla" in its User-Agent string.

This is a User-Agent string for Google Chrome on Windows 10:

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36
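The check described above can be sketched like this (a hypothetical illustration of the idea, not Anubis's actual code — any client whose User-Agent contains "Mozilla" is treated as "claims to be a browser" and gets the challenge):

```python
def claims_to_be_browser(user_agent: str) -> bool:
    """Every mainstream browser's UA string starts with 'Mozilla/5.0'."""
    return "Mozilla" in user_agent

# The Chrome-on-Windows UA string quoted above:
chrome_ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
             "AppleWebKit/537.36 (KHTML, like Gecko) "
             "Chrome/137.0.0.0 Safari/537.36")

print(claims_to_be_browser(chrome_ua))     # True
print(claims_to_be_browser("curl/8.5.0"))  # False
```

So Firefox, Chrome, and Safari all match equally; a naive scraper that doesn't bother faking a browser UA doesn't.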
[-] [email protected] 4 points 1 month ago

What's the problem with Gecko browsers exactly? The only issue I have is disabling JShelter for new domains.

[-] [email protected] 4 points 1 month ago

Is it blocking you? I pretty much exclusively use Gecko at this point and don't have an issue yet.

[-] [email protected] 3 points 1 month ago

At first I was getting it for some proxy services and Fediverse services, and didn't think much of it because I figured it was just something small projects used instead of Cloudflare/Google. But yeah, now I've been seeing it on more "official" websites, and I'm happy about it after I took the time to read their GitHub page.

I especially love it since I don't have to cry over failing 30 "click the sidewalk" captchas in a row for daring to use a VPN + uBlock + Librewolf to look at a single page of search results. I can sit on my ass for 5 sec and breeze through, assured that I'm not a robot 🥹
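That "sit for 5 sec" step is a proof-of-work challenge run by your browser. A minimal sketch of the general technique (names and difficulty here are illustrative, not the project's real values): find a nonce whose SHA-256 hash of challenge + nonce starts with N hex zeroes. A human pays the cost once per visit; a mass scraper pays it on every request.

```python
import hashlib

def solve(challenge: str, difficulty: int = 4) -> int:
    """Brute-force a nonce so the hash starts with `difficulty` hex zeroes."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int = 4) -> bool:
    """Server side: one hash to check what took the client thousands."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve("example-challenge")
print(verify("example-challenge", nonce))  # True
```

Verification is a single hash, so the asymmetry is entirely against the client — which is exactly the point.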

[-] [email protected] 16 points 1 month ago
[-] [email protected] 25 points 1 month ago
[-] [email protected] 14 points 1 month ago

It was created by Xe Iaso in response to Amazon's web crawler overloading their Git server, as it did not respect the robots.txt exclusion protocol and would work around restrictions.

Jeff wouldn’t do that!
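For context, the robots.txt exclusion protocol the crawler ignored is just a plain-text file of voluntary rules; a minimal one that asks all crawlers to stay out entirely (paths here are illustrative) looks like:

```
User-agent: *
Disallow: /
```

Nothing enforces it — which is why tools like Anubis exist to make the request itself expensive instead of politely asking.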

[-] [email protected] 9 points 1 month ago

Even a Wikipedia page lmfao

[-] [email protected] 12 points 1 month ago* (last edited 1 month ago)

If humans can't view the page, neither can bots.

this post was submitted on 10 Jun 2025
339 points (98.6% liked)
