this post was submitted on 29 Oct 2023

Hi, I'm building a personal website and I don't want it to be used to train AI. In my robots.txt file I blocked:

  • ChatGPT-User
  • GPTBot
  • Google-Extended
  • FacebookBot
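
In robots.txt form, those blocks look like this (one group per user-agent token):

User-agent: ChatGPT-User
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: FacebookBot
Disallow: /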

What bots should I also add? Are there any other ways to block AI bots?

IMPORTANT: I don't want to block search engine crawlers, only bots that are used to train AI.

all 33 comments
[–] [email protected] 58 points 1 year ago

FYI, bots and crawlers can simply ignore your robots.txt entirely. This is probably common knowledge around these parts, but I've run into clients at work who thought it was a law or something.

I do like the idea of intentionally polluting the data robots will see, as suggested by this comment. There's no reliable way to block them without also blocking humans, so making the crawled data as useless as possible is a good option.

Just be careful not to also confuse screen readers with that tactic, so that accessibility is maintained for humans. It should be easy enough if you keep your aria attributes filled out appropriately, I imagine.
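
For instance, a trap link can be hidden from sighted users and screen readers alike; a sketch (the /trap/ path is hypothetical):

<a href="/trap/links" style="display:none" aria-hidden="true" tabindex="-1">ignore this</a>

Some crawlers may skip display:none content, though, so positioning the link off-screen is a common alternative.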

[–] [email protected] 38 points 1 year ago (4 children)

Pollute your site with nonsense that’s invisible to users. Things like pages that are linked to with invisible links that are just walls and walls of random text.

[–] [email protected] 14 points 1 year ago (1 children)

Good idea. I will make an invisible link to "traps for bots". One trap will show random text, one will be a redirect loop, and one will be a random link generator that links back to itself. I will also make every response randomly slow, for example 0.5 to 1.5 seconds.

The good thing is that I can also block search engine crawlers from accessing just the traps.
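
A minimal sketch of those three traps, assuming Flask (the /trap/ paths and the gibberish word list are made up for illustration):

import random
import string
import time

from flask import Flask, redirect

app = Flask(__name__)

# Pool of gibberish "words" to feed crawlers.
WORDS = ["".join(random.choices(string.ascii_lowercase, k=random.randint(3, 10)))
         for _ in range(1000)]

def random_delay():
    # Make every trap response randomly slow, 0.5 to 1.5 seconds.
    time.sleep(random.uniform(0.5, 1.5))

@app.route("/trap/text")
def trap_text():
    # Trap 1: a wall of random text.
    random_delay()
    return " ".join(random.choices(WORDS, k=500))

@app.route("/trap/loop")
def trap_loop():
    # Trap 2: a redirect loop that sends the client straight back here.
    random_delay()
    return redirect("/trap/loop")

@app.route("/trap/links")
def trap_links():
    # Trap 3: a page of random links that all lead back to this route.
    random_delay()
    links = "".join(
        f'<a href="/trap/links?seed={random.randint(0, 10**9)}">{w}</a> '
        for w in random.choices(WORDS, k=50)
    )
    return f"<html><body>{links}</body></html>"

Disallowing /trap/ for User-agent: * in robots.txt then keeps well-behaved search crawlers out, while scrapers that ignore robots.txt fall in.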

[–] [email protected] 4 points 1 year ago

If you're interested in traps, you can add a honeypot to your robots.txt. It comes with some risk of blocking legitimate users, though.
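
A minimal sketch of that honeypot idea (the path name is hypothetical): disallow a path that is linked from nowhere on the site, then treat any client that requests it anyway as a bot ignoring robots.txt and ban its IP.

User-agent: *
Disallow: /honeypot-do-not-enter/

The risk is that a curious human who reads your robots.txt and visits the path by hand gets banned too.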

[–] [email protected] 10 points 1 year ago (1 children)
[–] [email protected] 4 points 1 year ago (1 children)

Nice idea, but a lot of random text that the user doesn't see would slow down the website.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

I don't think that's really a big problem. Just make every keyword useless, and somehow automate the process.

Damn, there should be a tool for this. There is at least one Unicode character that doesn't even display as a blank in a terminal.

Like... modern web crap doesn't even load without JavaScript or animations, so a bit more HTML won't hurt.

[–] [email protected] 9 points 1 year ago (1 children)

OP still wants search indexing, in which case it's a big no-no - it can be perceived as spam by search engines, and links your pages to tons of unrelated keywords.

[–] [email protected] 8 points 1 year ago

I can block search engine crawlers from specific paths so that should be solved.

[–] [email protected] 3 points 1 year ago

As long as you do not rely on SEO to get traffic. This has a good chance of affecting how Google sees your site as well.

[–] [email protected] 27 points 1 year ago
[–] [email protected] 13 points 1 year ago

I’m curious about how to verify that these bots respect the rules. I don’t doubt that they do, since it might be a PR nightmare for these big tech companies if they don’t, but I don’t know how to verify them. Asking because I’m also doing this for my website.

By the way, LLMs are usually also trained on Common Crawl data (not sure to what extent), but I'm not sure whether you want to block Common Crawl.

Another thing to consider is whether your website is indexed and crawled by the Web Archive, and whether the Web Archive has some policy on AI bot crawlers and scrapers.
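
For what it's worth, Common Crawl's crawler identifies itself as CCBot and is documented to respect robots.txt, so blocking it is the same pattern:

User-agent: CCBot
Disallow: /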

[–] [email protected] 7 points 1 year ago

Block everyone but the crawlers you like. Blacklists are less reliable than whitelists.
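
In robots.txt, a whitelist looks something like this (Googlebot and Bingbot are just examples of crawlers you might keep; an empty Disallow means "allow everything"):

User-agent: Googlebot
Disallow:

User-agent: Bingbot
Disallow:

User-agent: *
Disallow: /

Each crawler follows only the most specific group that matches it, so anything unlisted falls through to the catch-all and is blocked everywhere.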

[–] [email protected] 6 points 1 year ago (2 children)

I have a personal site.

It isn't great. Don't even have a domain name. My robots.txt is here

https://bbbhltz.codeberg.page/robots.txt

Why bother? I just don't agree with AI.

[–] [email protected] 4 points 1 year ago (2 children)

Specifically what about AI don't you agree with?

[–] [email protected] 4 points 1 year ago (1 children)

Mostly the hype and because artists and creators are being hurt by its existence.

I feel as though using AI is a cop-out. If I want to do something good, I also want to be proud of it. So I would rather not take that away from myself by doing it with AI. However, progress marches on, and I am neither an expert nor an authority on the subject. Asking someone like myself that question is nearly a trap. If I tell you that Generative AI is a bubble, like cryptocurrency and the Metaverse, that is just my gut feeling.

[–] [email protected] 2 points 1 year ago (1 children)

How about a bubble like the internet? 90% of dotcoms failed in the 90s but the internet is alive and strong today. AI is just a tool, and from my experience an extremely useful one.

[–] [email protected] 1 points 1 year ago (1 children)

I get that argument. Perhaps the fact that I'm a professor influences my thinking. And, since we are in a privacy community, something like ChatGPT and privacy don't mix.

Meredith Whittaker (Signal) says^1:

The Venn diagram of privacy concerns and AI concerns is a circle

(I do keep on eye on their progress because it is interesting https://benchmarks.llmonitor.com/)

[–] [email protected] 2 points 1 year ago

Agreed that privacy can be a concern. Ideally it will be possible to run LLMs locally in the near future, but we'll see.

[–] [email protected] 0 points 1 year ago

I was about to ask the same question. It's one thing to think of the potential impacts of AI technology, but to be "against AI" in the most general sense is, to me, a weird concept, especially considering AI is so many things.

[–] [email protected] 1 points 1 year ago (1 children)

Nice, that's what I am looking for!

[–] [email protected] 2 points 1 year ago (1 children)

I don't remember what all of those are for, so you might want to look them up.

[–] [email protected] 2 points 1 year ago

I did, most of them are used for AI or business search engines. I copied everything except Yandex.

[–] [email protected] 5 points 1 year ago

Maybe there are some IP address ranges to try blocking?

It's difficult because, for example, blocking the addresses OpenAI's crawlers use may inadvertently block addresses from Azure used by Bing or whatever.
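
As a sketch, an application-level version of that idea, assuming Flask (the CIDR ranges below are RFC 5737 documentation placeholders, not real crawler addresses; OpenAI, for instance, publishes the ranges GPTBot actually crawls from):

import ipaddress

from flask import Flask, abort, request

app = Flask(__name__)

# Placeholder ranges only (RFC 5737 documentation space); substitute
# the ranges the crawler's operator actually publishes.
BLOCKED_RANGES = [ipaddress.ip_network(c)
                  for c in ("192.0.2.0/24", "198.51.100.0/24")]

@app.before_request
def block_crawler_ips():
    # Reject any request originating from a blocked range.
    addr = ipaddress.ip_address(request.remote_addr or "0.0.0.0")
    if any(addr in net for net in BLOCKED_RANGES):
        abort(403)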

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago)
User-agent: *
Disallow: /

tbh

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

Easy. Add a section to your robots.txt file.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

I don't really understand the reasoning behind doing any of this. They didn't give a fuck about stealing clearly copyrighted content in the first place, so why would they care about you (not OP specifically) begging them not to steal your stuff? (As long as there are no laws about this, which AFAIK there aren't.)

[–] [email protected] 1 points 1 year ago (1 children)

So that leaves two options then: leave the front door wide open and don't bother with any locks, or shut down the website. I'm for at least closing the door with the right robots.txt.

[–] [email protected] 1 points 1 year ago

The analogy should be either having the door open, or having the door open but putting a note on it asking people not to steal anything. I'm not saying you shouldn't do it; I just don't think it's going to do anything, so I'm not going to bother.

[–] [email protected] 1 points 1 year ago

Perhaps the user (or in this case the bot) should not go directly to your website, but first through some kind of captcha verification or something like that, like those pages (SteamDB, for example) that do not open directly but first show a blank page that verifies your network and browser with a captcha.