this post was submitted on 24 May 2025
1093 points (98.9% liked)

Science Memes


Welcome to c/science_memes @ Mander.xyz!

top 50 comments
[–] [email protected] 3 points 26 minutes ago

How can I make something like this?

[–] [email protected] 3 points 18 minutes ago (1 children)

Btw, how about limiting clicks per second/minute to counter distributed scraping? A user who clicks more than 3 links per second is not a person. Neither is one who does 50 in a minute. And if they're blocked and switch to the next IP, the bandwidth they can occupy is still limited.
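The thresholds the comment proposes can be sketched as a sliding-window counter per client IP. `MAX_PER_SECOND` and `MAX_PER_MINUTE` use the comment's illustrative numbers; `ClickLimiter` and its API are hypothetical, not from any real server.

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds from the comment: more than 3 clicks in a
# second, or more than 50 in a minute, flags the client as a bot.
MAX_PER_SECOND = 3
MAX_PER_MINUTE = 50

class ClickLimiter:
    """Sliding-window request counter, keyed by client IP."""

    def __init__(self):
        self.hits = defaultdict(deque)  # ip -> recent timestamps

    def allow(self, ip, now=None):
        """Record a click and return False once either limit is exceeded."""
        now = time.monotonic() if now is None else now
        window = self.hits[ip]
        window.append(now)
        # Drop timestamps older than the one-minute window.
        while window and now - window[0] > 60:
            window.popleft()
        last_second = sum(1 for t in window if now - t <= 1)
        return last_second <= MAX_PER_SECOND and len(window) <= MAX_PER_MINUTE

limiter = ClickLimiter()
print(limiter.allow("10.0.0.1", now=0.0))  # True: first click
```

As the reply below notes, this only works against a single source; a botnet making one request per IP never trips either window.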

[–] [email protected] 2 points 2 minutes ago

They make one request per IP. Rate limiting per IP does nothing.

[–] [email protected] 4 points 1 hour ago (1 children)

I'm imagining a bleak future where, in order to access data from a website, you have to pass a three-tiered system of tests that makes "click here to prove you aren't a robot" and "select all of the images that have a traffic light" seem like child's play.

[–] [email protected] 2 points 24 minutes ago

All you need to protect data from AI is to use a non-HTTP protocol, at least for now.

[–] [email protected] 36 points 5 hours ago (3 children)

I suppose this will become an arms race, just like with ad-blockers and ad-blocker detection/circumvention measures.
There will be solutions for scraper-blockers/traps. Then those become more sophisticated. Then the scrapers become better again and so on.

I don't really see an end to this madness. Such a huge waste of resources.

[–] [email protected] 3 points 41 minutes ago

The rise of LLM companies scraping the internet is also, I've noticed, the moment YouTube got harsher against ad blockers and third-party viewers.

Piped and Invidious instances that I used to use no longer work, and so did many other instances. NewPipe has been breaking more frequently. youtube-dl and yt-dlp sometimes can't fetch higher-resolution video, and sometimes the main YouTube site is broken on Firefox with uBlock Origin.

Not just YouTube: z-library, and especially Sci-Hub and Libgen, have also been harder to use sometimes.

[–] [email protected] 8 points 4 hours ago

there is an end: you legislate it out of existence. Unfortunately, US politicians are instead trying to outlaw any regulation of AI. I'm sure it's not about the money.

[–] [email protected] 2 points 4 hours ago

Madness is right. If only we didn't have to create these things to generate dollars.

[–] [email protected] 39 points 8 hours ago (2 children)

This is surely trivial to detect. If the number of pages on the site is greater than some insanely high number then just drop all data from that site from the training data.

It's not like I can afford to compete with OpenAI on bandwidth, and they're burning through money with no cares already.
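The filter this comment suggests can be sketched as a crawl-time sanity check: drop any domain whose page count is implausibly high. The threshold and `filter_domains` helper are hypothetical placeholders, not anyone's real pipeline.

```python
# Hypothetical crawl-time filter: discard domains whose page count
# exceeds a sanity threshold (the value is a placeholder).
MAX_PAGES_PER_DOMAIN = 1_000_000

def filter_domains(page_counts):
    """Keep only domains under the per-domain page threshold."""
    return {d: n for d, n in page_counts.items() if n <= MAX_PAGES_PER_DOMAIN}

counts = {"example.com": 4_200, "trap.example": 50_000_000}
print(filter_domains(counts))  # the tarpit domain is dropped
```

The weakness, as the reply points out, is choosing the cutoff: legitimate large sites and modest-sized tarpits both blur the line.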

[–] [email protected] 20 points 7 hours ago (1 children)

Yeah sure, but where do you stop gathering regularly constructed data, when your goal is to grab as much as possible?

Markov chains are an amazingly simple way to generate data like this, and with a little bit of stacked logic it's going to be indistinguishable from real large data sets.
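The Markov-chain babble the comment describes really is simple: count which word follows which, then random-walk the table. A minimal word-level, order-1 sketch, with a placeholder corpus standing in for real scraped text:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def babble(chain, start, length=12, seed=None):
    """Random-walk the chain to emit plausible-looking word salad."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        successors = chain.get(out[-1])
        if not successors:  # dead end: word only seen at the end
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# Placeholder corpus; a tarpit would train on text it serves.
corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(babble(chain, "the", seed=42))
```

Every emitted bigram occurs in real text, which is exactly why simple statistical filters struggle to separate this from genuine prose.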

[–] [email protected] 11 points 4 hours ago (1 children)

Imagine the staff meeting:

You: we didn't gather any data because it was poisoned

Corposhill: we collected 120TB only from harry-potter-fantasy-club.il !!

Boss: hmm who am I going to keep...

[–] [email protected] 5 points 4 hours ago* (last edited 4 hours ago)

The boss fires both, "replaces" them with AI, and tries to sell the corposhill's dataset to companies that make AIs that write generic fantasy novels.

[–] [email protected] 0 points 3 hours ago* (last edited 3 hours ago)

You can compress multiple TB of nothing with the occasional meme down to a few MB.
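The claim is easy to check at small scale: highly repetitive filler shrinks enormously under a general-purpose compressor. The sizes here are illustrative, scaled down from "multiple TB" so the sketch runs instantly.

```python
import zlib

# Repetitive filler standing in for "TB of nothing with the
# occasional meme", at a scale that runs in milliseconds.
filler = b"lorem ipsum dolor sit amet " * 100_000  # ~2.7 MB

compressed = zlib.compress(filler, level=9)
ratio = len(filler) / len(compressed)
print(f"{len(filler)} bytes -> {len(compressed)} bytes ({ratio:.0f}x)")
```

Markov babble compresses far less than literal repetition, but still much better than natural text, which is one tell a scraper could use.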

[–] [email protected] 62 points 9 hours ago (1 children)

I'm so happy to see that AI poison is a thing.

[–] [email protected] 6 points 3 hours ago

Don't be too happy. For every such attempt there are countless highly technical papers on how to filter out the poisoning, and they are very effective. As the other commenter said, this is an arms race.

[–] [email protected] 111 points 11 hours ago (10 children)

It's so sad we're burning coal and oil to generate heat and electricity for dumb shit like this.

[–] [email protected] 2 points 4 hours ago (1 children)

I'm sad governments don't realize this and regulate it.

[–] [email protected] 1 points 21 minutes ago

Of all the things governments should regulate, this is probably the least important and least effective one.

[–] [email protected] 163 points 13 hours ago (8 children)

Deploying Nepenthes and also Anubis (both described as "the nuclear option") is not hate. It's self-defense against pure selfish evil; projects are being sucked dry, and some, like ScummVM, could only freakin' survive thanks to these tools.

Those AI companies and data scraper/broker companies shall perish, and whoever wrote this headline at Ars Technica shall step on Lego each morning for the next 6 months.

[–] [email protected] 29 points 10 hours ago (1 children)

"Markov Babble" would make a great band name

[–] [email protected] 12 points 7 hours ago

Their best album was Infinite Maze.
