submitted 4 days ago by [email protected] to c/[email protected]

Just listened to Naomi Brockwell talk about how AI is basically the perfect surveillance tool now.

Her take is very interesting: what if we could actually use AI against that?

Like instead of trying to stay hidden (which honestly feels impossible these days), what if AI could generate tons of fake, realistic data about us? Flood the system with so much artificial nonsense that our real profiles basically disappear in the noise.

Imagine thousands of AI versions of me browsing random sites, faking interests, triggering ads, making fake patterns. Wouldn’t that mess with the profiling systems?

How could this be achieved?
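Very rough sketch of what I'm imagining, just to show the shape of it. Everything here is made up (the personas, the example.com site lists); a real version would actually fetch each URL after sleeping for the delay:

```python
import random

# Hypothetical decoy personas. Site lists and labels are pure illustration.
PERSONAS = {
    "gardener": ["https://example.com/seeds", "https://example.com/compost"],
    "gamer": ["https://example.com/gpus", "https://example.com/speedruns"],
    "baker": ["https://example.com/sourdough", "https://example.com/ovens"],
}

def decoy_schedule(n_visits, seed=None):
    """Return a list of (delay_seconds, persona, url) tuples.

    Delays are jittered (heavy-tailed) so the traffic doesn't tick like
    a metronome, which would make it trivially filterable.
    """
    rng = random.Random(seed)
    plan = []
    for _ in range(n_visits):
        persona = rng.choice(list(PERSONAS))
        url = rng.choice(PERSONAS[persona])
        delay = rng.lognormvariate(3.0, 1.0)  # human-ish gaps, mostly short, some long
        plan.append((delay, persona, url))
    return plan

# A real runner would fetch each URL (e.g. with requests) after the delay;
# here we only print the plan.
for delay, persona, url in decoy_schedule(5, seed=42):
    print(f"wait {delay:6.1f}s  as {persona:8s}  visit {url}")
```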

top 50 comments
[-] [email protected] 111 points 4 days ago* (last edited 4 days ago)

I feel like I woke up in the stupidest timeline: climate change is about to kill us, we stupidly decide to 10x our power needs by shoving LLMs down everyone's throats, and the only way to stay private is to 10x our personal LLM usage by generating tons of noise about ourselves. So now we're 100x-ing everyone's power usage and we're going to die even sooner.

I think your idea is interesting – I was also thinking the same thing a while back – but how tf did we get here.

[-] [email protected] 36 points 4 days ago

but how tf did we get here

With capitalistic gusto! 🤮

[-] [email protected] 5 points 4 days ago* (last edited 4 days ago)

Yeah agreed. What's going on in my state of Pennsylvania is they're reopening the Three Mile Island nuclear plant out near Harrisburg for the sole reason of powering Microsoft's AI data centers. This will be Unit 1 which was closed in 2019. Unit 2 was the one that was permanently closed after the meltdown in 1979.

I'm all for nuclear power. I think it's our best option for an alternative energy source. But the only reason they're opening the plant again is that our grid can't keep up with AI. I believe the data centers are the only thing the nuke plant will power.

I've also seen the scale of things in my work in terms of power demands. I'm an industrial electrical technician, and part of our business is the control panels for cooling the server racks for Amazon data centers. They just keep buying more and more of them, projected until at least 2035 right now. All these big tech companies are totally revamping everything for AI. Before, a typical rack section might have drawn, say, 1,000 watts; now it's more like 10,000 watts. Again, just for AI.

[-] [email protected] 1 points 1 day ago

I wasn't aware of the magnitude. That they are reopening an old nuclear plant for the sole purpose of powering AI data centers...

[-] [email protected] 2 points 4 days ago

Totally agree nuclear is a great tool, but it's being used for the wrong purpose here. Use those power plants to solve our existing energy crisis before you create an even bigger one.

[-] [email protected] 4 points 4 days ago

There are AIs that can detect the use of AI. This is a losing strategy as we burn resources playing cat and mouse.

As with all things, greed is at the root of this problem. Until privacy has any legislative teeth, it will continue to be a notion for the few, and an elusive one at that.

[-] [email protected] 19 points 4 days ago

Obfuscation is what you're thinking of, and it works with things like AdNauseam (a Firefox add-on that clicks all ads in the background to obscure preference data). It's a nice way to smear the data, and probably better to do sooner (while the data collection is in its infancy) rather than later (when the companies may be able to filter out obfuscation attempts).

I like it. I am really not a fan of being profiled, collected, and categorized. I agree with others, I hate this timeline. It's so uncanny.

[-] [email protected] 2 points 4 days ago

I still don't really understand AdNauseam. What is the difference in privacy compared to clicking on none of the ads?

[-] [email protected] 2 points 4 days ago

Whatever data profile they already have on you can be obscured to make it useless, versus them probably trickling in data.

Think of it like um...

Having a picture of you with a moderate amount of notes that are accurate, vs having a picture of you with so much irrelevant/inaccurate data that you can't be certain of anything.
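To make that picture-with-noise idea concrete, here's a toy example (all numbers invented, just to illustrate how uniform decoy clicks dilute the dominant signal in a click profile):

```python
from collections import Counter
import random

def top_interest_share(clicks):
    """Fraction of all clicks belonging to the most-clicked category."""
    counts = Counter(clicks)
    return counts.most_common(1)[0][1] / len(clicks)

rng = random.Random(0)
categories = ["cars", "cooking", "hiking", "crypto", "knitting"]

# A "real" profile: 80 genuine clicks, heavily skewed to one interest.
real = ["hiking"] * 60 + [rng.choice(categories) for _ in range(20)]

# The same profile drowned in 400 uniformly random decoy clicks.
noisy = real + [rng.choice(categories) for _ in range(400)]

print(f"real profile:  top interest holds {top_interest_share(real):.0%} of clicks")
print(f"noisy profile: top interest holds {top_interest_share(noisy):.0%} of clicks")
```

With enough uniform noise, the profiler can no longer be confident which interest is actually yours.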

[-] [email protected] 5 points 4 days ago* (last edited 4 days ago)

But the picture of me they have is: doesn't click ads like all the other adblocker people (which is accurate)

Why would I want to change it to: clicks ALL the ads, like all the other AdNauseam people (which is also accurate)?

[-] [email protected] 2 points 4 days ago

They build this picture from many other sources besides ad clicks, so the point is to obscure that. Problem is, if you're only obscuring your ad click behavior, it should be relatively easy to filter out of the model.

[-] [email protected] 15 points 4 days ago

It's a good idea in theory, but it's a challenging concept to have to explain to immigration officials at the airport.

[-] [email protected] 4 points 4 days ago* (last edited 4 days ago)

"it says here you clicked 'sign me up for ISIS' 10000 times?"

"Haha no officer, you see it was my social chaff AI that clicked it"

[-] [email protected] 1 points 2 days ago

Reminded me of an article about a hacker who tried to mess with road cameras by getting the licence plate NULL, but for some reason ended up having all the tickets sent to his home.

In the end he got tired and sold the car.

[-] [email protected] 9 points 4 days ago* (last edited 3 days ago)

This is a dangerous proposition.

When the dictatorship comes after you, they're not concerned about the whole of every article that was written about you. All they care about are the things they see as incriminating.

You could literally take a spell-check dictionary, pull three words out of the list at random, and feed it into Ollama, asking for a story with your name that includes the three words as major plot points.

Even on a relatively old video card, you could probably crap out three stories a minute. Have it write them in HTML and publish the sitemap to the major search engines on a regular basis.

EDIT: OK this was too fun not to do it real quick!

~ cat generate.py

import random
import requests
import json
from datetime import datetime

ollama_url = "http://127.0.0.1:11434/api/generate"
wordlist_file = "words.txt"

with open(wordlist_file, 'r') as file:
    words = [line.strip() for line in file if line.strip()]

selected_words = random.sample(words, 3)
theme = ", ".join(selected_words)

prompt = f"Write a short, imaginative story about a person named Rumba using these three theme words: {theme}. The first word is their super power, the second word is their kryptonite, the third word is the name of their adversary. Return only the story as HTML content ready to be saved and viewed in a browser."

# stream=True so iter_lines() yields each JSON chunk as the model generates it
response = requests.post(
    ollama_url,
    headers={"Content-Type": "application/json"},
    data=json.dumps({"model": "llama3.2", "prompt": prompt}),
    stream=True,
)

story_html = ""
for line in response.iter_lines(decode_unicode=True):
    if line.strip():
        try:
            chunk = json.loads(line)
            story_html += chunk.get("response", "")
        except json.JSONDecodeError as e:
            print(f"JSON decode error: {e}")

timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
filename = f"story_{timestamp}.html"

with open(filename, "w", encoding="utf-8") as file:
    file.write(story_html)

print(f"Story saved as {filename}")



~ cat story_20250630_130846.html

<!DOCTYPE html>
<html>
<head>
<title>Rumba's Urban Adventure</title>
<meta charset="UTF-8">
<style>
body {font-family: Arial, sans-serif;}
</style>
</head>
<body>

<h1>Rumba's Urban Adventure</h1>

<p>Rumba was a master of <b>slangs</b>, able to effortlessly weave in and out of conversations with ease. Her superpower allowed her to manipulate language itself, bending words to her will. With a flick of her wrist, she could turn a phrase into a spell.</p>

<p>But Rumba's greatest weakness was her love of <b>bungos</b>. The more she indulged in these sweet treats, the more her powers wavered. She would often find herself lost in thought, her mind clouded by the sugary rush of bungos. Her enemies knew this vulnerability all too well.</p>

<p>Enter <b>Carbarn</b>, a villainous mastermind with a personal vendetta against Rumba. Carbarn had spent years studying the art of linguistic manipulation, and he was determined to exploit Rumba's weakness for his own gain. With a wave of his hand, he summoned a cloud of bungos, sending Rumba stumbling.</p>

<p>But Rumba refused to give up. She focused her mind, channeling the power of slangs into a counterattack. The air was filled with words, swirling and eddying as she battled Carbarn's minions. In the end, it was just Rumba and Carbarn face-to-face.</p>

<p>The two enemies clashed in a spectacular display of linguistic fury. Words flew back and forth, each one landing with precision and deadliness. But Rumba had one final trick up her sleeve - a bungo-free zone.</p>

<p>With a burst of creative energy, Rumba created a bubble of pure slangs around herself, shielding her from Carbarn's attacks. The villain let out a defeated sigh as his plan was foiled once again. And Rumba walked away, victorious, with a bag of bungos stashed safely in her pocket.</p>

</body>
</html>

Interesting that it chose female rather than male or gender neutral. Not that I'm complaining, but I expected it to be biased :)

[-] [email protected] 3 points 3 days ago

Yup, you'd be surprised what you can accomplish with 10GB of VRAM and a 12B model. Hell, my profile pic (which isn't very good, tbf) was made on that 10GB VRAM card using locally hosted Stable Diffusion. I hate big corp AI, but I absolutely love open-market and open-source local models. Gonna be a shame when they start to police them.

To OP: The problem is that they're looking for keywords. With the amount of people under surveillance these days, they don't give a rat's ass if you went to your favorite coffee roasting site, they want to find the stuff they don't want you to do.

Piracy? You're on a list. Any cleaning chemical that can be related to the construction of explosives? You're on a list. These lists will then tack on more keywords that pertain to that list. For example, the explosives list will then search for matching components bought within a close span of time that would indicate you're making them. Even searching for ways to enforce your privacy just makes them more interested.

So then you put out a bunch of fake data. This data happens to say you viewed a page pertaining to that matching component. Whelp, that list just got hotter, and now there are even more eyes on you, being slightly more attentive this time. It's a bad idea. The only way you're getting out of surveillance, at least online, is to never go online.

In reality, they probably won't even do anything about the above. What they really want is money. Money for your info; money to sell more things to you. They want the average home to be filled with advertisements tailored from your information. Because those adverts make those companies money, which they then use to buy more information to monetize your existence. It's the largest pyramid scheme known to humanity, and we're the unpaid grunts.

The moment the world became connected through telephones, cable TV, and then the internet, this scheme was already in motion. Let's be honest, smartphones were the mother lode. A TV, phone, and computer you always keep on you? They were salivating that day.

[-] [email protected] 8 points 4 days ago* (last edited 4 days ago)

This strategy of generating fake data just doesn't work well. It requires a ton of resources to generate fake data that can't be easily filtered, which ends up making the strategy nonviable in most situations. Look at Mullvad's DAITA and how it constantly has to be improved to fight this, and that's just for basic protection.

There is a bit of a cognitive dissonance that goes on, where people seem to understand that you are tracked constantly online and offline through all sorts of complex means but still think relatively mundane solutions could break that system.
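For anyone curious, the basic trick behind padding defenses like DAITA can be sketched in a few lines. This is only the size-padding half of the idea (DAITA also injects cover traffic), and the bucket sizes here are arbitrary illustration values, not Mullvad's actual parameters:

```python
# If every packet is padded up to a fixed bucket size, an observer sees far
# fewer distinct sizes, which removes one easy traffic-fingerprinting signal.
BUCKETS = [256, 512, 1024, 1500]

def padded_size(n):
    """Smallest bucket that fits a payload of n bytes."""
    for b in BUCKETS:
        if n <= b:
            return b
    return BUCKETS[-1]  # oversized payloads would be fragmented in practice

sizes = [93, 310, 505, 890, 1380, 40, 601]
padded = [padded_size(s) for s in sizes]
print("distinct sizes before padding:", len(set(sizes)))
print("distinct sizes after padding: ", len(set(padded)))
```

The cost is the point the comment makes: every padded byte is wasted bandwidth, and the defense has to keep evolving as classifiers learn to use timing and volume instead of size.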

[-] [email protected] 11 points 4 days ago* (last edited 4 days ago)

I don't know if there's a clean way to do this right now, but I'd love to see a software project dedicated to doing this. Once a data set is poisoned it becomes very difficult to un-poison. The companies would probably implement some semi-effective but heavy-handed means of defending against it if it actually affected them, but I'm all for making them pay for that arms race.

[-] [email protected] 9 points 4 days ago

I have been a longtime advocate of data poisoning, especially in the case of surveillance pricing. Unfortunately there don't seem to be many tools for this outside of AdNauseam.

[-] [email protected] 8 points 4 days ago

In a different direction: now is a good time to start looking at how local AI can liberate us from big tech.

[-] [email protected] 2 points 4 days ago

Local AI requires investment in local compute power, which sadly is not affordable for private users. We would need some entity we can trust to host it. I'd be happy to pay for that.

[-] [email protected] 5 points 4 days ago

This isn’t a very smart idea.

People trying to obfuscate their actions would suddenly have massive datasets of actions associated with them, and it would be trivial to distinguish between the browsing behaviors of a person and a bot.
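A toy version of why the distinction is trivial: even looking only at the gaps between events, naive decoy traffic stands out. The threshold and both distributions here are invented for the sketch:

```python
import random
import statistics

rng = random.Random(7)

# Human browsing: bursty, heavy-tailed gaps between page loads.
human_gaps = [rng.lognormvariate(2.0, 1.2) for _ in range(200)]
# Naive bot: fires roughly every 30 seconds, like a metronome.
bot_gaps = [30 + rng.uniform(-0.5, 0.5) for _ in range(200)]

def looks_scripted(gaps):
    """Flag traffic whose inter-event gaps are suspiciously regular."""
    cv = statistics.stdev(gaps) / statistics.mean(gaps)  # coefficient of variation
    return cv < 0.2

print("human traffic flagged as scripted:", looks_scripted(human_gaps))
print("bot traffic flagged as scripted:  ", looks_scripted(bot_gaps))
```

Real detectors use far richer signals (mouse movement, fingerprints, session structure), so the arms race is much worse than this single statistic suggests.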

Someone else said this is like chaff or flare anti-missile defenses, and that's a good analogy. Defenses like that are deployed when the target recognizes a danger and sees an opportunity to confuse it temporarily. They're used in conjunction with maneuvering and other flight techniques to maximize the chance of avoiding certain death, not run constantly once the operator comes in contact with an opponent.

On a more philosophical tip, the master's tools cannot be turned against him.

[-] [email protected] 2 points 4 days ago* (last edited 4 days ago)

I still think I can turn it against it

[-] [email protected] 5 points 4 days ago

spray-bottle

No, you can’t.

You are not the hero, effortlessly weaving down the highway between minivans on your 1300cc motorcycle, katana strapped across your back, using dual handlebar mounted twiddler boards to hack the multiverse.

If AI-driven agentic systems were used to obfuscate a person's interactions online, then the fact that they were using those systems would become incredibly obvious and provide a trove of information that could easily be used to locate and document what that person was doing.

But let’s assume what the op did worked, and no one could tell the difference.

That would be worse! Suddenly there are hundreds of thousands of data points that could be linked to you, and all that's needed for a warrant are two or three that could be interpreted as probable cause of a crime!

You thought you were helping yourself out by turning the fuzzer on before reading Trot pamphlets hosted on marxists.org, but now they have an expressed interest in drain cleaner and glitter bombs, and best case scenario you gotta adopt a new pit mix from the humane society.

[-] [email protected] 5 points 4 days ago

So, she is talking about an AI war? Where those who don't want us to be private control the weapons? Anyone else see a problem with that logic?

Thousands of "you" browsing different sites will use an obscene amount of power and bandwidth. Imagine a million people doing that, never mind a billion... That's just stupid in all kinds of ways.

[-] [email protected] 5 points 4 days ago

First, Naomi and her team are doing fantastic work in security for the masses, easily top 5 worldwide!

AI is capable, but we are still failing at programming it properly. Gosh, even well-funded companies are still doing a poor job at it... (just look at the misplaced and ineffective ads we still get).

What I want, and it is easy to do TODAY, is AI checking our FOSS... we use so much of it, and only a tiny, tiny minority of it gets any scrutiny. We need AI to go through FOSS code looking for maliciousness now.

[-] [email protected] 4 points 4 days ago

It’s an interesting concept, but I’m not sure the payoff justifies the effort.

Even with AI-generated noise, you’re still being tracked through logins, device fingerprints, and other signals. And in the process, you would probably end up degrading your own experience; getting irrelevant ads, broken recommendations, or tripping security systems.

There’s also the environmental cost to consider. If enough people ran decoy traffic 24/7, the energy use could become significant. All for a strategy that platforms would likely adapt to pretty quickly.

I get the appeal, but I wonder if the practical downsides outweigh the potential privacy gains.

[-] [email protected] 6 points 4 days ago* (last edited 4 days ago)

getting irrelevant ads

you guys are getting ads?

[-] [email protected] 2 points 4 days ago

My entire family has been ad-free for years, with the exception of podcasts. I'm tempted to block those too (is there a way now?), but they're still not too intrusive... and it's a way for me to keep connected to the ad world anyway. The moment they get abusive there too, I'll find a way to block them.

[-] [email protected] 2 points 4 days ago

I’m not, but OP would if they started opening up their IP and fingerprints to anyone who wants them, in order to inundate those parties with garbage data. Admittedly, I might be missing some clever part of their plan.

[-] [email protected] 4 points 4 days ago

Getting more targeted ads is not really in your interest. That is an idea promoted by the ad people.

[-] [email protected] 2 points 4 days ago

No clever plan. I just picked up this idea and would like to see different opinions from people maybe far more advanced in the field.

[-] [email protected] 2 points 4 days ago

Okay but irrelevant ads is the dream. I'd prefer not to get recommendations at all either. I'll hear from word of mouth what's worthwhile to watch, or I'll look for it myself. Recommendations consistently muddy things up, it makes all modern social media useless, I have no idea how people can put up with it.

[-] [email protected] 1 points 4 days ago

I agree, which is why this approach to me seems ultimately counterproductive on an individual level.

[-] [email protected] 2 points 4 days ago

There are plenty of tools already that can create many profiles of you, each with completely different personalities and posts.

[-] [email protected] 2 points 4 days ago

Do you have a link to the talk? I looked through her youtube and didn't see anything that quite matched this topic.

[-] [email protected] 1 points 1 day ago* (last edited 1 day ago)

Hi, it was a podcast from David Bombal with Naomi. I think it was called "Top Privacy Tools in 2025". It's at the very end of the podcast.

[-] [email protected] 1 points 1 day ago
load more comments
view more: next ›
this post was submitted on 29 Jun 2025
95 points (89.9% liked)

Privacy

39506 readers
705 users here now

A place to discuss privacy and freedom in the digital world.

Privacy has become a very important issue in modern society, with companies and governments constantly abusing their power, more and more people are waking up to the importance of digital privacy.

In this community everyone is welcome to post links and discuss topics related to privacy.

much thanks to @gary_host_laptop for the logo design :)

founded 5 years ago