Is there any way to forcibly prevent a person from using a service like this, other than confiscating their devices?
You could try something like a network filter that's out of the user's control (e.g. on the router, or something like a Raspberry Pi running Pi-hole), but you'd probably have to curate the blocklist manually unless somebody else has published an anti-LLM list somewhere. And of course, it will only be as effective as the user's inability to route around it; switching DNS servers or tunnelling through a VPN defeats it.
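For example, Pi-hole can subscribe to any plain hosts-format list as an adlist, so a hand-curated one might start like this (the domains below are just illustrative examples, not a vetted or complete list):

```
# Hosts-format blocklist sketch for Pi-hole (also works in /etc/hosts on a single machine).
# Illustrative chatbot endpoints only; a real list would need ongoing curation.
0.0.0.0 chatgpt.com
0.0.0.0 chat.openai.com
0.0.0.0 api.openai.com
0.0.0.0 claude.ai
0.0.0.0 gemini.google.com
0.0.0.0 copilot.microsoft.com
```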
LLMs can also be run locally, so blocking every known network service that provides access still won't prevent a dedicated user from talking to an AI.
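To give a sense of how low that bar is, here's a minimal sketch using the llama-cpp-python bindings; the model path is a placeholder for whatever GGUF file one has downloaded:

```python
# Minimal offline chat sketch with llama-cpp-python (pip install llama-cpp-python).
# The model path is a placeholder; any downloaded GGUF model file would do.
from llama_cpp import Llama

llm = Llama(model_path="./models/some-model.gguf")  # loads and runs entirely on local hardware

# One completion call; no network request is made at any point.
output = llm("Q: Is everything an AI tells me true? A:", max_tokens=64, stop=["Q:"])
print(output["choices"][0]["text"])
```

Once the model file is on disk, no blocklist sees any of this traffic, because there isn't any.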
If one's at the point of running local LLMs, I would assume one is smart enough to probe their capabilities (or lack thereof) pretty quickly.
Took me less than a week to probe various models myself, concluding with "anybody who considers AIs to be oracles of objective truth has no contact with reality".
Had this exact thought. But number must go up. Hell, for the suits, addiction and dependence on AI just guarantee the ability to charge more.
If they're a threat to themselves or others, they can be put on a short involuntary hold at a mental health facility (typically 72 hours in the US). They aren't released until they're no longer considered a threat to themselves or others, and they're usually medicated and put through some sort of therapy in the meantime.
The obvious cure for this is better education and better mental health services. Better education about A.I. will help people understand what an A.I. is, and what it is not. More mentally stable people will mean fewer mentally unstable people falling into this trap. Oversight of A.I. may be necessary for this type of problem, though I think everyone is just holding their breath, hoping it'll fix itself as it becomes smarter.
When you're released, though, you're released right back into the environment that you left (in the US, anyway). There's the ol' computer waiting for you before the meds have even reached efficacy. Square one and a half.
This sounds like a job for an AI shrink!
Currently, no. If you're asking for suggestions, maybe a blacklist like the self-exclusion registers many countries keep for gambling would be an option.
Or maybe just destroy all AI...