73
submitted 1 day ago* (last edited 1 day ago) by [email protected] to c/[email protected]

Who are these people? This is ridiculous. :)

I guess with so many humans, there is bound to be a small number of people who have no ability to think for themselves and believe everything a chatbot writes in their web browser.

People even have romantic relationships with these things.

I don't agree with the argument that ChatGPT should "push back". They have an example in the article where the guy asked for tall bridges to jump from, and ChatGPT listed them, of course.

Are we expecting the LLM to act like a psychologist, evaluating whether the user's state of mind is healthy before answering questions?

Very slippery slope if you ask me.

[-] [email protected] 13 points 1 day ago

ffs, this isn't chatgpt causing psychosis. It's schizo people being attracted like moths to chatgpt because it's very good at conversing in schizo.

[-] [email protected] 6 points 4 hours ago

CGPT literally never gives up. You can give it an impossible problem to solve, and tell it you need to solve it, and it will never, ever stop trying. This is very dangerous for people who need to be told when to stop, or need to be disengaged with. CGPT will never disengage.

[-] [email protected] 6 points 1 day ago

indeed, though I could do without using disparaging language for one of the most vulnerable populations in the medical world.............

[-] [email protected] 12 points 1 day ago

I know a guy who has all kinds of theories about sentient life in the universe, but no one to talk to about them. It's because they're pretty obvious to anyone who took a philosophy class, and too out there for people who are not interested in such discussions. I tried to be a conversation partner for him, but it always ends up with awkward silence on my part and a monologue on his side at some point.

So, he finally found a sentient being who always knows what to answer in the form of ChatGPT and now they develop his ideas together. I don't think it's bad for him overall, but the last report I got from his conversations with the superbeing was that it told him to write a book about it because he's full of innovative ideas. I hope he lacks persistence to actually write one.

[-] [email protected] 4 points 1 day ago

ChatGPT is phenomenal at coming up with ideas to test out. Good critical thinking is necessary though… I've actually been able to make a lot of headway with a project that I've been working on, because when I get stuck emotionally, I can talk to ChatGPT and it gets me through it, because it knows how I think and work best. It's scary how well it knows me… and I'm concerned about propaganda… but it's everywhere.

[-] [email protected] 2 points 1 day ago

hi, they're going to be in psychosis regardless of what LLMs do. they aren't therapists and mustn't be treated as such. that goes for you too

[-] [email protected] 17 points 1 day ago

I don't agree with the argument that ChatGPT should "push back".

Me neither, but if they are being presented as "artificial people to chat with" they must.

I'd rather LLMs stay tools, not pretend people.

Are we expecting the LLM to act like a psychologist, evaluating whether the user's state of mind is healthy before answering questions?

Some of the LLMs referred to are advertised as AI psychological help, so they must either act like psychologists (which they can't) or stop being allowed as digital therapists.

[-] [email protected] 1 points 2 hours ago

We were warned years ago!

[-] [email protected] 4 points 1 day ago

I use ChatGPT to kind of organize and sift through some of my own thoughts. It's helpful if you are working on something and need to inject a simple "what if" into the thought process. It's honestly great and has at times pointed out things I completely overlooked.

But it also has a weird tendency to just agree with everything I say, just to keep engagement up. So even after I'm done, I'm still researching and challenging things anyway, because it wants me to be its friend. It's very strange.

It’s a helpful tool but it’s not magical and honestly if it disappeared today I would be fine just going back to the before times.

[-] [email protected] 10 points 1 day ago

I mean, having it not help people commit suicide would be a good starting point for AI safety.

[-] [email protected] 3 points 1 day ago

These are the same people who Google stuff then believe every conspiracy theory website they find telling them the 5G waves mind control the pilots to release the chemtrails to top off the mind control fluoride in the water supplies.

They honestly think the AI is a sentient super intelligence instead of the Google 2 electric gargling boogaloo.

[-] [email protected] 1 points 1 day ago

being a sucker isn't the same as being in psychosis

[-] [email protected] 2 points 1 day ago* (last edited 1 day ago)

Yea totally happening as presented /s

[-] [email protected] 7 points 1 day ago

I don't agree with the argument that ChatGPT should "push back". They have an example in the article where the guy asked for tall bridges to jump from, and ChatGPT listed them, of course.

but that’s an inherently unhealthy relationship, especially for psychologically vulnerable people. if it doesn’t push back they’re not in a relationship, they’re getting themselves thrown back at them.

[-] [email protected] 13 points 1 day ago

Counterpoint: it is NOT an unhealthy relationship. A relationship has more than one person in it. It might be considered an unhealthy behavior.

I don't think the problem is solvable if we keep treating the Speak'n'spell like it's participating in this.

Corporations are putting dangerous tools in the hands of vulnerable people. By pretending the tool is a person, we're already playing their shell game.

But yes, the tool seems primed for enabling self-harm.

[-] [email protected] -2 points 1 day ago

Like with every other thing: if you don't know how it basically works, or what it even is, maybe you should not use it. And especially not voice an opinion about it. Furthermore, every tool can be used for self-harm if used incorrectly. You shouldn't put a screwdriver in your eyes. Just knowing what a plane does won't make you an able pilot, and will likely result in dire harm too.

Not directed at you personally though.

[-] [email protected] 7 points 1 day ago

Agreed, for sure.

But if Costco modified their in-store sample booth policy and had their associates start offering free samples of bleach to children - when kids start drinking bleach we wouldn't blame the children; we wouldn't blame the bleach; we'd be mad at Costco.

[-] [email protected] 1 points 4 hours ago

Yes, but also no. Unmonitored(!) children are a special case. Them being clueless and easy victims is inherent by design. You can't lay any blame on them, so they kind of make an unfair argument. Can't blame a blind person for not seeing you.

[-] [email protected] 1 points 2 minutes ago

You're saying people with underdeveloped mental and social skills are somehow never analogous in any way at all to children? There are full-grown neurotypical and clinically healthy adults who are irresponsible enough to be analogous to children, but a literal case of someone trusting an untrustworthy authority due to a lapse of critical thinking skills … bears no resemblance at all to child-like behavior, at all?

Wow. That's kind of some ivory tower stuff right there.

this post was submitted on 19 Jul 2025
73 points (81.7% liked)
