submitted 2 days ago by [email protected] to c/[email protected]

Thoughts on this? I hate LLMs, but I think this article does a disservice by depicting the victims as perfectly normal before their mental health collapsed. They must have had some kind of preexisting problems that got exacerbated by their use of ChatGPT.

The point about these chatbots being sycophantic is extremely true though. I am not sure whether they are designed to be this way--whether it's because it sells better or because LLMs are too stupid to be argumentative. I have felt its effects personally when using DeepSeek. I have noticed that often in its reasoning section it will say something like "the user is very astute", and it feels good to read that as someone who is socially isolated and never gets complimented because of that.

I guess the lesson here is to use these chatbots as tools rather than friends, because they are not capable of being the latter. I have tried a few times to have discussions about politics with DeepSeek, but it is a terrible experience because of the aforementioned predisposition to sycophancy. It always devolves into it being a yes-man.

[-] [email protected] 17 points 2 days ago

DeepSeek will literally think in its reasoning sometimes "Well, what they said is incorrect, but I need to make sure I approach this delicately so as to not upset the user" and stuff. You can mitigate it a bit by just literally telling it to be straightforward and correct things when needed, but it still doesn't go away entirely.
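For what it's worth, DeepSeek's API is OpenAI-compatible, so the "just tell it to be direct" mitigation usually means putting that instruction in a system message. A minimal sketch, assuming the standard chat-completions message format (the prompt wording is my own, and you'd need a real API key to actually send it):

```python
# Sketch: steering a chatbot toward blunt corrections via a system message.
# The prompt wording is just an example; DeepSeek's API is OpenAI-compatible,
# so this same message shape works with the openai client.

BLUNT_SYSTEM_PROMPT = (
    "Be direct. If anything I say is factually wrong, say so plainly and "
    "explain why. Do not compliment me or soften corrections."
)

def build_request(user_message: str) -> dict:
    """Build an OpenAI-compatible chat payload with the anti-sycophancy prompt."""
    return {
        "model": "deepseek-chat",
        "messages": [
            {"role": "system", "content": BLUNT_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

# To actually send it (requires an API key):
#   from openai import OpenAI
#   client = OpenAI(api_key="...", base_url="https://api.deepseek.com")
#   reply = client.chat.completions.create(
#       **build_request("The Great Wall is visible from space.")
#   )
```

Even with a prompt like this, the reasoning traces still show it hedging sometimes, which matches the comment above.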

LLMs will literally detect where you are from via the words you use. Like, they can tell if you're American, British, or Australian, or if you're someone whose second language is English, within a few sentences. Then they will tailor their answers to what they think someone of that nationality would want to hear lol.

I think it's a result of them being trained to be very nice, personable, customer-servicey things. They basically act the way your boss wants you to act if you work customer service.

[-] [email protected] 14 points 2 days ago

Something related that I forgot to mention: ChatGPT builds a profile of you as you talk to it. I don't think DeepSeek does this, but I assume stuff like Claude does it too. So it ends up knowing more about you than you realize, and in the case of these breakdowns it probably fuels the users' problematic behaviours.

[-] [email protected] 3 points 1 day ago

Oh yeah, I've had to tell ChatGPT to stop bringing up shit from other chats before. Like, if something seems related to another chat it'll start referencing it. As if I didn't just make a new chat for a reason. The worst part is the more you talk to them the more they hallucinate, so a fresh new chat is usually the best way to go about things. ChatGPT seems to be worse at hallucinating these days than DeepSeek, probably for this reason: new chats aren't actually clean slates.

this post was submitted on 01 Jul 2025
34 points (97.2% liked)

Technology


A tech news sub for communists

founded 2 years ago