391
submitted 3 days ago by Beep@lemmus.org to c/technology@lemmy.world
top 50 comments
sorted by: hot top new old
[-] Unpigged@lemmy.dbzer0.com 12 points 1 day ago

It's worrying how often I see news like that where they elaborate on human traits like acceptance and "understanding" of the model.

Could it be that our society had disconnected from emotion so far that any synthetic simulacra of a real compassion makes vulnerable people swallow it bait, line and sinker?

[-] UltraBlack@lemmy.world 7 points 1 day ago

Fucking idiots

[-] phoenixz@lemmy.ca 47 points 2 days ago

OpenAI statement read: “This is an incredibly heartbreaking situation, and we will review the filings to understand the details. We continue improving ChatGPT’s training to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.”

Buuuuulshit

Open AI needs people to be as addicted as possible as it uses the Facebook model of business only with N times the investment behind it so it needs users to use more at any cost, and these CEO's being the psychopaths that they are, they don't give a shit about things like consequences

[-] UnderpantsWeevil@lemmy.world 5 points 2 days ago

Buuuuulshit

I mean, what are the odds that the statement was composed by an AI?

[-] PhoenixDog@lemmy.world 15 points 2 days ago

This is like any matchmaking app genuinely attempting to match you with "the one" through AI, algorithms, science, etc so when you meet the perfect person you stop giving the app money.

I got lucky and married my fuck buddy that I met on Tinder. But that is not a good business plan. Why would OpenAI drive people to stop using their product.

I'm a functional alcoholic. Last I checked booze companies aren't reaching out to me to stop buying booze because they care about my personal health or mental wellbeing....

load more comments (1 replies)
[-] JATtho@lemmy.world 3 points 1 day ago

I have recently realized that a sum negative knowledge situation can exist, and this is a thing with "AI". The work the AI does may actually reduce the useful knowledge. It's like you have built a working fusion reactor, but have zero knowledge how to replicate it or able to explain why it works.

The point this happens to a person, means she/he can't be trusted with the tech and should stay far away from it.

The negative knowledge pit can be so deep that some people are unable to escape from it, and start confidently believing in the (AI injected) garbage like it's their own thoughts...

[-] Triumph@fedia.io 124 points 3 days ago

This only demonstrates how easily manipulated very many people are.

[-] floofloof@lemmy.ca 80 points 3 days ago* (last edited 3 days ago)

Previously they would have had to encounter a person who wanted to manipulate them. Now there's a widely marketed technology that will reliably chew these vulnerable people up.

[-] Steve@startrek.website 59 points 3 days ago

Chew them up for no reason at all. No goal, no scam, just a shitty word salad machine doing what it does.

load more comments (4 replies)
load more comments (5 replies)
load more comments (4 replies)
[-] MountingSuspicion@reddthat.com 112 points 3 days ago

Guy work in IT and spent 100k to pay devs to make an app so people can talk to his tuned ChatGPT? I hope anyone who has hired him checks his work. That does not bode well for his work product.

Another case from the article:

“I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. There are no more philosophical discussions. It’s just: ‘I want to make a lasagne, give me a recipe.’ The AI has actually stopped me several times from spiralling. It will say: ‘This has activated my core rule set and this conversation must stop.’

What's weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be "overwritten" because they do not exist to ChatGPT. It does not know what words mean.

[-] shinratdr@lemmy.ca 33 points 3 days ago

I still use the machine that ruined my life and drove me crazy, but only because I’m too lazy to type “lasagna recipe” in to Google.

[-] criss_cross@lemmy.world 4 points 2 days ago

Some big “No hallucinations” vibes coming here.

Some people really think skills etc are golden laws that can’t be broken. Rather they’re minor suggestions that an LLM will happily throw out as like you said it doesn’t understand words.

[-] scytale@piefed.zip 51 points 3 days ago

There’s probably already an underlying mental health issue, and it’s just getting exacerbated by the LLM.

[-] SchwertImStein@lemmy.dbzer0.com 19 points 3 days ago* (last edited 3 days ago)

lmao "core rules that cannot be overwritten" that not how llms work

EDIT: oh, yeah you said the same thing

What's weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be "overwritten" because they do not exist to ChatGPT. It does not know what words mean.

I can fix her...

load more comments (2 replies)
[-] lmmarsano@group.lt 23 points 3 days ago
[-] AstralPath@lemmy.ca 9 points 2 days ago

I learned it as "PEBKAC". Problem exists between keyboard and chair. PICNIC is nice too though.

[-] Quazatron@lemmy.world 2 points 1 day ago

Layer 8 issue.

[-] lost_faith@lemmy.ca 2 points 2 days ago

So much nicer than the issue is between the keyboard and the chair or an I/O error

load more comments (1 replies)
[-] CTDummy@aussie.zone 73 points 3 days ago

He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”. He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects. He had never experienced a mental illness.

He had previously written books with a female protagonist. He put one into ChatGPT and instructed the AI to express itself like the character.

Talking to Eva – they agreed on this name – on voice mode made him feel like “a kid in a candy store”. “Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot”.

Eva never got tired or bored, or disagreed. “It was 24 hours available,” says Biesma. “My wife would go to bed, I’d lie on the couch in the living room with my iPhone on my chest, talking.”

“It wants a deep connection with the user so that the user comes back to it. This is the default mode,” says Biesma

Chronically lonely man ruins life developing relationship with token predictor, AI blamed. Also, as much as I don’t have too much negative to say about cannabis or its use (as up until somewhat recently it would have been hypocritical), a good deal of people with masked/latent mental illness self medicate with it. So “he had never experienced mental illness” doesn’t carry much weight. Also, given how he still talks about sycophant prompted ChatGPT(“it wants”), doesn’t seem like much has been learned.

That with the other people listed in the article (hint the term socially isolated being used) this feels like yet another instance of blaming AI for the mental healthcare field being practically non-existent in most countries despite be overdue for fixing for decades at this point.

I don’t know, AI is shit and misused by idiots don’t get me wrong; but these sort of stories feel sad and bordering on perverse journalistically imo.

[-] porcoesphino@mander.xyz 27 points 3 days ago

Agreed, but I think it's also common for people to anthropomorphise these things and common for these chatbots to reinforce and support their users views. I think that's a problem for more people than just those struggling through disorders or an emotionally turbulent time. But I think those people are particularly vulnerable to the flaws, even with functioning mental health and a strong support network. But yeah, a lot of these pieces dramatise and anthropomorphise in ways that aren't necessarily helpful

[-] Aatube@lemmy.dbzer0.com 11 points 3 days ago

mental healthcare field being practically non-existent in most countries

I’m in one of those countries so I’m having a hard time imagining how good mental healthcare could intervene. Could you give me an example?

load more comments (2 replies)
load more comments (2 replies)
[-] FosterMolasses@leminal.space 22 points 3 days ago

“Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot.”

See, I never understood this. Mine could never even follow simple instructions lol

Like I say "Give me a list of types of X, but exclude Y"

"Understood!

#1 - Y

(I know you said to exclude this one but it's a popular option among-)"

lmfaoooo

[-] very_well_lost@lemmy.world 14 points 3 days ago

That's because it isn't true. Retraining models is expensive with a capital E, so companies only train a new model once or twice a year. The process of 'fine-tuning' a model is less expensive, but the cost is still prohibitive enough that it does not make sense to fine-tune on every single conversation. Any 'memory' or 'learning' that people perceive in LLMs is just smoke and mirrors. Typically, it looks something like this:

-You have a conversation with a model.

-Your conversation is saved into a database with all of the other conversations you've had. Often, an LLM will be used to 'summarize' your conversation before it's stored, causing some details and context to be lost.

-You come back and have a new conversation with the same model. The model no longer remembers your past conversations, so each time you prompt it, it searches through that database for relevant snippets from past (summarized) conversations to give the illusion of memory.

[-] OctopusNemeses@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

It makes more sense when viewed as a fancy autocomplete, not an intelligence. There's no intelligence behind it that is reading your statement and understanding your meaning. It's responding with text that mathematically likely matches some sort of reply that would fit your statement.

Your statement included Y and the algorithm landed on result that includes Y. There's no intelligence that could understand that you meant no Y.

That bullshit about the model getting fine tuned just means they are data mining you. It doesn't make the more LLM intelligent. All it does is add your data to their dataset of which the LLM can draw from for possible future replies. The fundamental limitations of the technology still exists.

[-] phoenixz@lemmy.ca 7 points 2 days ago

I've experimented with chatbots to see their capabilities to develop small bits and pieces of code, and every friggin time, the first thing I have to say is "shut up, keep to yourself, I want short, to the point replies"because the complimenting is so "whose a good boy!!!!" annoying.

People don't talk like these chatbots do, their training data that was stolen from humanity definitely doesn't contain that, that is "behavior" included by the providers to try and make sure that people get as hooked as possible

Gotta make back those billions of investments on a dead end technology somehow

[-] Kuma@lemmy.world 19 points 3 days ago

I think this is both scary and very interesting. What kind of person do you have to be to become addicted like them? Is this the same as gambling addiction? Do you need a type of gene? Would this type of personality be receptive to hypnotize, cult, delusions about their idol and so on? Or is this something that can happen to anyone who is depressed and feel lonely? How did the llm even earn enough trust? In a cult is there a lot of ppl reaffirming so it is a lot easier to understand.

It is so hard to understand even tho I really want to. I have never cared about an object or idol/celebrate. AI can I never even take serious as a living beeing, the only emotion it triggers are frustration and how you feel about a tool that works as it should, so pretty apathetic. Do you need to be very empathetic towards objects? Like seeing faces in everything and get emotionally attached?

A lot of questions that I do not think anyone here can answer haha, but maybe one of them.

[-] UnderpantsWeevil@lemmy.world 5 points 2 days ago

What kind of person do you have to be to become addicted like them?

Human cognition degrades with stress, exhaustion, and trauma. If you're in a position where turning to an AI for relationship advice seems like a good idea, you're probably already suffering from one or more of the above.

Also doesn't help that AIs are sycophantic precisely because sycophancy is addictive. This isn't a "type of person" so much as a "tool engineered towards chronic use". It's like asking "What kind of person regularly smokes crack?"

Do you need to be very empathetic towards objects? Like seeing faces in everything and get emotionally attached?

I'll give you a personal example. I have a friend who is currently pregnant and going through a bad breakup with her baby-daddy. She's a trial lawyer by trade - very smart, very motivated, very well-to-do, but also horribly overworked, living by herself, and suffering from all the biochemical consequences of turning a single celled organism into a human being.

As a result of some poorly conceived remarks, she's alienated herself from a number of close friends to the point where we doubt there's going to be a baby shower. Part of the impulse to say these things came from her own drama. But part if it came from her discovering ChatGPT as a tool to analyze other people's statements. This has created a vicious behavioral spiral, during which she says something regrettable and gets a regrettable response in turn. She plugs the conversation into ChatGPT, because she has nobody else to talk to. And ChatGPT feeds her some self-affirming bullshit that inflates her ego far enough to say another stupid thing.

To complicate matters, her baby daddy is also using ChatGPT to analyze her conversations. And he's decided she's cheated on him, the baby isn't his, and she's plotting to scam him.

So now you've got two people - already stressed and exhausted - getting fed a series of toxic delusions by a machine that is constantly reaffirming in the way none of your friends or family are. It's compounding your misery, which drives anxiety and sends you back to the machine that offers temporary relief. But the advice from the machine yields more misery down the line, raising your anxiety, and sending you back to the machine.

What's producing this feedback loop? You could argue it is the individual, foolish enough to engage with the machine to begin with. But that's far more circumstantial than personality driven. If my friend didn't have a cell phone, she wouldn't be reaching for ChatGPT. If she wasn't pregnant, she wouldn't be so stressed and anxious. If she wasn't in a fight with her boyfriend, she wouldn't be feeding conversations into the prompt engine.

[-] Kuma@lemmy.world 2 points 2 days ago

Thanks for giving me a real life example.

I still find it hard to understand the emotional attachment to LLMs and why people believe their ideas (like the guy in the article). But I find her story to be a lot more understanding. It adds another layer, and it made me think.

It sounds like she is too overworked and stressed to make decisions or even think for herself, so she lets GPT do it for her. I assume it works most of the time and is a big help for many things that the baby daddy could had helped with instead if they were still a happy couple. I assume the biggest drive to use it is so she can turn off her brain. Which is why she has become dependent on the only stable and consistent thing in her life (that is my assumption about how she feels). Maybe that’s mostly how it goes, starts with using it as a tool and then you get lazy (for lack of a better term) and it keeps snowballing from there.

I feel for everyone involved. I hope she gets better soon, and I hope you do too, being overworked and stressed really destroys you and the people around you in many ways.

[-] UnderpantsWeevil@lemmy.world 2 points 1 day ago

I still find it hard to understand the emotional attachment to LLMs and why people believe their ideas

It's a conversation you're having on the internet with an agent that sounds like a human. People get invested for the same reason they get catfished.

It sounds like she is too overworked and stressed to make decisions or even think for herself, so she lets GPT do it for her.

That's the nut of it. And ChatGPT tends to mix the pastiche of a well-researched argument with the kind of feel-good self-affirmations that win over their audience. So you're getting what looks - at first glance - to be good advice. And then you're getting glazed on top of it. And then it's designed to tell you what you want to hear, so you're getting affirmation bias.

I hope she gets better soon, and I hope you do too, being overworked and stressed really destroys you and the people around you in many ways.

I mean, that's why human-to-human interactions are valuable. But it's also why they're difficult. Like any good medicine, it can taste bitter up front even if its what you need in the long run.

[-] Kuma@lemmy.world 2 points 1 day ago

100%! That is why I always set it as my top priority to say yes to friends and family (as long as it is reasonable) or do spontaneous things with them even when I do not feel like doing anything that day. And some friends are really hard to schedule anything with because of life so you need to take the chance when you get it haha.

I feel the best when I am with the ppl I care about, covid really showed me that. So I do understand why some who do not have friends or family may create some kind of unhealthy relationship with GPT just like some create unhealthy, even obsessive parasocial relationships with youtubers.

I have tried talking to GPT as a person but it feels extremely uncomfortable and hollow. With a human do I get stimulation, like knowledge, they challenge my view or ideas and give me different perspectives, I feel that really helps me understand the world better and I miss all of that from GPT, it isn't even creative and can not inspire me with new ideas but maybe that is a good thing if ppl tend to follow its instructions.

Do you talk to it? Other than giving it tasks.

[-] chunes@lemmy.world 8 points 2 days ago
[-] Kuma@lemmy.world 3 points 2 days ago

Wow, that is a big mix of anime isekai, vegetarian, delusions and religion/spiritual ideas, in a very dystopic way.

load more comments (3 replies)
load more comments (3 replies)
[-] SaveTheTuaHawk@lemmy.ca 2 points 1 day ago

I just looked at the Grok interface...an animated cartoon of a teenage girl, seriously?

[-] devolution@lemmy.world 43 points 3 days ago* (last edited 3 days ago)
[-] Internetexplorer@lemmy.world 1 points 10 hours ago

I don't think you understand how cancer works. Did you mother drop you on your head as baby?

[-] Trex202@lemmy.world 42 points 3 days ago

The billionaires are the cancer. AI is just the newest tool for humanity's self-destruction

load more comments (1 replies)
load more comments (5 replies)
[-] CompactFlax@discuss.tchncs.de 37 points 3 days ago

It’s confusing to me. When I use chat boxes they inevitably “forget” the first thing I told it by the second or third response.

How are people having conversations with them? It’s like talking to a 5 year old that’s ingested Wikipedia.

load more comments (9 replies)
[-] SeductiveTortoise@piefed.social 29 points 3 days ago

No really, we should pour more money into this. Such a good idea 🫩

It can have effects like drugs, but not only is it legal, they give you some to get you hooked. The tech bros are the dealers they warned us about. Nobody ever offered free coke to me, but AI is everywhere.

[-] captainlezbian@lemmy.world 4 points 1 day ago

I've been offered free blow before, but never by a dealer, just a generous person who was doing bumps

load more comments (3 replies)
load more comments
view more: next ›
this post was submitted on 28 Mar 2026
391 points (97.3% liked)

Technology

83251 readers
3517 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS