449
you are viewing a single comment's thread
view the rest of the comments
[-] LodeMike@lemmy.today 22 points 6 days ago

Oh they're trying. Experiments have made so many mistakes.

[-] eatCasserole@lemmy.world 8 points 6 days ago

I saw a story recently where a guy spent some time with a customer service chatbot, and ended up convincing it to give him 80% off, and then ordered like $6000 of stuff.

LLMs just don't produce reliable/predictable output, it's much easier for the user to get them to go off the rails.

[-] LodeMike@lemmy.today 2 points 5 days ago

Aren't there also tons of studies and math that show/prove they cant differentiate between instructions (e.g. from the company) vs data (e.g. that guy's messages)?

[-] eatCasserole@lemmy.world 2 points 5 days ago

Yes, I believe that is the case.

Of course in any other application, keeping instructions and data separate is very important. Like an SQL injection attack is when you're able to sneak instructions in where data is supposed to go, and then you can just delete the entire database, if you want. But with LLMs the distinction doesn't exist.

this post was submitted on 26 Feb 2026
449 points (98.3% liked)

Fuck AI

6177 readers
2653 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
MODERATORS