[-] WatDabney@sopuli.xyz 19 points 5 months ago* (last edited 5 months ago)

Effectively, what LLMs do is exactly the same thing that mentalists do - they wait for "prompts" to indicate your area(s) of interest, then feed you strings of words that are statistically likely to be well received.

Or in much simpler terms, and by design, they tell you what you want to hear.
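The "statistically likely strings of words" idea can be sketched with a toy bigram model. This is only an illustration of the underlying principle, not how an actual LLM works (real models use neural networks over long token contexts), and the tiny corpus and `next_word` helper here are made up for the example:

```python
from collections import Counter, defaultdict

# Toy "training" corpus.
corpus = "you want to hear what you want to hear".split()

# Count which word follows each word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prompt_word):
    # Emit the statistically most likely continuation of the prompt.
    counts = follows[prompt_word]
    return counts.most_common(1)[0][0] if counts else None

print(next_word("want"))  # -> "to", the most frequent follower in the corpus
```

The model has no notion of truth; it only knows which continuation was most common in its training data, which is the point being made above.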

[-] Seminar2250@awful.systems 21 points 5 months ago* (last edited 5 months ago)

Baldur Bjarnason has a piece from July 2023 called The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con (https://softwarecrisis.dev/letters/llmentalist/) that you might appreciate, if you haven't read it yet. :)

[-] WatDabney@sopuli.xyz 11 points 5 months ago

I did lift that basic concept from an article I read, and I would assume it was that one. Thanks for the link.

this post was submitted on 31 Oct 2025
82 points (98.8% liked)

TechTakes
