all 17 comments
[-] WatDabney@sopuli.xyz 19 points 3 months ago* (last edited 3 months ago)

Effectively, what LLMs do is exactly the same thing that mentalists do - they wait for "prompts" to indicate your area(s) of interest, then feed you strings of words that are statistically likely to be well received.

Or in much simpler terms, and by design, they tell you what you want to hear.

[-] Seminar2250@awful.systems 21 points 3 months ago* (last edited 3 months ago)

Baldur Bjarnason has a piece from July 2023 called The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con^[https://softwarecrisis.dev/letters/llmentalist/] that you might appreciate, if you haven't read it yet. :)

[-] WatDabney@sopuli.xyz 11 points 3 months ago

I did lift that basic concept from an article I read, and I would assume it was that one. Thanks for the link.

[-] Kolanaki@pawb.social 15 points 3 months ago

In reality it just makes you stupid, but faster.

[-] swlabr@awful.systems 27 points 3 months ago
[-] dgerard@awful.systems 19 points 3 months ago

I'M DOING 1000 CALCULATIONS PER SECOND AND THEY'RE ALL FP4

[-] ceenote@lemmy.world 11 points 3 months ago

Wtf I love AI now

[-] Angelevo@feddit.nl 5 points 3 months ago

We did not need 'AI' to get to this point. People are freaking out over a mirror.

this post was submitted on 31 Oct 2025
82 points (98.8% liked)

TechTakes
