Although I never use LLMs for any serious purpose, I do sometimes give them test questions to get firsthand experience of what their responses are like. This guide tracks quite well with what I see. The language is flowery and full of unnecessary metaphors, and the formatting has excessive bullet points, boldface, and emoji. (Seeing emoji in what is supposed to be a serious text really pisses me off for some reason.) When I read the text carefully, I can almost always find mistakes or severe omissions, even when they could easily have been remedied by searching the internet.
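
To make those tells concrete, here is a rough Python sketch (my own, not from the guide) that counts emoji, leading bullets, and boldface spans in a pasted response. The regexes and the very idea of counting these three things are my assumptions; it's a quick spot-check, not a real detector.

```python
# Minimal sketch: count stylistic tells (emoji, bullet points, boldface)
# in a block of text piped on stdin. Patterns are rough approximations.
import re
import sys

def style_tells(text: str) -> dict:
    return {
        # Rough emoji match: common pictographic Unicode ranges only.
        "emoji": len(re.findall(r"[\U0001F300-\U0001FAFF\u2700-\u27BF]", text)),
        # Markdown-style bullets at the start of a line.
        "bullets": len(re.findall(r"^\s*[-*\u2022] ", text, flags=re.MULTILINE)),
        # **bold** spans.
        "bold": len(re.findall(r"\*\*[^*]+\*\*", text)),
    }

if __name__ == "__main__":
    print(style_tells(sys.stdin.read()))
```

Run it as `python tells.py < response.txt`; high counts on a short answer line up with the pattern described above.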
This is perfectly in line with the fact that LLMs have no deep understanding, or that the understanding exists only in the mind of the user, as with rubber duck debugging. I agree with the "Barnum effect" comment (see this essay for what that refers to).