In the current chapter of “I go looking on linkedin for sneer-bait and not jobs, oh hey literally the first thing I see is a pile of shit”
Text in image:
Can ChatGPT pick every 3rd letter in "umbrella"? You'd expect "b" and "l". Easy, right?
Nope. It will get it wrong.
Why? Because it doesn't see letters the way we do.
We see:
u-m-b-r-e-l-l-a
ChatGPT sees something like:
"umb" | "rell" | "a"
These are tokens — chunks of text that aren't always full words or letters.
So when you ask for "every 3rd letter," it has to decode the prompt, map it to tokens, simulate how you might count, and then guess what you really meant.
Spoiler: if it's not given a chance to decode the tokens into individual letters as a separate step, it will stumble.
Why does this matter?
Because the better we understand how LLMs think, the better results we'll get.
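For anyone who wants to poke at this themselves, here's a minimal sketch using OpenAI's open-source tiktoken tokenizer (my choice of library and of the cl100k_base encoding, not something from the post; the actual split of "umbrella" depends on which encoding you load, so the "umb" | "rell" | "a" split above is illustrative rather than guaranteed):

# Sketch: show how a GPT-style tokenizer chunks "umbrella", assuming tiktoken
# is installed (pip install tiktoken) and using the cl100k_base encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("umbrella")
# Decode each token id back to its text chunk to see the sub-word pieces.
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in tokens]
print(pieces)  # sub-word chunks, not individual letters

# Meanwhile, "every 3rd letter" is a one-liner once you work on characters:
print("umbrella"[2::3])  # -> "bl", i.e. the letters b and l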
Well, it's a perfect demonstration that LLMs flat-out do not think like us. Even a goddamn five-year-old could work this shit out with flying colours.
Yeah, exactly. Loving the dude's mental gymnastics to avoid the simplest answer and instead spin it into moralising about promptfondling more good.
LLMs cannot fail, they can only be prompted incorrectly. (To be clear, since I know there will be people who think this is a good thing: I mean it in a derogatory way.)