What? That doesn't even make sense. I don't type what I'm thinking. (My only real use of LLMs has been trying to break them, so imagine the output of that. You could then try to imagine what I was attempting by messing with the system, but then you'd be doing the work. Say you see me send the same message twice. A logical conclusion would be that I was checking whether it gave different results when prompted the same way twice. But it's more likely I just made a copy-paste error and accidentally sent the wrong text the second time. So the person reading the logs is doing a lot of work here.) Ignoring all that, he also didn't think of the next case: people using an LLM to fake chat logs to optimize their chances of being hired. Good way to hire a lot of North Koreans.