this post was submitted on 08 Dec 2025
TechTakes
It might have already been posted here, but this Wikipedia guide to recognizing AI slop is such a good resource.
A fairly good and nuanced guide. No magic silver-bullet shibboleths for us. I particularly like the observation that in reality LLMs are less likely to use unusual words, because unusual words are, by definition, statistically unlikely in the training data.
Also, this guide explains in one step why people with working bullshit detectors tend to immediately clock LLM output, while the executive class, whose whole existence is predicated on not discerning bullshit, are its greatest fans. A lot of us have seen A Guy In A Suit do this: intentionally avoid specifics to make himself/his company/his product look superficially better. Hell, the AI hype itself (and the blockchain and metaverse nonsense before it) relies heavily on this: never say specifics, always say "revolutionary technology, future, here to stay", and quickly run away if anyone tries to ask a question.
I've come to prefer the business executive's emptiness over the LLM's emptiness; at least the former usually expresses some personality. It's never entirely empty.
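The statistical point above (LLMs avoiding unusual words) can be sketched with a toy next-token distribution. Everything here is invented for illustration — the vocabulary, the logits, and the temperature values are not from any real model — but it shows why sampling, especially at the low temperatures chatbots typically use, concentrates probability mass on common words:

```python
# Toy illustration (not a real LLM): why sampling favors common words.
# The vocabulary and logit values below are made up for the demo.
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities at a given sampling temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits: common words dominate rare ones,
# mirroring their frequency in training text.
vocab = ["good", "great", "important", "lambent", "susurrus"]
logits = [8.0, 7.5, 7.0, 2.0, 1.5]

for t in (1.0, 0.7):
    probs = softmax(logits, temperature=t)
    rare_mass = probs[3] + probs[4]  # "lambent" + "susurrus"
    print(f"temperature={t}: P(rare word) = {rare_mass:.5f}")
```

Already small at temperature 1.0, the probability of picking a rare word shrinks by another order of magnitude at 0.7 — lowering the temperature sharpens the distribution toward the most common tokens, which is exactly the blandness the guide describes.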
A quick search shows it hasn't been posted here until now - thanks for dropping it.
In a similar vein, there's a guide to recognising AI-extruded music on Newgrounds, written by two of the site's Audio Moderators. This has been posted here before, but having every "slop tell guide" in one place is more convenient.
Man, this is why human labour still reigns supreme. It's such a small thing to consider the context in which these resources would be useful and to group together related resources as you have done here, but actions like this are how we can genuinely construct new meaning in the world. Even if we could completely eradicate hallucinations and nonspecific waffle in LLM output, they would still be woefully inept at this kind of task — they're not good at making new stuff, for obvious reasons.
TL;DR: I appreciate you grouping these resources together for convenience. It's the kind of mindful action that makes me think usefully about community building and positive online discourse.
It's also the sort of thing that you wouldn't actually think to ask for until it became quite hard to sort out. Creating this kind of list over time as good resources are found is much more practical, and not the kind of thing that would likely be automated.
Exactly! It's basically a form of social informational infrastructure building
archive link
https://web.archive.org/web/20250917164701/https://www.newgrounds.com/wiki/help-information/site-moderation/how-to-detect-ai-audio
Although I never use LLMs for any serious purpose, I do sometimes give LLMs test questions in order to get firsthand experience on what their responses are like. This guide tracks quite well with what I see. The language is flowery and full of unnecessary metaphors, and the formatting has excessive bullet points, boldface, and emoji. (Seeing emoji in what is supposed to be a serious text really pisses me off for some reason.) When I read the text carefully, I can almost always find mistakes or severe omissions, even when the mistake could easily be remedied by searching the internet.
This is perfectly in line with the fact that LLMs do not have deep understanding, or the understanding is only in the mind of the user, such as with rubber duck debugging. I agree with the "Barnum-effect" comment (see this essay for what that refers to).