this post was submitted on 25 Feb 2026
Technology
So we're just throwing in the towel on what words mean now I guess. Anything can be a neuron.
But can anything be an H-NEURON?
Any data that makes AI people upset is an H-neuron. This includes both inaccurate responses, and accurate responses that the model designers were attempting to censor, such as "harmful" content.
Infuriatingly, the researchers actually insist that offensive material is not factual material.
no, they have to be the nodes responsible for the creation of hallucinations
And a "hallucination" is also an inaccurate humanization of the actual meaning: "statistical relationship that we AI folks don't like."
"Hallucinations" even include accurate data.
It is a trash marketing buzzword.
did you know that there is no sex going on in a Breeder Reactor?
https://en.wikipedia.org/wiki/Breeder_reactor
They're analogies to help us communicate ideas.
Nuclear energy companies aren’t trying to make people think that their reactors reproduce.
AI companies are trying to make people think that their software is intelligent.
The context matters.
A breeder reactor is creating something, which is like the outcome of breeding. That name fits.
a hallucination is seeing something that's not there, which also fits.
In AI, a "hallucination" is just as much "there" as a non-"hallucination." It's a way for scientists to stomp their foot and say that the wrong output is the computer's fault and not a natural consequence of how LLMs work.
Hallucination requires perception. LLMs are just statistical models and do not have perception.
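To make that concrete: next-token generation samples from a single probability distribution, and nothing in that distribution marks a token as true or false, so a "hallucination" comes out of exactly the same code path as a correct answer. A toy sketch (the vocabulary and probabilities here are made up for illustration):

```python
import random

# Made-up next-token distribution for "The capital of France is ...".
# The model only has probabilities; no token carries a truth flag.
vocab = {"Paris": 0.90, "Lyon": 0.07, "Mars": 0.03}

def sample_token(dist, rng):
    # Standard categorical sampling: the identical mechanism picks
    # every token, plausible or absurd.
    r = rng.random()
    cumulative = 0.0
    for token, p in dist.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point edge cases

rng = random.Random(0)
draws = [sample_token(vocab, rng) for _ in range(1000)]
# Roughly 3% of draws will be "Mars" -- a "hallucination" produced by
# the very same sampling step that usually produces "Paris".
print(draws.count("Mars"))
```

The point of the sketch: there is no separate "hallucinate" branch to blame; the wrong output is just a lower-probability draw from the same distribution.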
It was a cute name early on, now it is used to deflect when the output is just plain wrong.
I don't think anyone is confusing radiation propagation with being alive though.
The issue is that these things "communicate" with us, so granting them even more leeway to seem like they're thinking (they're not) only further muddies how people perceive them.
Hchicdfvhk!
https://en.wikipedia.org/wiki/Neural_network_(machine_learning)
it's a node in the system.
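For what it's worth, that node is nothing more than a weighted sum pushed through a squashing function. A minimal sketch (the function names are illustrative, not from any library):

```python
import math

def neuron(inputs, weights, bias):
    # A machine-learning "neuron" is only arithmetic:
    # a dot product of inputs and weights, plus a bias...
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed by a sigmoid nonlinearity into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# With zero weights and zero bias, the weighted sum is 0 and the
# sigmoid of 0 is exactly 0.5, whatever the inputs are.
print(neuron([1.0, 2.0], [0.0, 0.0], 0.0))  # 0.5
```

No biology involved; the analogy to neurons stops at "takes inputs, produces an output."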
kind of like how the CPU is the brain of the computer
and MITOCHONDRIA IS THE POWERHOUSE OF THE CELL