
[–] [email protected] 87 points 1 year ago (1 children)

"We asked a Chat Bot to solve a problem that already has a solution and it did ok."

[–] [email protected] 58 points 1 year ago (2 children)

to solve a problem that already has a solution

And whose solution was part of its training set...

[–] [email protected] 19 points 1 year ago* (last edited 1 year ago) (1 children)

half the time hallucinating something crazy into the mix.

Another funny one: yeah, it's perfect, we just need to solve this small problem of it hallucinating.

Ahem... solving hallucination is the "no, it actually has to understand what it is doing" part, a.k.a. the actual intelligence. That's the genuinely big, hard problem: actually understanding what it is being asked to do and which solutions to that ask are sane, rational and workable. Understanding the problem and understanding the answer, and ruling out wrong answers. Actual analysis, understanding and intelligence.

[–] [email protected] 9 points 1 year ago (1 children)

Not only that, but the same variables that turn on "hallucination" are the ones that make it interesting.

By the very design of generative LLMs, the same knob that makes them unpredictable makes them invent "facts". If they're 100% predictable they're useless, because they just regurgitate word for word something that was in the training data. But as soon as they're not 100% predictable, they generate word sequences in a way that humans interpret as lying or hallucinating.

So you can't have a generative LLM that is "creative", in that it comes up with a novel sequence of words, without also having "hallucinations".

[–] [email protected] 4 points 1 year ago (1 children)

the same knob that makes them unpredictable makes them invent “facts”.

This isn't what makes them invent facts, or at least not the only (or main?) reason. Fake references, for example, arise because the model encounters references in its training text, so it knows what they look like and where they should be used. It just doesn't know what a reference is, or that it's supposed to point to something real that actually says what the text implies it says.

[–] [email protected] 1 points 1 year ago (1 children)

so it knows what they look like and where they should be used

Right, and if it's set to a "strict" setting where it only ever picks the single most likely next word, then when the words leading up to a reference match a reference it has seen before, it will spit out that specific reference from its training data. But when it's set to be "creative" and picks words that are a good but not perfect match, it will spit out references that are plausible but don't exist.

So, if you want it to only use real references, you have to set it up to not be at all creative and always use the perfect next word. But, that setting isn't very interesting because it just word-for-word spits out whatever was in its training data. If you want it to be creative, it will "daydream" references that don't exist. The same knob controls both behaviours.

[–] [email protected] 1 points 1 year ago

That's not how it works at all. That's not even how references work.