
When German journalist Martin Bernklau typed his name and location into Microsoft's Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot's results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted. 

But why did Copilot hallucinate these terrible and false accusations?

[–] [email protected] 13 points 2 days ago* (last edited 2 days ago) (3 children)

yes it is, and it doesn't work.

edit: to expand: if you're generating data, it's an estimation. The network will learn the same biases and make the same mistakes and assumptions you did when generating the data. Also, outliers won't be in the set (because you didn't know about them, so the network never sees any).
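
A toy sketch of what that comment is describing, purely for illustration (all the distributions and numbers are made up): a "student" model fit only on data sampled from a biased "teacher" inherits the teacher's bias, and the rare outliers the teacher never produces simply don't exist in its training set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real process: mostly values near 0, plus a rare outlier cluster near 8.
real = np.concatenate([rng.normal(0, 1, 9900), rng.normal(8, 1, 100)])

# "Teacher" model: a single Gaussian fit to the real data, with a built-in
# bias (it overestimates the mean by 0.5) and no knowledge of the outliers.
teacher_mu = real.mean() + 0.5
teacher_sigma = 1.0

# Synthetic dataset generated by the teacher.
synthetic = rng.normal(teacher_mu, teacher_sigma, 10_000)

# "Student" model fit only on the synthetic data.
student_mu, student_sigma = synthetic.mean(), synthetic.std()

print(f"real mean={real.mean():.2f}, real max={real.max():.2f}")
print(f"student mean={student_mu:.2f}, max value it ever saw={synthetic.max():.2f}")
# The student reproduces the teacher's biased mean, and the outlier
# cluster around 8 never appears anywhere in its training data.
```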

[–] [email protected] 1 points 1 day ago

Microsoft's Dolphin and Phi models have used this successfully, and there's some evidence that all newer models use big LLMs to produce synthetic data (like when asked, they answer that they're ChatGPT or Claude, hinting that at least some of the dataset comes from those models).
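
For reference, the kind of pipeline being alluded to (distilling synthetic instruction data from a bigger "teacher" model) looks roughly like this. This is a hedged sketch, not any vendor's actual code; `ask_teacher_model` is a hypothetical stand-in for whatever API the large model is served through.

```python
import json

def ask_teacher_model(prompt: str) -> str:
    """Hypothetical call to a large 'teacher' LLM. Stubbed with a canned
    reply here; in a real pipeline this is where provider-specific phrasing
    like "I am ChatGPT" can leak into the synthetic dataset."""
    return f"(teacher model's answer to: {prompt})"

seed_instructions = [
    "Explain why the sky is blue to a 10-year-old.",
    "Summarise the plot of Hamlet in three sentences.",
]

# Build an instruction/response dataset by querying the teacher,
# which is then used to fine-tune a smaller student model.
with open("synthetic_train.jsonl", "w") as f:
    for instruction in seed_instructions:
        response = ask_teacher_model(instruction)
        f.write(json.dumps({"instruction": instruction,
                            "response": response}) + "\n")
```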

[–] [email protected] -1 points 2 days ago (1 children)

Alpaca is successfully doing this, no?

[–] [email protected] 6 points 2 days ago (1 children)

from their own site:

Alpaca also exhibits several common deficiencies of language models, including hallucination, toxicity, and stereotypes. Hallucination in particular seems to be a common failure mode for Alpaca, even compared to text-davinci-003.

[–] [email protected] -2 points 2 days ago (1 children)

So do GPT-3 and 4; they're still in use, and it's cheaper.

[–] [email protected] 4 points 2 days ago

Yeah, what's your point? I said hallucinations are not a solvable problem with LLMs. You mentioned that Alpaca used synthetic data successfully. By their own admission, all the problems are still there; some are worse.

[–] [email protected] -4 points 2 days ago* (last edited 2 days ago) (1 children)

It needs to be retrained on the responses it receives from its conversation partner. Its previous output provides context for its partner's responses.

It recognizes when it is told that it is wrong. It is fed data showing that certain outputs often invite "you're wrong" feedback from its partners, and it is instructed to minimize such feedback.

It is not (yet) developing true intelligence. It is simply learning to bias its responses in such a way that its audience doesn't immediately call it a liar.
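
A loose illustration of that last point (a toy sketch, not how any real RLHF pipeline is implemented, and all names and probabilities are invented): a learner rewarded only for avoiding "you're wrong" feedback converges on whatever answer the audience pushes back on least, whether or not it is true.

```python
import random

random.seed(0)

answers = ["true but unpopular answer", "confident-sounding but wrong answer"]

# How often the "audience" replies "you're wrong" to each answer.
# In this toy setup it pushes back on the true answer MORE often.
pushback = {answers[0]: 0.6, answers[1]: 0.2}

complaints = {a: 0 for a in answers}
shown = {a: 0 for a in answers}

# Feedback-minimising learner: explore both answers for a while, then
# keep serving whichever one drew the least "you're wrong" feedback.
for step in range(1000):
    if step < 200:
        a = random.choice(answers)
    else:
        a = min(answers, key=lambda x: complaints[x] / max(shown[x], 1))
    shown[a] += 1
    if random.random() < pushback[a]:
        complaints[a] += 1

preferred = min(answers, key=lambda x: complaints[x] / max(shown[x], 1))
print("learner settles on:", preferred)
# It optimises for not being called a liar, not for being right.
```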

[–] [email protected] 9 points 2 days ago (1 children)

Yeah, that implies that the other network(s) can tell right from wrong. Which they can't, because if they could, the problem wouldn't need solving.