this post was submitted on 12 Oct 2023
3 points (80.0% liked)

AI

4141 readers

Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen.

founded 3 years ago
 

Dear all!

As I am quite new to all this, maybe this is a very noob question. I prompted Bing, Bard, and ChatGPT (3.5) with the same question. Bing straight up answered a different question, but delivered sources I could check. Bard and ChatGPT answered my question but invented (all of) their sources, just making up randomised author names and titles. Bard delivered links to said scientific articles, but when you followed a link, the article in question was completely different.

  1. How can I trust the delivered results when the sources are made up?

  2. And also: why? Why didn't it simply say, for example, that there are no meta-analyses?

  3. Is it better in the paid version of ChatGPT?

Thanks in advance!

top 6 comments
[–] [email protected] 3 points 1 year ago (1 children)

LLMs like the AIs you mentioned are, at bottom, just really good at predicting the next word. For example, given an input like "My dog likes", an AI may add the word "treats" to the end. They are so good at predicting the next word that they will write paragraphs that sound entirely human.
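To make that concrete, here is a toy sketch of next-word prediction. A real LLM uses a neural network over tokens trained on vast text, not word-pair counts, and the corpus below is made up for illustration; this only shows the basic idea of "pick a likely next word given what came before":

```python
from collections import Counter, defaultdict

# Tiny made-up corpus, just for illustration.
corpus = "my dog likes treats . my dog likes walks . my cat likes treats".split()

# Count which word follows which (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    # Return the word most often seen after `word` in the corpus.
    return bigrams[word].most_common(1)[0][0]

print(predict_next("likes"))  # "treats" follows "likes" twice, "walks" once
```

Note that the model never checks whether its output is *true*; it only picks what is statistically likely to come next, which is exactly why fluent-sounding fabrications come out of it.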

So, when they give you strange links and made-up names, it's probably because that text simply sounded plausible to the model.

You can only trust results if you verify them yourself.

I am not sure if the paid versions are better.

[–] [email protected] 2 points 1 year ago

Thanks!

The thing is, people tend to take the results given as facts. And if they have no means to check the sources (or don't bother or care), this whole AI thing might become a real disinformation circus.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

https://machinelearningmastery.com/a-gentle-introduction-to-hallucinations-in-large-language-models/

In short: Language models are not search engines or databases. They make up text. Hallucinations are unavoidable.

You can't trust them. This is still an open area of research. Maybe it'll get better in a few years, once researchers find good ways to mitigate this.

[–] [email protected] 1 points 1 year ago (1 children)

Awesome, skimmed through the wiki page. Need to delve into this!

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

AI is super fascinating, especially the Large Language Models (LLMs) that have become fashionable recently. You can read more about running them yourself (with a decent computer) and tinkering around on these two other Lemmy communities I like:

[–] [email protected] 2 points 1 year ago

Perplexity.ai has always provided sources, and it even has filters you can select by clicking the focus button. I dig it for academics, as there's an academic filter, and the sources it provides are ones I trust, such as PubMed, NIH, SemanticScholar, and NCBI. I also really dig that no account is needed, unless you want to use GPT-4. At times I've had to ask for additional details on the answer provided without GPT, but up to this point it's still by far my favorite AI to utilize.