submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

Just in case someone doesn't know, LLM in this case means "Large Language Model", which is just the technical term for things like ChatGPT.

[-] [email protected] 2 points 1 year ago

It's like searching for a picture of Prague, seeing a drawing of Delhi, and then concluding you've been there. It's not about laziness. It's about accuracy.

[-] [email protected] 2 points 1 year ago

Yeah, we're not there yet, but the way things are going, I don't see it being THAT far off. Maybe within 5 years it'll be as accurate as anything else.

[-] [email protected] 2 points 1 year ago

Yes, I think if we can get an LLM to work while providing high-quality, real-world sources, it will be a game-changing technology across domains. As it stands, though, it's like believing a magician really does magic. The tricks they employ are incredibly useful in a magic show, but if you expect them to really cast a fireball in your defense, you'll be sorely disappointed.

[-] [email protected] 1 points 1 year ago

Huh, I was under the impression that ChatGPT cited its sources. I know others do.

[-] [email protected] 0 points 1 year ago* (last edited 1 year ago)

Did you read my comment at all? I was replying to a comment about the level of effort, which is what my analogy addresses.

Your hyperbole notwithstanding, if the accuracy isn't good enough for you, don't use it. Lots of people find LLMs useful even in their current state of imperfect accuracy.

[-] [email protected] 1 points 1 year ago

Did you read mine? If you wanted a depiction of a city, it's more than good enough. In fact, it's amazing what it can do in that respect. My point is that it gets major details wrong in a way that feels right. That's where the danger lies.

If your GPS consistently brought you to the wrong place, but you thought it was the right place, do you not think that might be a problem? No matter how many people found it useful, it could be dangerously wrong in some cases.

My worry is precisely because people find it so useful to "look things up", paired with the fact that it has a tendency to wildly construct 'information' that feels true. It's a real, serious problem that people need to understand when using it like that.

this post was submitted on 29 Jan 2024
558 points (97.8% liked)

ADHD memes

10553 readers
1134 users here now

ADHD Memes

The lighter side of ADHD


Rules

  1. No Party Pooping

Other ND communities

founded 2 years ago
MODERATORS