this post was submitted on 18 May 2024
714 points (89.9% liked)

Just Post

640 readers
90 users here now

Just post something 💛

founded 1 year ago
[–] [email protected] 4 points 6 months ago (2 children)

Out of curiosity - do you think your opinion will change once on-device (i.e., power efficient) AI becomes the norm?

The capabilities and utility of contemporary LLMs are wildly overstated by many, but the claim that they are completely useless is dubious imo. Nothing they generate can be treated as fact (and shame on anyone who suggests otherwise), but I can say with certainty that they have made my life as an indie programmer much easier, and I know I'm not alone in that.

[–] [email protected] 5 points 6 months ago (1 children)

Okay, sorry, here is my real response since I thought you were talking about something else due to being in two conversations at once in the same thread:

My opinion will change when AIs stop being untrustworthy. Until I can have any sort of certainty that it isn't just making shit up, it won't change.

Not too long ago, I asked ChatGPT to tell me who I am. I have a unique name. I also have a long-established internet media presence under that name. I'm not famous, but I've got enough prominence for it to know exactly who I am.

It had no idea whatsoever. It got it entirely wrong. It said I was a business entrepreneur who gave motivational lectures.

[–] [email protected] 1 points 6 months ago (1 children)

idk bro, that sounds like saying search engines aren't useful cuz you couldn't google yourself.

[–] [email protected] 1 points 6 months ago* (last edited 6 months ago) (1 children)

Except I can Google myself. Links and photos come up. None of them say I'm a business entrepreneur who gives motivational lectures.

[–] [email protected] 0 points 6 months ago (1 children)

I'm sorry to break this to you, but you probably weren't in the training dataset enough for the model to learn of your online presence. Yes, LLMs will currently hallucinate when they don't have enough data points (until they learn their own limitations), but that's not a fundamentally unsolvable problem (not even top 10, I'd say).

There already are models that weigh their own knowledge and just apologize if they can't answer instead of making shit up (e.g. Claude).

[–] [email protected] 2 points 6 months ago

Considering these LLMs are being integrated with search engines in a way that might work toward replacing them, don't you think their training should cover someone who turns up a bunch of hits when you Google them?