Linux
Welcome to c/linux!
Welcome to our thriving Linux community! Whether you're a seasoned Linux enthusiast or just starting your journey, we're excited to have you here. Explore, learn, and collaborate with like-minded individuals who share a passion for open-source software and the endless possibilities it offers. Together, let's dive into the world of Linux and embrace the power of freedom, customization, and innovation. Enjoy your stay and feel free to join the vibrant discussions that await you!
Rules:
- Stay on topic: Posts and discussions should be related to Linux, open source software, and related technologies.
- Be respectful: Treat fellow community members with respect and courtesy.
- Quality over quantity: Share informative and thought-provoking content.
- No spam or self-promotion: Avoid excessive self-promotion or spamming.
- No NSFW adult content.
- Follow general Lemmy guidelines.
This is what I never understand about using AI in its current form. If you can't know whether it's right or wrong, and have to double-check it, why use it in the first place? Wouldn't it be more efficient and easier to just use the couple of petaflops you have in your own head to solve the problem or write that email?
I think, then, that it is more of a novelty that has yet to wear off for some people, and is consistently buoyed by the CEOs that push it.
"If you can't trust that a friend solved a sudoku puzzle for you without checking it first, why even bother?"
The obvious answer being that it's much easier to check the solution to a sudoku puzzle than it is to solve it yourself. If you have reasonable means to check compared to going out and starting from scratch, then even a modest enough rate of correct answers can save a ton of time. LLMs don't have that for me, but that's also because I've been doing research as a hobby for 10 years.
If you know anything about computation theory, there's an entire class of problems for which checking a solution is (relatively) trivial but finding a correct one is highly non-trivial.
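To make that concrete, here's a toy sketch of my own (not something anyone in the thread posted) using subset sum, a classic problem from that class: verifying a claimed answer is just summing a list, while finding one by brute force means trying every subset.

```python
from itertools import combinations

def verify(nums, target, subset):
    """Checking a claimed solution: linear in the size of the subset."""
    return all(x in nums for x in subset) and sum(subset) == target

def solve(nums, target):
    """Finding a solution by brute force: tries every subset (exponential)."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = solve(nums, 9)         # expensive exponential search
print(verify(nums, 9, answer))  # cheap check -> True
```

Same asymmetry as the sudoku example: if someone (or something) hands you a candidate answer, confirming it costs almost nothing compared to producing it yourself.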
It's easier to copy~~write~~edit an email than to write it from scratch.
Edit: I meant copyedit, not copywrite
Copywriting is writing from scratch, though specifically for marketing.
I wanted to say "copyedit"
My partner and I alternate doing the cooking. She doesn't know if I'm going to make a mistake and serve her something she doesn't like (it has happened). Does that mean she's better off doing all the cooking herself?
"If it's not perfect, it's useless" is a fallacy. So the question is, how good does it have to be to be useful? That depends on the task, and especially on the cost (however you measure it - dollars or hours or whatever) of verifying whether the result is good compared to the cost of a person doing the task.
Does she put glue on your pizza?
Not yet...
When you cook well, you can eat the food.
When the bot says something, you always need to look up if it's correct. That's the 'cook a new meal from scratch' bit, not the 'taste it' bit.
You need to look things up every time, or do the taste test by asking if the bot's answer 'smells true' (which is tempting, but a bad idea).
If you are using the bot just to perform things that you could easily look up, then yes, that is pointless.
"Food I don't like" as an output isn't really comparable to "information that is factually incorrect."
It's comparable because it's a negative outcome that may cost something (cooking a new meal, ordering a takeaway) to fix, but can be checked quite easily. Information that is factually incorrect has a negative outcome as well, and can also be checked quite easily - but the negative outcome, and the ease of checking, varies vastly across the space of all possible information.
I am encouraging you to think about situations where the negative outcome is not that bad, and the ease of checking quite high. Does that make using AI more practical?
It's subjective vs objective. They're not really comparable at all.
The objective reality of an AI hallucination being wrong is not what's important though; what is important is the effect it has on people, which will in part be subjective.
Nothing prevents you from comparing harms and ease of checking.
It is very important. We're just going to have to agree to disagree.
Well you certainly aren't giving me any reason to agree... :/
The petaflops sometimes... flop. The only use case I personally have for LLMs - and they are brilliant at it - is when a word just won't come to mind: I can give a precise description of it, but my brain refuses to produce the word, in either English or Spanish.