Welcome to c/linux!
My partner and I alternate doing the cooking. She doesn't know if I'm going to make a mistake and serve her something she doesn't like (it has happened). Does that mean she's better off doing all the cooking herself?
"If it's not perfect, it's useless" is a fallacy. So the question is, how good does it have to be to be useful? That depends on the task, and especially on the cost (however you measure it - dollars or hours or whatever) of verifying whether the result is good compared to the cost of a person doing the task.
Does she put glue on your pizza?
Not yet...
When you cook well, you can eat the food.
When the bot says something, you always need to look up whether it's correct. That's the 'cook a new meal from scratch' bit, not the 'taste it' bit.
You need to look things up every time, or do the taste test by asking if the bot's answer 'smells true' (which is tempting, but a bad idea).
If you are using the bot just to perform things that you could easily look up, then yes, that is pointless.
"Food I don't like" as an output isn't really comparable to "information that is factually incorrect."
It's comparable because it's a negative outcome that may cost something (cooking a new meal, ordering a takeaway) to fix, but can be checked quite easily. Information that is factually incorrect has a negative outcome as well, and can also be checked quite easily - but the negative outcome, and the ease of checking, varies vastly across the space of all possible information.
I am encouraging you to think about situations where the negative outcome is not that bad, and the ease of checking quite high. Does that make using AI more practical?
It's subjective vs. objective. They're not really comparable at all.
The objective reality of an AI hallucination being wrong is not what's important though; what is important is the effect it has on people, which will in part be subjective.
Nothing prevents you from comparing harms and ease of checking.
It is very important. We're just going to have to agree to disagree.
Well you certainly aren't giving me any reason to agree... :/