[Opinion] AI finds errors in 90% of Wikipedia's best articles
(en.wikipedia.org)
legitimate use of an LLM
I find that a very simple heuristic for whether a use of an LLM is good is whether its output is treated as a finished product or not. Here the human uses it to identify possible errors, verifies the LLM's output before acting, and the AI isn't involved at all in the corrections themselves.
The only danger I see is that errors the LLM didn't find will continue to go undiscovered, but they probably would have gone undiscovered without the LLM too.
The first part you wrote is a bit hard to parse, but I think this is related:
I think the problematic part of most genAI use cases is validation at the end. If you're doing something that has a large amount of exploration but a small amount of validation, like this, then it's useful.
A friend was using it to learn the Linux command line. That can be framed as having a single command at the end that you copy, paste, and validate. It isn't perfect, because the explanation could still be off and that part wouldn't be validated, but I think it's still a better use case than most.
If you're asking for the grand unified theory of gravity, then the validation at the end is the entire problem.
Or it falsely flags something as an error, and the human has so much faith in the system that they assume it must be correct, and either wastes time hunting for a nonexistent problem or bends reality to "correct" it, a human form of hallucinating bullshit. That's especially dangerous when the claim that there's an error supports the individual's personal beliefs.
Edit:
I’ll call it “AI-induced confirmation bias”, a cousin of AI-induced psychosis.
Yes and no. I have enjoyed reading through this approach, but it seems like a slippery slope from this to "vibe knowledge", where LLMs are used to actually add or infer information.
The issue is that some people are lazy cheaters no matter what you do. Banning every tool because of those people isn’t helpful to the rest of humanity.
Don't discard a good technique because it can be implemented poorly.
"AI" summed up. 95% of the time it's pointless bullshit being shoehorned into absolutely everything. 5% of the time it can be useful.
like Comic Sans
Something weird about corporations spending billions on "the Comic Sans of technology"
Yep. Let it flag potential problems and have humans react to them, e.g. by reviewing and correcting things manually. AI can do a lot of things quickly and efficiently, but it must be supervised like a toddler.
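To make that concrete, the flag-then-verify loop a few of us are describing could look something like the sketch below. This is just an illustration, not any particular tool's API; `ask_llm`, the prompt, and the function names are all hypothetical stand-ins:

```python
# Minimal sketch of a "model flags, human verifies" workflow.
# ask_llm is a hypothetical placeholder for whatever model API you use.

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError

def flag_candidate_errors(article_text: str) -> list[str]:
    # The model only proposes candidates; nothing is changed automatically.
    reply = ask_llm(
        "List factual claims in the following text that look wrong, "
        "one per line, with no other commentary:\n\n" + article_text
    )
    return [line.strip() for line in reply.splitlines() if line.strip()]

def review(article_text: str) -> list[str]:
    confirmed = []
    for claim in flag_candidate_errors(article_text):
        # The human is the validation step: every flag gets checked
        # against sources before any correction is made.
        answer = input(f"Flagged: {claim!r} - confirmed after checking? [y/N] ")
        if answer.strip().lower() == "y":
            confirmed.append(claim)
    return confirmed
```

The point of the design is that the model never edits anything; it only produces a worklist, and a human decides what, if anything, actually changes.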
This is an interesting idea:
So… the same as most employees but cheaper.
People here are above average and overestimate the vast majority of humanity.
Wait, you mean using a Large Language Model that was created to parse walls of text to parse walls of text is a legit use?
Those kids at OpenAI would've been very upset if they could read.
Even for that it's mid at best. I often try using Copilot at work and it makes shit up constantly.
Chatbots aren’t the worst use case either, even though we’re headed in the wrong direction.