Everything's fine. Nothing to see here.
(thelemmy.club)
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
When you delegate to a person, a tool, or a process, you check the result. You make sure the delegated tasks get done correctly and that the results are what you expected.
Finding out only by luck, months later, that this wasn't the case shows incompetence. Look for the incompetent.
Yeah. Trust is also a thing, like if you delegate to a person that you've seen getting the job done multiple times before, you won't check as closely.
But this person asked to verify and was told not to. Insane.
100%
Hallucinations are widely known; this is a collective failure of the whole chain of leadership.
The problem is that whoever is checking the result in this case had to do the work anyway, and in that case... why bother with an LLM that can't be trusted to pull the data in the first place?
I suppose they could take the facts and figures that a human pulled and have an LLM verbose it up for people who for whatever reason want needlessly verbose BS. Or maybe an LLM can do a review of the human generated report to help identify potential awkward writing or inconsistencies. But delegating work that you have to do anyway to double check the work seems pointless.
Like someone here said, "trust is also a thing". Once you've checked a few times that the process is right and the results are right, you only need to spot-check. Unfortunately, that's not what happened in this story.