submitted 2 days ago* (last edited 2 days ago) by mayabuttreeks@lemmy.ca to c/fuck_ai@lemmy.world

link to archived Reddit thread; original post removed/deleted

[-] jj4211@lemmy.world 4 points 1 day ago

The problem is that whoever is checking the result in this case had to do the work anyway, and in that case... why bother with an LLM that can't be trusted to pull the data in the first place?

I suppose they could take the facts and figures that a human pulled and have an LLM verbose it up for people who for whatever reason want needlessly verbose BS. Or maybe an LLM can do a review of the human generated report to help identify potential awkward writing or inconsistencies. But delegating work that you have to do anyway to double check the work seems pointless.

[-] pseudo@jlai.lu 1 point 1 day ago

Like someone here said, "trust is also a thing". Once you've checked a few times that the process is right and the results are right, you only need to spot-check occasionally. Unfortunately, that's not what happened in this story.
