submitted 1 day ago* (last edited 1 day ago) by XLE@piefed.social to c/technology@lemmy.world

Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn't ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

[-] Buddahriffic@lemmy.world 10 points 10 hours ago

Funny, because medical diagnosis is actually one of the areas where AI can be great, just not fucking LLMs. It's not even really AI, but a decision tree that asks which symptoms are present and which are missing, eventually reaching the point where a doctor or nurse needs to do evaluations or tests to keep moving through the flowchart, until you hit a leaf: either a diagnosis (with ways to confirm or rule it out) or something new (at least to the system).

Problem is that this kind of system would need to be built up by doctors, though they could probably get a lot of the way there using journaling and some algorithm to convert the journals into the decision tree.

The end result would be a system that can start triage at the user's home to help determine the urgency of a medical visit (is this a get-to-the-ER-ASAP situation, a see-a-walk-in-or-family-doctor-this-week one, an it's-ok-if-you-can't-get-an-appointment-for-a-month one, or a stay-home-and-monitor-it one, seeking medical help if x, y, z happens?). It could then hand that info to the HCW you see next, who rechecks the things non-doctors often get wrong and picks up from there. Plus it helps doctors be more consistent, informs them when symptoms match conditions they aren't familiar with, and makes it harder for incompetence or apathy to hide behind a "just get rid of them" response.
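To make the flowchart idea concrete, here's a minimal sketch in Python. Every question, branch, and triage level in it is invented placeholder data, not real medical guidance; a real tree would be authored and curated by doctors:

```python
# Toy symptom-triage decision tree. Each internal node asks one yes/no
# question; each leaf carries an urgency level. All medical content here
# is an invented placeholder, not real triage guidance.

class Node:
    def __init__(self, question=None, yes=None, no=None, triage=None):
        self.question = question  # text shown to the user
        self.yes = yes            # subtree if the answer is yes
        self.no = no              # subtree if the answer is no
        self.triage = triage      # set only on leaf nodes

# Hand-built toy tree; doctors would author and curate the real one.
TREE = Node(
    question="Sudden, worst-ever headache?",
    yes=Node(triage="get to the ER ASAP"),
    no=Node(
        question="Headache lasting more than a week?",
        yes=Node(triage="see a walk-in or family doctor this week"),
        no=Node(triage="monitor at home; seek help if it worsens"),
    ),
)

def triage(node, answer_fn):
    """Walk the tree using answer_fn(question) -> bool."""
    while node.triage is None:
        node = node.yes if answer_fn(node.question) else node.no
    return node.triage

if __name__ == "__main__":
    ask = lambda q: input(q + " [y/n] ").strip().lower() == "y"
    print(triage(TREE, ask))
```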

Instead, people are trying to make AI doctors out of word correlation engines, like the Hardy Boys following a trail of random word associations (except reality isn't scripted to make them right in the end for laughs, the way South Park is).

[-] sheogorath@lemmy.world 3 points 2 hours ago

Yep, I've worked on systems like these, and we actually had doctors on our development team to make sure the diagnoses were accurate.

[-] selokichtli@lemmy.ml 3 points 6 hours ago

Have you seen LLMs trying to play chess? They can move some pieces alright, but at some point it's like they just decide to put their cat in the middle of the board. True chess engines, meanwhile, play at a level of their own; not even grandmasters can follow them.

[-] XLE@piefed.social 3 points 8 hours ago* (last edited 6 hours ago)

I think ~~I~~ you just described a conventional computer program. It would be easy to build, easy to debug when something went wrong, and easy to audit, both the source code and the data that went into it. I've seen rudimentary symptom checkers online since forever, and compared to the paper forms in doctors' offices, a digital one could actually expand to show the relevant sections.

Edit: you caught my typo

[-] nelly_man@lemmy.world 2 points 1 hour ago

They're talking more about Expert Systems or Inference Engines, which were among the earlier kinds of applications in AI research. In terms of software development, they're closer to databases than to traditional software: the system is built up by defining a repository of base facts and logical relationships, and the engine uses those to answer questions via formal logic.
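A toy sketch of the idea, with invented facts and rules; a real expert system would have a much richer rule language and a proper knowledge base:

```python
# Toy forward-chaining inference engine: start from base facts, apply
# if-then rules until nothing new can be derived. Facts and rules here
# are invented placeholders.

facts = {"fever", "cough"}

# Each rule: (set of premises, conclusion).
rules = [
    ({"fever", "cough"}, "respiratory_infection_possible"),
    ({"respiratory_infection_possible", "shortness_of_breath"},
     "urgent_evaluation"),  # never fires here: no shortness_of_breath fact
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # base facts plus everything the engine derived
```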

So they are bringing this up as a good use-case for AI because it has been quite successful. The thing is that it is generally best implemented for specific domains to make it easier for experts to access information that they can properly assess. The "one tool for everything in the hands of everybody" is naturally going to be a poor path forward, but that's what modern LLMs are trying to be (at least, as far as investors are concerned).

[-] Buddahriffic@lemmy.world 3 points 7 hours ago

(Assuming you meant "you" instead of "I" for the 3rd word)

Yeah, it fits more with the older definition of AI from before NNs took the spotlight, when it meant more of a normal program that acted intelligently.

The learning part is being able to add new branches or leaf nodes to the tree: the program isn't learning on its own, but it improves based on the experiences of its users.

It could also be encoded as a series of probability multiplications instead of a tree, where it checks whichever condition currently has the highest probability, using the checks/questions that are cheapest to ask but shift the probabilities the most.
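Roughly like this toy sketch, which keeps a probability over candidate conditions and picks the next question by expected probability shift per unit cost; all the conditions, likelihoods, and costs are made-up numbers:

```python
# Toy question selection: maintain P(condition), and ask whichever
# question shifts the distribution the most per unit cost. All numbers
# are invented for illustration.

conditions = {"flu": 0.5, "migraine": 0.3, "indigestion": 0.2}

# For each question: P(answer is yes | condition), plus a cost to ask.
questions = {
    "fever?":  {"p_yes": {"flu": 0.9, "migraine": 0.1, "indigestion": 0.1},
                "cost": 1.0},
    "nausea?": {"p_yes": {"flu": 0.3, "migraine": 0.6, "indigestion": 0.8},
                "cost": 1.0},
    "blood test": {"p_yes": {"flu": 0.95, "migraine": 0.05, "indigestion": 0.1},
                   "cost": 10.0},
}

def posterior(prior, p_yes, answered_yes):
    """Bayes update of the condition probabilities after one answer."""
    post = {c: p * (p_yes[c] if answered_yes else 1 - p_yes[c])
            for c, p in prior.items()}
    total = sum(post.values())
    return {c: p / total for c, p in post.items()}

def expected_shift(prior, q):
    """Expected total movement of the distribution if we ask q."""
    p_ans_yes = sum(prior[c] * q["p_yes"][c] for c in prior)
    shift = 0.0
    for answered_yes, p_ans in ((True, p_ans_yes), (False, 1 - p_ans_yes)):
        post = posterior(prior, q["p_yes"], answered_yes)
        shift += p_ans * sum(abs(post[c] - prior[c]) for c in prior)
    return shift

best = max(questions,
           key=lambda name: expected_shift(conditions, questions[name])
                            / questions[name]["cost"])
print("ask next:", best)
```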

That, in turn, could be encoded as a NN, since both are just a series of matrix multiplications that a NN can approximate to arbitrary precision given enough parameters. NNs are proven to be able to approximate any continuous function on real inputs given enough neurons and connections, which means they can exactly represent any discrete function (which is what a decision tree is).
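For a concrete toy instance: a depth-1 decision tree written exactly as a tiny two-layer network with a hard step activation (the feature, threshold, and leaf values below are arbitrary; smooth activations would only approximate the step):

```python
import numpy as np

# Decision stump "x[0] > 2 ? 7.0 : 3.0" encoded as matrix multiply +
# threshold. With a hard step the representation is exact.

step = lambda z: (z > 0).astype(float)

W1 = np.array([[1.0, 0.0]])   # look at feature 0 only
b1 = np.array([-2.0])         # implements the threshold x[0] > 2

def tree_as_nn(x):
    h = step(W1 @ x + b1)            # h = 1 on the yes branch, 0 on no
    leaf_yes, leaf_no = 7.0, 3.0     # the two leaf values
    return (h * leaf_yes + (1 - h) * leaf_no)[0]

print(tree_as_nn(np.array([5.0, 0.0])))  # 7.0 (yes branch)
print(tree_as_nn(np.array([1.0, 0.0])))  # 3.0 (no branch)
```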

It's still an open question, but it's possible the equivalence goes both ways: a NN can represent a decision tree, and a decision tree can approximate any NN. So the actual divide between the two is blurrier than you might expect.

Which is also why I'll always be skeptical that NNs on their own can give rise to true artificial intelligence (though there's also a part of me that wonders if we can be represented by a complex enough decision tree or series of matrix multiplications).

[-] _g_be@lemmy.world 2 points 3 hours ago

Could be a great idea, if people could be trusted to correctly interpret things outside their scope of expertise. The parallel I'm thinking of is IT, where people will happily and repeatedly call a monitor "the computer". Imagine telling the AI your heart hurts when it's actually muscle spasms or indigestion.

The value of medical professionals is not just the raw knowledge but the practiced, objective assessment and deduction of symptoms, which I don't foresee a public-facing system being able to replicate.

[-] Buddahriffic@lemmy.world 1 points 19 minutes ago

Over time, the more common mistakes would be integrated into the tree. If some people feel indigestion as a headache, then the tree would carry a probability that "headache" is caused by "indigestion", along with questions that try to get the user to differentiate between the two.

And it would be a supplement to doctors rather than a replacement. Early questions could be handled by the users themselves, but at some point a nurse or doctor would take over and use it as a diagnosis helper.
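As a toy sketch of that feedback loop, with invented case data: tally (reported symptom, confirmed diagnosis) pairs from resolved cases, then read off P(cause | reported symptom):

```python
from collections import Counter, defaultdict

# Invented resolved cases: (what the user reported, what it turned out to be).
resolved_cases = [
    ("headache", "migraine"),
    ("headache", "migraine"),
    ("headache", "indigestion"),  # the "indigestion felt as a headache" confusion
]

counts = defaultdict(Counter)
for symptom, cause in resolved_cases:
    counts[symptom][cause] += 1

total = sum(counts["headache"].values())
for cause, n in counts["headache"].items():
    print(f"P({cause} | reported headache) = {n / total:.2f}")
```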
