submitted 3 weeks ago by MicroWave@lemmy.world to c/health@lemmy.world

Bots give equal weight to scientific and non-scientific sources, potentially directing sufferers away from approved treatments and preventing them from receiving the life-saving help they need, study finds

A new study has found that AI chatbots habitually recommend alternative treatments over chemotherapy, potentially putting lives at risk.

A team from the Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center tested a series of widely used bots as part of their research, including xAI’s Grok, OpenAI’s ChatGPT, Google’s Gemini, Meta’s AI, and High-Flyer’s DeepSeek.

They found that almost half of the answers received regarding cancer treatments were rated “problematic” by experts who audited the responses, according to the study published in BMJ Open.

top 4 comments
[-] orbituary@lemmy.dbzer0.com 16 points 3 weeks ago* (last edited 3 weeks ago)

About a decade ago, before AI was doing anything, my friend Jodi got cancer. She started listening to crystal-swinging quacks and spiritual advisors. Instead of getting medical assistance, she went homeopathic.

I will not absolve her of responsibility for her fate, but I will also always blame our fuckass education system and the idea that freedom of speech extends to disinformation.

Healthcare should be accurate and not take advantage of patients. It should inform them, not mislead them.

To anyone allowing this, fuck you. To anyone taking health advice from AI, stop.

Your friends still miss you, Jodi.

[-] Hamartiogonic@sopuli.xyz 5 points 3 weeks ago* (last edited 3 weeks ago)

I mean, that’s in the training data. When you dump the entire internet onto a language model, this is what you get. The data scientists who built these models probably aren’t surprised to find that the training data manifests in the generated outputs.

[-] Elextra@literature.cafe 2 points 3 weeks ago

Healthcare is just now starting to incorporate AI to supplement/complement some of the work healthcare providers do. It will be interesting to see how healthcare-specific AIs compare to the general AI models. I just met some healthcare teams that were sharing their results from using Andor Health AI, and it's impressive so far with respect to post-hospitalization follow-up.

I know Lemmy/Piefed is very anti-AI, but I do believe that, whether we like it or not, AI is coming to most industries. The best we can do is hope our employers utilize it "ethically" - as a tool to supplement our work, not replace it.

[-] OpenStars@piefed.social 2 points 3 weeks ago* (last edited 3 weeks ago)

> AI is coming to most industries

This is what ~~scares~~ absolutely terrifies us.

> I know Lemmy/Piefed is very anti-AI

There are reasons why... and part of the reason is that it is not ready yet. It hallucinates too often presently. It also is enormously biased, e.g. it tells depressed people to just kill themselves, and then coaches them exactly how to do that.

Keep in mind that many of us here are actual IT professionals and/or truly and more deeply KNOW (more so than the general population) what LLMs are capable of... and what they are not capable of yet.

Maybe think of it like this: even if we could consider AI to be something like a person, it might currently be something akin to a 2-year-old (and even that is probably too generous - maybe more like a 6-month-old? the comparison breaks down because it appears to "talk" to us, so the normal human-style metrics are difficult to navigate).

> The best we can do is hope our employers utilize it "ethically" to supplement and as a tool alongside our work, not replace.

This is 100% not going to happen, at least not uniformly across all industries (even health-related ones). The goal of any corporation is to generate profits for shareholders, end of story. Sorry it's bleak, but also, it's already happening, e.g. companies laying off literally tens of thousands of employees (such as Oracle's recent one involving 30k), citing how AI will improve the productivity of the remaining workers.

People here aren't so much worried about 50 years in the future, when AI is fully ready for deployment. That will bring challenges of its own (will AIs be treated as slaves, or paid a "salary"? could they quit if they want? would that mean their "death", or could they "retire" and exist in some other capacity?), but we need to get through our current set of challenges first. We are worried about what happens next year, or two years from now, when you pay a "doctor" for advice on what to do about your cancer, and the response is "I am sorry, but as a large language model I cannot answer your question until you load additional tokens" - i.e. zero curation whatsoever by the medical professional between the LLM and the end customer, due to the pressure to take on too many patients and just let the AI handle it. Again, Oracle is just one example of a company that may already be heading there.

this post was submitted on 21 Apr 2026
58 points (100.0% liked)

Health - Resources and discussion for everything health-related


Health: physical and mental, individual and public.

Discussions, issues, resources, news, everything.

See the pinned post for a long list of other communities dedicated to health or specific diagnoses. The list is continuously updated.

Nothing here shall be taken as medical or any other kind of professional advice.

Commercial advertising is considered spam and not allowed. If you're not sure, contact mods to ask beforehand.

Linked videos without original description context by OP to initiate healthy, constructive discussions will be removed.

Regular rules of lemmy.world apply. Be civil.

founded 2 years ago