Elon Musk's AI recommends Ivermectin and anal bleaching, because it's biased. I don't care if what I said was true.
But it is biased.
Imagine doctors using the same AI that convinced that poor, lonely guy to kill himself!
I already loved it when I had GPs who googled my symptoms. Now with added made-up nonsense!
I immediately left a veterinarian for using AI as part of their X-ray diagnosis process, which may even be somewhat acceptable since computer vision is relatively mature. Fuck if I’m lasting 5 mins with a human doctor who utters the letters “AI.”
Ya, an LLM is very different from a vision-based machine learning model trained on something specific like X-rays.
Now, if it's just an LLM looking at an X-ray image, that's another story, and it could've been that too.
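For anyone unclear on the difference: a single-purpose vision model is just a classifier fine-tuned to answer one narrow question. A minimal sketch in PyTorch, purely illustrative (the `chest_xrays/` folder and the two-class setup are hypothetical placeholders, not anyone's actual clinical pipeline):

```python
# Sketch: fine-tuning a single-purpose image classifier.
# Hypothetical example; "chest_xrays/" and its two class folders
# (normal/, pneumonia/) are placeholders, not a real clinical pipeline.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("chest_xrays/", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Pretrained backbone with a new two-class head: this model can only
# ever answer one narrow question, unlike a general-purpose LLM.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch, for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Note what it can't do: it only ever outputs one of two labels for an image. There's no text interface to confabulate through.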
It is a very curious rhetorical move from:
"If your doctor isn't using AI, they're incompetent and awful, and it should be considered malpractice"
to
"We shouldn't be forcing these Big Government Regulations on the itty bitty small bean doctors who just want to help people"
Techno-Libertarianism in a nutshell. It is never a serious analysis of best practices and procedures. Always some hollow appeal to legalism out of one side of the mouth and denouncement of bureaucracy out of the other. And all in pursuit of selling a new line of magic fucking beans to the rubes.
Counting the days until Dr. Oz is talking about LLMs like he talks about ginseng and acai berry juice.
Nah, I work in AI for medicine; we have lots of data showing that it actually does help.
My work specifically looks at images from scans (mostly MRI and X-ray) to diagnose conditions (mostly respiratory) before even senior doctors are able to reliably diagnose them. It's already out working in the world and has saved hundreds of lives.
I also have friends who work in AI diagnosis; they have similar success and save doctors a ton of time.
That’s an ML application, not a random text generator.
There's a difference between properly trained single-purpose models and LLMs.
AI shouldn’t replace doctors, but using it as a second set of eyes makes sense. The key is keeping a human responsible for the final call.
"Take two rocks and prompt me in the morning."
AI has helped me with medical issues far more than my lame Dr.
https://en.wikipedia.org/wiki/Darwin_Awards immediately came to mind.
The number of competent experts who are impressed by an LLM wielded in their own specialized field is as vanishingly small as legitimate and justifiable invocations of the term ‘AI’.
Those who have expressed the greatest enthusiasm for ‘AI’ are typically the farthest removed from actual, nuanced comprehension.
It’s a grift economy built on statistically lukewarm, vibe-lobotomised corpses.
Image recognition to help radiologists find tumors is probably fine, especially since you can usually run those models locally.
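And "run locally" really can mean fully local. A rough sketch (the file names are hypothetical) where inference happens entirely on-device, so no patient data ever leaves the machine:

```python
# Sketch of fully local inference: nothing touches a network.
# "tumor_classifier.pt" and "scan.png" are hypothetical placeholders.
import torch
from torchvision import transforms
from PIL import Image

# Load a previously trained model from local disk.
model = torch.load("tumor_classifier.pt", weights_only=False)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

image = preprocess(Image.open("scan.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)

print(f"P(tumor) = {probs[0, 1].item():.3f}")  # assumes class 1 = tumor
```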
These morons think ChatGPT is “conscious” and “was trained on humanity’s collective knowledge”. THAT is the problem with ~~AI Derangement Syndrome~~
EDIT: aw fuck let’s not use that acronym
That edit is an absolute 10/10, I burst out laughing
There are a bunch of studies showing that, despite what people say and think, they inevitably start to offload decision-making to AI inappropriately, and it eventually makes them worse. Harvard did a study specifically on radiologists, interestingly enough.
The "only use it as an aid" seems to be a myth.
To me it seems very similar to cocaine.
That’s deep learning, and it’s a well-known and well-understood statistical tool.
AI (statistical predictive models) works best when it's designed for a specific purpose and when the model is too challenging to derive by hand. Detecting tumors is a specific purpose, and doing so manually is challenging enough that it requires specific training. It gets a pass from me.
Predicting protein structures/drug effects: specific purpose, check. Doing it manually, yep, very challenging. Good use of AI.
LLM chatbot: purpose is unclear. Making a non-AI-based chatbot is easy and has been done before. Verdict: useless technology.
Or to put it another way: use the right tool for the job; don't use the shitty multi-tool that does every job passably at best. The only exception to this rule of thumb is the humble spork, but that's a piece of engineering genius that couldn't be replicated by AI pushers.
You know that technology that suggested a deadly mix of drugs to a teen? The same one that routinely suggests people should kill themselves? Your doctor should ignore their years of medical training and see what spicy autocorrect thinks your treatment should be.
Literal blatant HIPAA violation: giving personal medical info to AI companies in exchange for useless and dangerous advice.
"Of course it's lupus! You are absolutely correct!"
I would love to see an LLM doctor trained only on the TV show House.