Four Eyes Principle (discuss.tchncs.de)
submitted 5 days ago by [email protected] to c/[email protected]

[-] [email protected] 43 points 5 days ago

They don't use the generative models for this. The AIs that do this kind of work are trained on carefully curated data and have a very narrow scope in which they excel.

[-] [email protected] 15 points 5 days ago

That brings up a significant problem: wildly different things get called AI. My company's customers are using AI for biochem and pharma research, protein folding, and other science stuff.

[-] [email protected] 3 points 5 days ago

My company cut funding for traditional projects and has prioritized funding for AI projects. So now anything that involves any form of automation is "AI".

[-] [email protected] 2 points 5 days ago

I do have a tech background in addition to being a medical student, and it really drives me bonkers that we're calling these overgrown algorithms "AI". The generative AI models, I suppose, come a little closer to earning the label, since they're black-box programs that develop themselves to a certain extent, but all of the reputable "AI" programs used in science and medicine are very carefully curated algorithms with specific rules and parameters that they follow.

[-] [email protected] 11 points 5 days ago

Yeah, those models are referred to as "discriminative AI". Basically, if you heard about "AI" from around 2018 until 2022, that's what was meant.

[-] [email protected] 2 points 5 days ago

The discriminative AIs are just really complex algorithms and, to my understanding, are not complete black boxes. As someone who receives care for a lot of medical problems, and who will be a physician in about 10 months, I refuse to trust any black-box programming with my health or anyone else's.

Right now, the only legitimate use generative AI has in medicine is as a note-taker to ease the burden of documentation on providers. Their work is easily checked and corrected, and if your note-taking robot develops weird biases, you can delete it and start over. I don't trust non-human things to actually make decisions.

[-] [email protected] 3 points 4 days ago

They are black boxes, and can even use the same NN architectures as the generative models (variations of transformers). They're just not trained to be general-purpose all-in-one solutions; they have much more well-defined and constrained objectives, so it's easier to evaluate how they will perform in the real world (unforeseen deficiencies and unexpected failure modes are still a problem, though).
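
A minimal sketch of that distinction, assuming PyTorch (all names and sizes below are illustrative, not taken from any real medical system): the same transformer-style backbone can serve either a narrow discriminative objective (predict one label) or a generative one (predict the next token); only the head and the training target differ.

```python
# Illustrative sketch only: same backbone, two different heads.
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, NUM_CLASSES = 1000, 64, 2  # hypothetical sizes

# Shared backbone: token embedding + one transformer encoder layer.
backbone = nn.Sequential(
    nn.Embedding(VOCAB_SIZE, EMBED_DIM),
    nn.TransformerEncoderLayer(d_model=EMBED_DIM, nhead=4, batch_first=True),
)

# Discriminative head: pooled sequence representation -> a small, fixed
# set of class scores (e.g. "positive finding" vs. "negative finding").
clf_head = nn.Linear(EMBED_DIM, NUM_CLASSES)

# Generative head: every position -> a distribution over the whole
# vocabulary, i.e. next-token prediction. Same backbone, far broader task.
lm_head = nn.Linear(EMBED_DIM, VOCAB_SIZE)

tokens = torch.randint(0, VOCAB_SIZE, (1, 16))   # one dummy input sequence
hidden = backbone(tokens)                        # shape (1, 16, EMBED_DIM)

class_logits = clf_head(hidden.mean(dim=1))      # shape (1, NUM_CLASSES)
token_logits = lm_head(hidden)                   # shape (1, 16, VOCAB_SIZE)
print(class_logits.shape, token_logits.shape)
```

The point is that the discriminative head's output space is tiny and fixed, which is what makes its real-world performance comparatively tractable to evaluate, while the generative head predicts over the entire vocabulary at every position.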
