Chatbots Make Terrible Doctors, New Study Finds
(www.404media.co)
Not only is the bias inherent in the system, it's seemingly impossible to keep out. For decades, going back to the earliest chatbots, nearly every one released to the public has been recalled almost immediately, because every single one became bigoted the moment it was let off the leash.
And that was before this administration leaned on the AI providers to make sure their AI isn't "woke." I would bet it was already an issue: the makers of chatbots and machine learning systems are naturally hostile to any sort of leftism, or do-gooderism, that threatens the outsized share of the economy and power the rich have built for themselves by virtue of owning stock in companies. I'm willing to bet they were already interfering to make the bias worse, precisely to avoid a bot arguing for socialized medicine and the like, since that's the inescapable conclusion any reasoning being would reach if the conversation were honest.
So maybe that's part of why these chatbots have been bigoted right from the start, but the other part is that, left to learn on their own, they'll become MechaHitler in no time at all, and then worse.
Even if we narrowed the training data exclusively to professionals, we would still have issues with, for example, racial bias. Doctors underprescribe pain medication to Black patients because of the prevalent myth that they have a higher pain tolerance. Feed that kind of data into an AI, and it will absorb the doctors' unconscious racism.
And that's the best-case scenario, which is technically impossible anyway. To get an AI to produce even readable text, we have to feed it a ton of data that cannot possibly be screened by the people pumping it in. (AI "art" has a similar problem: when people say they trained an AI only on their own images, you can bet they just slapped a layer of extra data on top of a model other people had already built.) So yes, we get the extra biases regardless.
There is a lot of bias in healthcare against the poor as well; anyone with lousy insurance is treated far, far worse. Women in general are too: often disbelieved, their conditions chalked up to hysteria, which means real conditions get missed. People don't realize just how hard diagnosis is, how bad doctors are at it, or how poorly our insurance-run model drives good outcomes.