Chatbots Make Terrible Doctors, New Study Finds
(www.404media.co)
Link to the actual study: https://www.nature.com/articles/s41591-025-04074-y
The finding was more that users were unable to use the LLMs effectively (even though the LLMs were competent when given the full information):
Part of what a doctor is able to do is recognize a patient's blind spots and critically analyze the situation. An LLM, on the other hand, responds based on the information it is given, and does not do well when users provide partial or insufficient information, or when they mislead it with incorrect information. For example, if a patient speculates about potential causes, a doctor knows to dismiss the wrong guesses, whereas an LLM will constrain its responses around those bad suggestions.
Yes, LLMs are critically dependent on your input; if you give too little information, they will enthusiastically respond with what can be incorrect information.
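To make that input dependence concrete, here is a minimal sketch using the OpenAI Python SDK (the model name, system prompt, and symptom strings are all hypothetical, not taken from the study). The model only sees what is placed in `messages`, so details the user omits or gets wrong directly shape the answer:

```python
# Minimal sketch: assumes the OpenAI Python SDK (openai>=1.0) and an
# OPENAI_API_KEY in the environment. Prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

def ask(symptoms: str) -> str:
    """Send a symptom description to a chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat model works; chosen here for illustration
        messages=[
            {"role": "system", "content": "You are a medical triage assistant."},
            {"role": "user", "content": symptoms},
        ],
    )
    return response.choices[0].message.content

# Partial input: the model can only reason over what it is told, so it will
# happily propose causes consistent with this fragment alone.
print(ask("I have a headache."))

# Fuller input: the same model, given more of the picture, may answer very
# differently. Note it never asks for the missing details on its own.
print(ask(
    "I have a headache, a stiff neck, a fever of 39.5 C, and light "
    "sensitivity that started this morning."
))
```

The two calls hit the same model with the same system prompt; only the user-supplied detail changes, which is exactly the gap a doctor's questioning would close.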
Thank you for showing the other side of the coin instead of just blatantly disregarding its usefulness. (One always needs to be cautious, though.)
Don't get me wrong, there are real and urgent moral reasons to reject the adoption of LLMs, but I think we should all agree that the responses here show a lack of critical thinking and mostly just engagement with a headline rather than actually reading the article (a kind of literacy issue). I know this is a common problem on the internet, and I don't really know how to change it, but maybe surfacing what people are skipping over will make it more likely that they actually read and engage with the content past the headline?