Hasn’t it been demonstrated that AI is better than doctors at medical diagnostics, and we don’t use it only because hospitals would have to take the blame if the AI fucks up, whereas they can just fire a doctor who fucks up?
I believe a good doctor, properly focused, will outperform an AI. AI is also still prone to hallucinations, which is extremely bad in medicine. Where it wins is against a tired, overworked doctor with too much on their plate.
Where it is useful is as a supplement. An AI can put a lot of seemingly innocuous information together to spot more unusual problems. Rarer conditions can be missed, particularly if they share symptoms with more common problems. An AI that can flag possibilities for the doctor to investigate would be extremely useful.
An AI diagnostic system is a tool for doctors to use, not a replacement.
Studies have also shown that doctors using AI don’t do better than doctors alone, while the AI on its own does. Although that result was attributed to the doctors not knowing how to use ChatGPT.
Do you have a link to that study? I’d be interested to see what the false positive/negative rates were. Those are the big dangers of LLMs being used, and why a trained doctor would still be needed.
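For anyone unfamiliar with the terms: false positive/negative rates are just ratios out of the confusion matrix. A quick Python sketch with entirely made-up numbers (not from any real study) showing how you'd compute them from a diagnostic model's results:

```python
# Hypothetical illustration only -- the patient counts are invented,
# not taken from the linked study or any real screening data.

def error_rates(tp, fp, fn, tn):
    """Return (false_positive_rate, false_negative_rate) from confusion-matrix counts."""
    fpr = fp / (fp + tn)  # healthy patients wrongly flagged as sick
    fnr = fn / (fn + tp)  # sick patients the model missed
    return fpr, fnr

# Example: a screening model evaluated on 1000 patients (invented numbers)
fpr, fnr = error_rates(tp=45, fp=95, fn=5, tn=855)
print(f"False positive rate: {fpr:.1%}")  # unnecessary follow-up tests
print(f"False negative rate: {fnr:.1%}")  # missed diagnoses
```

The asymmetry is the point: in medicine a false negative (missed disease) is usually far worse than a false positive (an unnecessary follow-up), which is why headline "accuracy" numbers alone don't tell you whether a model is safe to use.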
https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2825395
It is better at simple pattern recognition, but much worse at complex diagnoses.
It is useful as a help to doctors but won’t replace them.
As an example, it can give you a good prediction of who likely has lung cancer out of thousands of CT images. It will completely fuck up prognoses and treatment recommendations, though.
has it? source?
you’re not gonna get one.