Diagnostic Scope, AI and Diagnosis: Automation Bias, Spectrum Bias and Base Rate Fallacy
Introduction
In a recent paper published in the journal Diagnosis (1), Weissman and colleagues define the concept of "diagnostic scope" as the set of diagnoses that could theoretically be found in a given clinical environment, whether a geographical area or a community.
They stress the importance of this concept for artificial intelligence when it is used as an aid to clinical diagnosis. They also define related concepts such as "considered scope", the set of diagnoses raised by clinicians and patients during the diagnostic encounter, and "observed scope", the diagnoses actually recorded at a given moment, whether correct or not. By naming these concepts, the authors aim to underline how important "scope" is when training an AI tool as a diagnostic aid, and, at the same time, to highlight the biases in the clinician's reasoning process that can result from "incorrect" training of the AI tool for a particular environment. They want to show the direct interaction between the tool and the biases that may arise during clinical reasoning.
AI, Clinical Reasoning and Biases
In the article, the authors enumerate situations that can occur when the training of an AI tool does not take into account aspects such as prevalence, geographical distribution or population particularities, and that can lead to biased clinical reasoning. These include:
Automation Bias: this bias occurs when the clinician uncritically accepts the recommendation of an AI tool that incorporates uncommon diagnoses which have nonetheless been habitually included in the scope, based on availability (false positive diagnoses).
Spectrum Bias: when the AI tool has been trained in a very particular setting or population, its accuracy can be compromised if it is applied in other contexts (external validity).
Base Rate Fallacy: a clinician falls into this bias when he or she ignores the prevalence of a disease in the population while estimating the probability of that disease in a patient, sometimes driven by the representativeness or availability heuristics. If an AI tool is trained on a database in which these diagnoses are not represented according to their real prevalence, it will give a distorted picture of the probability of that diagnosis.
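A minimal numerical sketch of the base rate fallacy, using Bayes' theorem with hypothetical test characteristics (the 90% sensitivity and 95% specificity figures below are illustrative, not taken from the article): the same "positive" result means very different things depending on the prevalence of the disease in the population.

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' theorem:
    P(disease | positive) = sens*prev / (sens*prev + (1-spec)*(1-prev))
    """
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Hypothetical test: 90% sensitivity, 95% specificity.
# Same test applied in two populations with different base rates:
low = ppv(0.90, 0.95, 0.01)   # rare disease: 1% prevalence
high = ppv(0.90, 0.95, 0.20)  # common disease: 20% prevalence
print(f"PPV at 1% prevalence:  {low:.0%}")   # about 15%
print(f"PPV at 20% prevalence: {high:.0%}")  # about 82%
```

Ignoring the base rate means treating both positive results as equally convincing, when in the low-prevalence population most positives are in fact false positives.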
Conclusion
AI is a powerful tool with enormous potential to help providers during the diagnostic process. AI has to be "trained" on real data obtained from different sources, but, much like the human mind, its output depends heavily on the quality of that data: it must be representative of the population, properly weighted, and as objective as possible.
Bibliography
- Weissman GE, Zwaan L, Bell SK. Diagnostic scope: the AI can't see what the mind doesn't know. Diagnosis (Berlin). 2024 Dec 4;12(2):189-196. LINK HERE
- Jabbour S, et al. Measuring the impact of AI in the diagnosis of hospitalized patients: a randomized clinical vignette survey study. JAMA. 2023;330:2275-84.
- Bar-Hillel M. The base-rate fallacy in probability judgment. Acta Psychologica. 1980;44:211-33.
Author: Lorenzo Alonso Carrión
FORO OSLER