Artificial Intelligence and Its Role in Improving Diagnostics: Commentary on an Article

Introduction

The new computational era of Artificial Intelligence (AI), arriving rapidly in the environment of clinical practice, will within a short period of time become a real help to doctors, nurses and patients. This is not the first time we have tried to improve clinical performance with the aid of an electronic tool, whether a web-based symptom checker or a differential-diagnosis generator, using the keyboard to enter patient data and obtain a list of possible diseases, sometimes ranked in terms of probability.

The principal difference between an AI and these earlier tools is the amount of data the AI can handle and the possibility of asking questions in a more “natural” way.

AI, Clinical Reasoning and Diagnostic Improvement

The incorporation of AI into real clinical practice is starting in specialties such as Radiology or the Laboratory, because the analysis of data and images is one of the strengths of this new tool. There are only a few papers studying the influence of AI on the ability to improve clinical reasoning and diagnostic accuracy. We present here a commentary on a paper that tries to measure the “diagnostic accuracy” of an AI in comparison with the diagnostic approach of real doctors.

Title: Diagnostic Accuracy of Differential-Diagnosis Lists Generated by Generative Pretrained Transformer 3 Chatbot for Clinical Vignettes with Common Chief Complaints: A Pilot Study (1)

Methods:

Two physicians developed 30 clinical vignettes based on ten significant symptoms and, in turn, established the correct diagnosis for each vignette. Once the information for each vignette was provided to ChatGPT, the programme produced two lists of differential diagnoses, one with ten possibilities and another with five, and the aim was to determine whether the correct diagnosis appeared at the top of the list. A differential-diagnosis list with five possibilities for each vignette was also created by two doctors who were not involved in creating the vignettes.
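For readers curious about the mechanics, the sketch below illustrates, purely as an assumption and not as the authors' actual protocol or code, how a vignette could be submitted to a chat-completion API to obtain a ranked differential-diagnosis list; the model name, prompt wording and parsing are invented for the example.

```python
# Illustrative sketch only: submitting a clinical vignette to a chat-completion
# API and asking for a ranked differential-diagnosis list. The model name,
# prompt wording and parsing are assumptions, not the study's protocol.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

def differential_list(vignette: str, n: int = 10) -> list[str]:
    """Request an n-item differential, ordered from most to least likely."""
    prompt = (
        f"Read the clinical vignette below and list the {n} most likely "
        f"diagnoses, one per line, from most to least likely.\n\n{vignette}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the GPT-3-based chatbot studied
        messages=[{"role": "user", "content": prompt}],
    )
    lines = response.choices[0].message.content.splitlines()
    # Simplistic parsing: strip list numbering and keep non-empty lines.
    return [line.strip(" -.0123456789") for line in lines if line.strip()][:n]
```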

Outcome

The summary results show that ChatGPT achieved very high diagnostic accuracy with the ten-possibility list (93%), but this fell to 83% with the five-possibility list, whereas the physicians, who also listed five possibilities, reached a diagnostic accuracy of 98%. A notable difference was that physicians placed the correct diagnosis at the top of the list in 93% of cases, while ChatGPT did so in only 53% of listings.
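In metric terms, these figures correspond to top-k accuracy: the fraction of vignettes whose correct diagnosis appears among the first k entries of a list (k = 10 or 5) or in first position (k = 1). A minimal sketch of that calculation, on invented data, might look like this:

```python
# Top-k accuracy: a vignette counts as a hit when the correct diagnosis
# appears within the first k entries of the ranked differential list.
def top_k_accuracy(cases: list[tuple[str, list[str]]], k: int) -> float:
    """cases holds (correct_diagnosis, ranked_differential_list) pairs."""
    hits = sum(1 for correct, ranked in cases if correct in ranked[:k])
    return hits / len(cases)

# Toy data, invented for illustration (not the study's vignettes):
cases = [
    ("acute appendicitis", ["gastroenteritis", "acute appendicitis", "ovarian torsion"]),
    ("pulmonary embolism", ["pulmonary embolism", "pneumonia", "pericarditis"]),
]
print(top_k_accuracy(cases, k=5))  # correct diagnosis within the first five
print(top_k_accuracy(cases, k=1))  # correct diagnosis at the top of the list
```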

For certain vignettes based on symptoms such as fever or joint pain, ChatGPT's five-possibility list achieved higher diagnostic accuracy than that of the doctors.


Personal assessment and comments

The use of AI in the diagnostic process is still in its infancy in terms of practical application, but it will revolutionize professional performance and patient involvement. These techniques are the next step beyond the earlier electronic tools for diagnostic support, although their philosophy and computational armamentarium are quite different. Our group carried out a study using an electronic diagnostic tool (4) and obtained, under the conditions of that study, a diagnostic accuracy of 60%, similar for the machine and for the Internal Medicine doctors, but with the peculiarity that their behaviour differed: the tool provided more diagnostic possibilities, while the doctors more frequently recognised processes that presented atypically. Another conclusion we drew is that practitioners are unlikely to change their initial diagnostic assessment even if the tool suggests other possibilities. In the study discussed in this post, doctors also tended not to change their initial diagnostic impression, something that has been reported in other studies as well.


Practical Applications

AI applications in relation to the diagnostic process will be fundamental, owing to the speed of response, the more natural way of asking questions and, above all, the fact that they can suggest diagnoses the professional has not considered, whether because of lack of knowledge, overload or tiredness, as occurs in emergency services. One condition for their implementation is that professionals have easy institutional access and give due weight to the diagnostic suggestions presented by the application. All this will lead to better overall performance and will certainly help to save lives.

Bibliography

  1. Hirosawa T, et al. Diagnostic Accuracy of Differential-Diagnosis Lists Generated by Generative Pretrained Transformer 3 Chatbot for Clinical Vignettes with Common Chief Complaints: A Pilot Study. Int J Environ Res Public Health. 2023.
  2. Alowais SA, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023 Sep 22;23(1):689. doi: 10.1186/s12909-023-04698-z.
  3. Liu J, et al. Utility of ChatGPT in Clinical Practice.
  4. Alonso-Carrión L, et al. Precisión del diagnóstico en medicina interna e influencia de un sistema informático en el razonamiento clínico. Medicina Clínica. 2015.

Author: Lorenzo Alonso Carrión

FORO OSLER
