As artificial intelligence advances, its real-world capabilities continue to reach new heights, in some cases rivaling human expertise. In the field of radiology, where a correct diagnosis is crucial to ensure proper patient care, large language models such as ChatGPT could improve accuracy, or at least offer a useful second opinion.
To test this potential, a team led by graduate student Yasuhito Mitsuyama and Associate Professor Daiju Ueda at Osaka Metropolitan University’s Graduate School of Medicine compared the diagnostic performance of GPT-4-based ChatGPT with that of radiologists on 150 preoperative brain tumor MRI reports. Based on these daily clinical notes written in Japanese, ChatGPT, two board-certified neuroradiologists, and three general radiologists were each asked to provide differential diagnoses and a final diagnosis.
Their accuracy was then calculated against the actual diagnosis of the tumor after its removal. Final diagnostic accuracy was 73% for ChatGPT, compared with an average of 72% for the neuroradiologists and 68% for the general radiologists. Additionally, ChatGPT’s accuracy varied depending on who wrote the clinical report: 80% with reports written by neuroradiologists, compared to 60% with reports written by general radiologists.
“These results suggest that ChatGPT can be useful for preoperative MRI diagnosis of brain tumors. In the future, we intend to study large language models in other diagnostic imaging fields with the aims of reducing the burden on physicians, improving diagnostic accuracy, and using AI to support educational environments.”
Yasuhito Mitsuyama, graduate student
The findings were published in European Radiology.
Source:
Osaka Metropolitan University
Journal reference:
Mitsuyama, Y., et al. (2024). Comparative analysis of GPT-4-based ChatGPT’s diagnostic performance with radiologists using real-world radiology reports of brain tumors. European Radiology. doi.org/10.1007/s00330-024-11032-8.