Study Examines Utility, Accuracy of ChatGPT in Offering Breast Cancer Screening Recommendations


A new study published by Haver et al in Radiology suggests that the artificial intelligence (AI) chatbot ChatGPT provides correct breast cancer screening advice the vast majority of the time; however, the information it supplies is sometimes inaccurate or even fictitious. As more consumers turn to ChatGPT for health advice, researchers are eager to determine whether the information the AI chatbot provides is reliable and accurate.

Study Methods and Results

In the new study, the researchers created a set of 25 questions designed to elicit advice on breast cancer screening and submitted each question to ChatGPT three times, since the chatbot is known to vary its responses to repeated questions. Three radiologists fellowship-trained in mammography then evaluated the responses and found them appropriate for 88% (n = 22/25) of the questions. For the remaining three questions, the chatbot provided one answer based on outdated information and two answers that varied significantly when the same question was asked multiple times.
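The replication step in this design (asking each question three times because the chatbot varies its output across runs) can be illustrated with a short script. The sketch below is not the authors' procedure: the study queried ChatGPT itself, whereas this example assumes the OpenAI Python client, the gpt-3.5-turbo model, and a hypothetical, shortened question list, and simply collects three responses per question for later expert review.

    # Minimal sketch, not the study's actual workflow: submit each screening
    # question three times and store the responses for later radiologist review.
    # Assumes the `openai` Python package (v1+) and an API key in the
    # OPENAI_API_KEY environment variable; the question list is hypothetical.
    from openai import OpenAI

    client = OpenAI()

    questions = [
        "At what age should I start getting screening mammograms?",
        "How often should I get a mammogram?",
        "Should I delay my mammogram after a COVID-19 vaccination?",
        # ...the study used a set of 25 questions
    ]

    REPEATS = 3  # each question was submitted three times

    responses = {}
    for question in questions:
        responses[question] = []
        for _ in range(REPEATS):
            reply = client.chat.completions.create(
                model="gpt-3.5-turbo",  # assumption; the study used the ChatGPT interface
                messages=[{"role": "user", "content": question}],
            )
            responses[question].append(reply.choices[0].message.content)

    # Reviewers would then rate each set of three responses for
    # appropriateness, consistency, and currency of the information.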

“We found [that] ChatGPT answered questions correctly about 88% of the time—which is pretty amazing,” highlighted corresponding study author Paul Yi, MD, Assistant Professor of Diagnostic Radiology and Nuclear Medicine at the University of Maryland School of Medicine and Director of the University of Maryland Medical Intelligent Imaging Center. “It also has the added benefit of summarizing information into an easily digestible form for consumers to understand,” he added.

The researchers noted that ChatGPT correctly answered questions about the symptoms of breast cancer, who may be at risk, and the cost, age, and frequency recommendations for mammograms. However, they warned that a downside of the technology is that the chatbot's responses are not as comprehensive as the answers individuals might normally find with a Google search.

“ChatGPT provided only one set of recommendations on breast cancer screenings, issued by the American Cancer Society, but did not mention differing recommendations put out by the Centers for Disease Control and Prevention (CDC) or the U.S. Preventive Services Task Force (USPSTF),” stressed lead study author Hana Haver, MD, a radiology resident at the University of Maryland Medical Center.

In one response deemed inappropriate by the researchers, ChatGPT gave outdated advice about planning a mammogram around COVID-19 vaccination. The recommendation to delay a mammogram for 4 to 6 weeks after receiving a COVID-19 vaccine was changed in February 2022; the CDC currently endorses the USPSTF guidelines, which advise against delaying a mammogram. In addition, ChatGPT gave inconsistent responses to questions about an individual's personal risk of developing breast cancer and about where to receive a mammogram.

Conclusions

“We’ve seen, in our experience, that ChatGPT sometimes makes up fake journal articles or health consortiums to support its claims,” emphasized Dr. Yi. “Consumers should be aware that these are new, unproven technologies and should still rely on their doctors, rather than ChatGPT, for advice,” he explained.

The researchers are currently analyzing how ChatGPT fares with lung cancer screening recommendations, as well as ways to improve its responses so that they are more accurate, complete, and understandable, especially for those without a high level of education.

“With the rapid evolution of ChatGPT and other large language models, we have a responsibility as a medical community to evaluate these technologies and protect our patients from potential harm that may come from incorrect screening recommendations or outdated preventive health strategies,” underscored Mark T. Gladwin, MD, the John Z. and Akiko K. Bowers Distinguished Professor of Medicine, Vice President of Medical Affairs, and Dean of the University of Maryland School of Medicine.

The researchers concluded that although ChatGPT showed promise, given its potential for inappropriate and inconsistent recommendations, the technology may require further study and physician oversight when offering advice on breast cancer screening.

Disclosure: For full disclosures of the study authors, visit pubs.rsna.org.

 

The content in this post has not been reviewed by the American Society of Clinical Oncology, Inc. (ASCO®) and does not necessarily reflect the ideas and opinions of ASCO®.