A survey study has shown cautious patient support for the use of artificial intelligence (AI) as a second reader in screening mammograms, according to results published in Radiology: Imaging Cancer.
The study authors sought to determine patients' attitudes toward AI use in screening mammography. Applying AI to mammography screening has been proposed as one of the most efficient ways of using AI to improve radiologic workflows. However, concerns remain around data privacy, security, bias, and ethical issues.
“Patient perspectives are crucial because successful AI implementation in medical imaging depends on trust and acceptance from those we aim to serve,” said study author Basak E. Dogan, MD, Clinical Professor of Radiology and Director of Breast Imaging Research at The University of Texas Southwestern Medical Center in Dallas. “If patients are hesitant or skeptical about AI’s role in their care, this could impact screening adherence and, consequently, overall health-care outcomes.”
Study Methods and Results
The study authors administered a 29-question survey to all patients undergoing screening mammography at The University of Texas Southwestern Medical Center between February and August 2023. A total of 518 people completed the survey. The majority were between the ages of 40 and 69, were college graduates or held advanced degrees, and identified as non-Hispanic White. According to the survey results, 76.5% of participants reported minimal to no knowledge of AI.
Only 4.4% of respondents accepted stand-alone AI interpretation of screening mammograms, whereas 71.0% wanted AI to serve as a second reader. If AI reported an abnormal screening result, 88.9% wanted a radiologist to then review the scan, compared with 51.3% who accepted AI review of radiologist recalls (P < .001).
When the two reviews disagreed, respondents were slightly more willing to undergo diagnostic evaluation after a radiologist recall than after an AI recall (94.2% vs 92.6%), although the difference was not statistically significant (P = .20).
AI acceptance was higher among participants with higher levels of education (odds ratio [OR] = 2.05; 95% CI = 1.31–3.20; P = .002). Concern about AI bias was greater among Hispanic respondents (OR = 3.32; 95% CI = 1.15–9.61; P = .005) and non-Hispanic Black respondents (OR = 4.31; 95% CI = 1.50–12.39; P = .005) than among non-Hispanic White participants.
“Our study shows that trust in AI is highly individualized, influenced by factors such as prior medical experiences, education, and racial background,” Dr. Dogan said. “Incorporating patient perspectives into AI implementation strategies ensures that these technologies improve and not hinder patient care, fostering trust and adherence to imaging reports and recommendations.”
Disclosure: For full disclosures of the study authors, visit pubs.rsna.org.