Oncologists’ Attitudes Toward the Ethical Implications of AI in Cancer Care



Researchers surveyed oncologists about how artificial intelligence (AI) may be responsibly integrated into aspects of cancer care and how patients can be protected from the hidden biases of AI, according to a recent study published by Hantel et al in JAMA Network Open.

Background

As AI makes its way into cancer care and into conversations between physicians and patients, oncologists have begun to grapple with the ethics of its use in medical decision-making.

AI is currently used in cancer care as a diagnostic tool for detecting tumor cells on pathology slides and identifying tumors on x-rays and other radiology images. Newer AI models under development can assess a patient’s prognosis and may soon be able to offer treatment recommendations. This capability has raised concerns over legal responsibility in cases where an AI-recommended treatment results in harm to a patient.

“AI is not a professionally licensed medical practitioner, yet it could someday be making treatment decisions for patients. Is AI going to be its own practitioner, will it be subject to licensing, and who are the humans who could be held responsible for its recommendation? These are the kind of medicolegal issues that need to be resolved before the technology is implemented,” stressed co–lead study author Andrew Hantel, MD, a faculty member in the Division of Leukemia and Population Sciences at Dana-Farber Cancer Institute. “AI has the potential to produce major advances in cancer research and treatment, but there hasn’t been a lot of education for stakeholders—the physicians and others who will use this technology—about what its adoption will mean for their practice. It’s critical that we assess now, in the early stages of AI’s application to clinical care, how it will impact that care and what we need to do to make sure it’s deployed responsibly. Oncologists need to be part of that conversation. This study seeks to begin building a bridge between the development of AI and the expectations and ethical obligations of its end-users,” he added.

Survey Results

In the recent survey, the researchers asked 204 oncologists across the United States to answer questions regarding their views on the ethical implications of AI in cancer care.

Among the respondents, 85% stated that oncologists should be able to explain how AI models work; however, only 23% thought patients needed the same level of understanding when considering a treatment option. More than 81% answered that patients should give their consent before AI tools are incorporated into treatment decision-making.

When asked what they would do if an AI system selected a treatment regimen that differed from the one they planned to recommend, the most common answer, given by 37% of respondents, was that they would present both options to the patient and let the patient make the final decision.

Additionally, 91% of the respondents indicated that AI developers should bear responsibility for medical or legal problems arising from the use of AI. This response was much higher than the 47% who thought the responsibility should be shared with physicians and the 43% who thought it should be shared with hospitals.

Although 76% of respondents noted that oncologists should protect patients from biased AI tools, such as those reflecting unequal representation in medical databases, only 28% were confident they could identify AI models containing such bias.

Conclusions

“The findings provide a first look at where oncologists are in thinking about the ethical implications of AI in cancer care. [W]hile nearly all oncologists felt AI developers should bear some responsibility for treatment decisions generated by AI, only half felt that responsibility also rested with oncologists or hospitals,” underscored Dr. Hantel. “Our study gives a sense of where oncologists currently land on this and other ethical issues related to AI and, we hope, serves as a springboard for further consideration of them in the future,” he concluded.

Disclosure: The research in this study was supported by the National Cancer Institute of the National Institutes of Health, the Dana-Farber McGraw/Patterson Research Fund for Population Sciences, and a Mark Foundation Emerging Leader Award. For full disclosures of the study authors, visit jamanetwork.com.

The content in this post has not been reviewed by the American Society of Clinical Oncology, Inc. (ASCO®) and does not necessarily reflect the ideas and opinions of ASCO®.