Understanding the Legal and Ethical Challenges AI Poses in Oncology


The field of oncology is experiencing a revolution driven by artificial intelligence (AI). AI tools are already being used in medical imaging analysis, treatment planning, and even patient counseling. These advancements hold immense promise for earlier cancer detection, more personalized treatment strategies, and improved patient support.

However, this progress also presents significant legal and ethical uncertainties. This column reviews recent applications of AI in oncology and explores the complex legal and ethical questions that arise from its integration into patient care. It examines liability for AI-related medical errors, ethical considerations in AI-based patient counseling, and the technology’s potential impact on the role of oncologists.

Using AI in Diagnosis and Treatment

Artificial intelligence tools are being used to enhance the review of medical imaging data, including magnetic resonance imaging scans, computed tomography scans, and mammograms. These algorithms excel at identifying subtle patterns that human eyes might overlook, which could translate to faster and more accurate detection of cancer at its earliest stages. The technology is also being used in treatment delivery and decision-making. In radiation therapy, AI has been employed to improve the delineation of tumor margins and to target radiotherapy more precisely.1 It has also been used to help select and design immunotherapy regimens.

The use of artificial intelligence in diagnosis and treatment raises legal questions when something goes wrong. When human physicians—such as oncologists or radiologists—make an incorrect diagnosis or err in implementing a treatment plan, their legal liability depends on whether their decision-making fell below the legal standard of care. That standard is usually defined as what a reasonable physician would recognize as acceptable practice. So long as a physician’s conduct satisfies the standard of care, the physician will not be found to have engaged in negligent medical practice, even if a poor outcome results. Put another way, physicians are not held strictly liable—liable regardless of fault—for incorrect diagnoses or treatments. They are liable only when the poor outcome resulted from substandard medical practice.

Determining Legal Standards

Because AI tools are not physicians, it remains unclear what legal standard applies or should apply to diagnostic errors resulting from the use of this technology in oncology decision-making. Some have argued that a strict liability standard should apply. Under such a standard, those manufacturing or selling AI products could be held liable for defects in those products without the injured patient needing to prove fault, so long as the product fell below a reasonable consumer’s expectation of safety.2

Others have suggested that AI products could be evaluated under other product liability standards, such as the “reasonable alternative design” test, which asks whether the product could have been designed more safely. Applying these standards is further complicated by the fact that many AI tools change after leaving the manufacturer’s or seller’s hands, because their diagnostic algorithms evolve as they process more data.3


“Although artificial intelligence tools offer promising advancements in diagnosis and treatment, their legal implications are evolving and uncertain.”
— GOVIND PERSAD, JD, PhD

Still others have suggested that AI tools should be legally recognized as persons for the purposes of liability.4 This would allow patients to hold hospitals and clinics where AI tools are used vicariously liable for errors by those tools, just as those institutions can be held liable for errors by employees.

Different jurisdictions have adopted different approaches to liability when AI technologies are involved in medical errors. For instance, the European Commission has proposed an AI Liability Directive with special rules for “high-risk AI systems,” a category that likely includes medical AI systems.5 Oncology practices should be aware of the potential for rapid evolution in this area.

Providing Patient Counseling

In addition to assisting oncology professionals with diagnosis and treatment, artificial intelligence systems could be used as part of patient counseling. Some studies have evaluated the use of AI chatbots to answer patients’ cancer-related questions and have concluded that these chatbots can provide helpful answers but are not yet fully ready for patient-facing use.6

One recent study evaluated the use of chatbots to educate patients with breast cancer about the potential genetic dimensions of their condition.7 Participants learned just as much, and grasped similar concepts about genetics and testing, whether they interacted with a human counselor or a chatbot. Satisfaction with the experience was also similar. According to the research team, these findings suggest that AI tools could free genetic counselors to work more intensively with patients who have confirmed breast cancer mutations.

Artificial intelligence technologies could also help patients with cancer meet other medical needs. For instance, many patients seek psychological counseling for mental health needs. Some AI tools have been used for general psychological counseling and are now being adapted for counseling patients with cancer in particular. Some of these tools have been trained on data sets drawn from conversations among patients in support groups or between patients and their medical teams.

Anticipating Future Directions and Issues

The use of AI in patient-facing roles presents legal and ethical issues beyond those raised by its use as a diagnostic and treatment tool. Because counseling conversations involve patients’ private and intimate information, data security is legally and ethically crucial when AI systems take part in them. There are also concerns about whether patients have given informed consent to receive counseling from an AI system and whether such a system might give harmful advice without proper oversight. Harms have already occurred in counseling settings where chatbots gave advice.8

Beyond patient-focused legal and ethical issues, there are also longer-term challenges for oncology practices. One is ensuring that artificial intelligence is used to improve skills and outcomes rather than to “deskill” oncologists, eroding their capacity to assist patients effectively in the areas where AI is used.

In diagnosis, treatment, and counseling, AI could help improve professionals’ skills and give them more time to interact deeply with their patients. But the technology could also serve as a crutch that masks skill deficiencies, such as difficulty in effectively assisting patients with their social and emotional needs.

Another issue is ensuring that the oncology workforce is trained for the coming integration of AI into practice. Professionals who cannot use AI tools effectively will face challenges in maintaining their practices, just as professionals who struggled with electronic medical records or computer technology did in prior eras.

Conclusion

Although artificial intelligence tools offer promising advancements in diagnosis and treatment, their legal implications are evolving and uncertain. Understanding these complexities can help us ensure that the technology serves as an effective tool to augment human expertise and ultimately deliver better outcomes for patients with cancer. 

DISCLOSURE: Dr. Persad has received grant funding from the Greenwall Foundation.

REFERENCES

1. Krishnamurthy R, Mummudi N, Goda JS, et al: Using artificial intelligence for optimization of the processes and resource utilization in radiotherapy. JCO Glob Oncol 8:e2100393, 2022.

2. Villasenor J: Products liability law as a way to address AI harms. The Brookings Institution, October 31, 2019. Available at www.brookings.edu/research/products-liability-law-as-a-way-to-address-ai-harms. Accessed April 22, 2024.

3. Duffourc M, Gerke S: Decoding U.S. tort liability in healthcare’s black-box AI era: Lessons from the European Union. Stan Tech L Rev 27:1-70, 2024.

4. Duffourc M: Malpractice by the autonomous AI physician. U Ill JL Tech & Pol’y, 2023. Available at https://cris.maastrichtuniversity.nl/en/publications/malpractice-by-the-autonomous-ai-physician. Accessed April 22, 2024.

5. Bollans S: EU Artificial Intelligence Liability Directive. June 29, 2023. Available at www.shlegal.com/insights/eu-artificial-intelligence-liability-directive. Accessed April 22, 2024.

6. Winstead E: Can artificial intelligence–driven chatbots correctly answer questions about cancer? National Cancer Institute, October 3, 2023. Available at www.cancer.gov/news-events/cancer-currents-blog/2023/chatbots-answer-cancer-questions. Accessed April 22, 2024.

7. Lerner Research Institute: AI chatbots support education and genetic counseling for patients with breast cancer. December 6, 2023. Available at www.lerner.ccf.org/news/article/?title=AI+chatbots+support+education+and+genetic+counseling+for+patients+with+breast+cancer&id=ec47cef3b46d60aa6d03ce8e38e28a7ef902a521. Accessed April 22, 2024.

8. Wells K: An eating disorders chatbot offered dieting advice, raising fears about AI in health. NPR, June 9, 2023. Available at www.npr.org/sections/health-shots/2023/06/08/1180838096/an-eating-disorders-chatbot-offered-dieting-advice-raising-fears-about-ai-in-hea. Accessed April 22, 2024.

Dr. Persad is Associate Professor at the University of Denver Sturm College of Law.

Editor’s Note: The Law and Ethics in Oncology column is meant to provide general information about legal topics, not legal advice. The law is complex, varying from state to state, and each factual situation is different. Readers are advised to seek advice from their own attorney.

Disclaimer: This commentary represents the views of the author and may not necessarily reflect the views of ASCO or The ASCO Post.

