The U.S. Food and Drug Administration's (FDA) Center for Drug Evaluation and Research (CDER) and Center for Biologics Evaluation and Research (CBER) have collaborated with the European Medicines Agency (EMA) to develop a set of 10 guiding principles for the responsible use of artificial intelligence (AI) in drug development. The principles are aimed at industry and product developers using AI to facilitate and accelerate the development of drug and biological products.
As the use of AI expands across the drug product life cycle, the two regulatory agencies are working to establish good practice standards for industry and product developers. These standards are designed to align with the rigorous quality expectations of drug development, ensuring patient safety and therapeutic benefit. AI technologies hold significant promise in accelerating time-to-market for new drugs, enhancing pharmacovigilance, and reducing reliance on animal testing, objectives that align with broader FDA initiatives. However, without careful oversight in the design, validation, and application of these technologies, there is a risk of generating inaccurate or unreliable results. By proposing a framework for best practices, the agencies aim to foster the responsible integration of AI in drug development.
The principles also identify areas on which international organizations can focus as they develop further regulatory and legal policies and guidelines for the responsible advancement of AI in drug development and the broader field of medicine.
The 10 principles of good practice for AI use in drug development set out by the FDA and EMA are:
- Human-centric by design—for alignment with ethical and human-centric values
- Risk-based approach
- Adherence to standards—including legal, ethical, technical, scientific, cybersecurity, and regulatory standards
- Clear context of use
- Multidisciplinary expertise
- Data governance and documentation—especially for the privacy and protection of sensitive data
- Model design and development practices—promotes transparency, reliability, generalizability, and robustness in the design and development of AI models and technologies
- Risk-based performance assessment
- Life cycle management—stresses scheduled monitoring and periodic re-evaluation of AI systems for performance optimization, including to address potential data drift (a minimal illustration follows this list)
- Clear, essential information—uses plain language to convey information to the intended audience
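The principles do not prescribe specific methods, but as a rough illustration of what scheduled data-drift monitoring under the life cycle management principle might look like in practice, the sketch below compares a reference (training-era) feature distribution against recent production data using a two-sample Kolmogorov-Smirnov test. The function names, threshold, and data are hypothetical assumptions for illustration only, not part of the FDA/EMA guidance.

```python
# Minimal sketch of a scheduled data-drift check. Hypothetical example;
# the FDA/EMA principles do not prescribe any particular test or threshold.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, live: np.ndarray,
                 alpha: float = 0.01) -> bool:
    """Flag drift when a two-sample Kolmogorov-Smirnov test rejects the
    hypothesis that both samples come from the same distribution."""
    _statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha

# Illustrative data: a model input whose production distribution has shifted.
rng = np.random.default_rng(seed=0)
reference = rng.normal(loc=5.0, scale=1.0, size=5_000)  # training-era data
live = rng.normal(loc=5.6, scale=1.0, size=5_000)       # recent production data

if detect_drift(reference, live):
    print("Data drift detected: trigger model re-evaluation.")
else:
    print("No significant drift detected.")
```

In a real life cycle management program, a check like this would run on a schedule, cover all relevant model inputs and outputs, and feed into the documented re-evaluation process the principle calls for.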
To learn more, visit FDA.gov.

