In January 2026, the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) jointly introduced ten Guiding Principles for Good AI Practice in Drug Development, marking a significant step toward regulatory alignment on the responsible use of artificial intelligence across the medicines lifecycle.
These principles are not a formal regulatory standard or binding regulation. Rather, they represent a shared framework that will underpin future AI-specific guidance from both agencies, covering everything from early-stage research and clinical trials to manufacturing and post-market safety monitoring.
For pharmaceutical sponsors, contract research organizations (CROs), software and data vendors, and manufacturing partners, understanding and aligning with these principles now offers a strategic advantage as regulatory expectations continue to evolve.
Why AI Is Transforming Drug Development
Traditional drug development is a complex, resource-intensive process. Artificial intelligence is increasingly acting as a powerful enabler, helping research teams sift through large volumes of data, optimize clinical trial design, and support evidence generation for regulatory submissions.
The FDA and EMA are now working to align their regulatory expectations for AI governance, aiming to build a consistent global framework that supports innovation while preserving patient safety and data integrity. This collaborative effort is designed to:
- Align regulatory thinking on AI across major jurisdictions
- Reduce the risk of divergent requirements for globally operating sponsors
- Support consistent expectations for how AI-generated evidence should be validated and documented
- Lay the groundwork for future binding guidance in both the EU and the US
AI is already being applied across key areas of drug development, including:
- Workflow automation in pharmacovigilance and safety reporting
- Predictive risk monitoring during clinical trials
- Patient stratification and recruitment optimization
- Regulatory intelligence and submission analysis
- AI-enabled manufacturing process control and quality management
10 Guiding Principles for Good AI Practice in Drug Development
On 14 January 2026, the FDA and EMA jointly released the Guiding Principles of Good AI Practice in Drug Development, a set of ten high-level principles intended to guide the safe, responsible, and transparent use of AI across the product lifecycle.
The principles apply to sponsors, CROs, software and data vendors, and other partners involved in designing, validating, deploying, or relying on AI in regulated drug development activities.
1. Human-centric by design – Build AI systems that reflect ethical and human values. Sponsors should consider how AI may affect patients and users and build in appropriate safeguards from the start, with patient interests and public health as the primary goal.
2. Risk-based approach – Match the level of validation, oversight, and safeguards to the AI’s context of use and the associated level of risk. Lower-risk tools require lighter controls; higher-risk tools that inform critical decisions require more rigorous testing and monitoring.
3. Adherence to standards – Ensure AI work complies with applicable legal, ethical, technical, scientific, cybersecurity, and regulatory standards, including Good Clinical Practice (GCP) and Good Manufacturing Practice (GMP).
4. Clear context of use – Clearly define the AI’s intended purpose, scope, data inputs and outputs, and how its results will be used within the broader decision-making process.
5. Multidisciplinary expertise – Integrate a mix of domain expertise throughout the AI lifecycle, including clinical, scientific, data science, software engineering, cybersecurity, and patient safety knowledge matched to the intended use.
6. Data governance and documentation – Thoroughly document data sources, processing steps, and analytical choices in a transparent, traceable, and verifiable way, with appropriate privacy protections in place.
7. Model design and development practices – Follow sound software and system engineering practices in building AI models, with attention to interpretability, explainability, and robustness appropriate to the context of use.
8. Risk-based performance assessment – Evaluate the full system, including how humans interact with AI outputs in real workflows. Validation datasets and performance measures should match the stated context of use.
9. Life cycle management – Implement quality management systems that govern the AI lifecycle from development through deployment, including ongoing monitoring for issues such as data drift and periodic re-evaluation.
10. Clear, essential information – Communicate the AI system’s purpose, performance, limitations, data sources, and interpretation guidance clearly and in plain language to patients, users, and other relevant stakeholders.
Navigating FDA and EMA Expectations for AI in Drug Submissions
While the ten principles are non-binding, regulators at both agencies have indicated they align with what is already expected in practice for AI-supported submissions. Sponsors and their partners should anticipate regulatory scrutiny on the following areas:
- How and from where data used in AI systems was generated
- How data was processed and prepared for model training or validation
- How the model was tested to demonstrate suitability for its intended purpose
- The nature and quality of human oversight over AI-assisted decisions
- Cybersecurity protections applied to AI systems and underlying data
- How performance is monitored over time to detect and address issues such as data drift
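To make the last expectation concrete, lifecycle monitoring for data drift is often implemented with simple distributional checks that compare live input data against the training-time baseline. The sketch below uses the Population Stability Index (PSI), one common screening metric; the thresholds shown (0.1 and 0.25) are widely used rules of thumb, not regulatory requirements, and the function name is illustrative rather than drawn from any agency guidance.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index (PSI): a common screen for data drift.

    Compares the distribution of a feature observed in production
    ('actual') against its distribution at model training time
    ('expected'). Common rule of thumb: PSI < 0.1 stable, 0.1-0.25
    moderate shift, > 0.25 significant drift warranting investigation.
    """
    # Bin edges are fixed from the training-time (baseline) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions; epsilon avoids log(0) / division by zero.
    eps = 1e-6
    exp_p = exp_counts / exp_counts.sum() + eps
    act_p = act_counts / act_counts.sum() + eps
    return float(np.sum((act_p - exp_p) * np.log(act_p / exp_p)))
```

In practice a check like this would run on a schedule for each model input, with breaches logged and escalated through the quality management system rather than acted on automatically.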
A Structured Approach to AI Submission Readiness
Organizations seeking to align with FDA and EMA expectations for AI in drug development should consider a structured framework built around the following steps:
- Define the Regulatory Challenge – Clearly specify the decision or task the AI is intended to support, such as patient risk stratification or manufacturing quality monitoring.
- Outline the Model’s Role – Describe the AI’s function, scope, inputs, outputs, and how its results inform the broader decision or process.
- Assess Potential Risks – Evaluate the degree to which the AI influences outcomes and the potential consequences if the model underperforms.
- Develop a Validation Plan – Create a structured testing plan with defined performance measures, evaluation datasets, and success criteria calibrated to the level of risk.
- Conduct Testing and Validation – Train the model, run validation studies, and perform stress tests to confirm reliability and generalizability.
- Document Findings and Adjustments – Record all results, including any deviations or modifications made during development and validation.
- Confirm Model Readiness – Determine whether the AI is reliable and fit for its intended purpose; implement additional safeguards such as enhanced human oversight if needed.
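The first three steps above can be captured in a machine-readable record, which helps keep the context of use and risk assessment consistent across documentation. The sketch below is a minimal illustration, assuming a simple influence-by-consequence scoring scheme; the field names, levels, and tier cut-offs are hypothetical choices for this example, not terms defined by the FDA or EMA principles.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIContextOfUse:
    """Illustrative record of an AI model's context of use."""
    task: str                   # the decision or task the AI supports
    model_influence: str        # how much the AI drives the decision: "low" | "medium" | "high"
    decision_consequence: str   # impact if the model underperforms: "low" | "medium" | "high"

_LEVELS = {"low": 1, "medium": 2, "high": 3}

def validation_tier(cou: AIContextOfUse) -> int:
    """Map a context of use to a validation rigor tier (1 = lightest, 3 = most rigorous).

    Mirrors the risk-based idea above: rigor scales with how much the
    AI influences the outcome and how severe the consequences of
    underperformance would be.
    """
    score = _LEVELS[cou.model_influence] * _LEVELS[cou.decision_consequence]
    if score >= 6:
        return 3
    if score >= 3:
        return 2
    return 1
```

For example, a patient risk-stratification model that strongly drives enrollment decisions would land in the highest tier, while a back-office workflow aid with human review at every step would land in the lowest.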
Key Documentation Elements for AI-Enabled Clinical Trial Submissions
Regulators increasingly expect sponsors submitting AI-enabled clinical trial data to include:
- Model credibility reports describing the validation strategy and outcomes
- Training dataset documentation detailing data provenance, quality controls, and any pre-processing applied
- Algorithm version tracking to ensure full traceability across the development lifecycle
- Bias evaluation evidence demonstrating fair patient selection and unbiased endpoint assessment
- Clear model specifications describing architecture, inputs, outputs, and decision logic
- Pre-approved change control plans for any adaptive model updates post-submission
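Algorithm version tracking, in particular, is easy to operationalize: each released model version gets an immutable record tying together its training data reference, validation metrics, and hyperparameters, with a content fingerprint so that any silent change is detectable. The sketch below is a minimal, hypothetical illustration of that idea; the function name and fields are assumptions for this example, and a real system would typically sit inside a validated model registry.

```python
import hashlib
import json

def model_release_record(name, version, train_data_ref, metrics, params):
    """Assemble a traceable release record for one AI model version.

    The SHA-256 fingerprint is computed over the record's canonical
    JSON form, so any later change to the data reference, metrics, or
    hyperparameters produces a different fingerprint.
    """
    record = {
        "model": name,
        "version": version,
        "training_data": train_data_ref,       # provenance pointer, e.g. a dataset ID
        "validation_metrics": metrics,          # e.g. {"auc": 0.91}
        "hyperparameters": params,
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["fingerprint"] = hashlib.sha256(payload).hexdigest()
    return record
```

Records like this make it straightforward to demonstrate full traceability across the development lifecycle and to show regulators exactly which model version generated which evidence.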
Successful engagement with regulators rests on a few habits: engage early in the process, provide clear and complete documentation, maintain transparent lifecycle monitoring records, and ensure human judgment remains central to all AI-supported decisions.
Opportunities and Challenges for Pharmaceutical Companies
Opportunities
- Streamlined global regulatory submissions through alignment with shared AI expectations
- Reduced risk of regulatory divergence between FDA and EMA requirements
- Competitive advantage for organizations that invest in robust AI governance frameworks early
- Accelerated market access for AI-enabled products in both the US and EU
- Foundation for building regulatory trust in AI-generated evidence
Challenges
- Adapting existing validation and documentation practices to cover AI-specific requirements
- Building multidisciplinary teams with the range of expertise required across the AI lifecycle
- Establishing ongoing lifecycle monitoring systems capable of detecting data drift and performance decline
- Managing the complexity of AI change control and version tracking across long development timelines
- Keeping pace with evolving regulatory guidance as both agencies develop more detailed AI-specific requirements
Conclusion
The FDA and EMA’s joint Guiding Principles for Good AI Practice represent a meaningful step toward regulatory alignment on the responsible use of artificial intelligence in drug development. While these principles are currently non-binding, they signal the direction of regulatory expectations and will increasingly underpin formal guidance in both jurisdictions.
Organizations that proactively establish structured AI governance frameworks, invest in rigorous documentation and validation practices, and embed lifecycle monitoring into their AI operations will be better positioned to navigate the evolving regulatory landscape and accelerate access to patients worldwide.
AI integration in drug development will continue to expand rapidly, and regulatory expectations for transparency, model performance, and lifecycle accountability will grow in parallel. Early alignment is not just good practice; it is a strategic investment in long-term regulatory confidence.
Why Choose DDReg?
DDReg is your strategic partner in navigating the complex global pharmaceutical regulatory services landscape. We go beyond compliance, anticipating regulatory changes and aligning strategies with your business goals. Our expert team develops tailored regulatory pathways, ensuring efficient approvals, reduced risks, and seamless market entry. With DDReg, regulatory challenges become opportunities, helping your products reach patients worldwide faster and with confidence.