The draft guidance document titled "Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products" provides a comprehensive framework for using AI in the drug product life cycle. It emphasizes establishing and maintaining the credibility of AI models to ensure their trustworthiness when supporting regulatory decisions regarding drug safety, effectiveness, and quality.
Key Highlights:
- Risk-Based Credibility Assessment Framework: A step-by-step process for assessing and establishing the credibility of AI models, focused on the model's context of use (COU) and model risk. The steps include defining the question of interest, assessing model risk, and developing a detailed plan to establish the credibility of the AI model's output, followed by executing the plan, assessing model adequacy, and documenting results and any deviations. The framework emphasizes rigorous documentation throughout.
- Importance of Data: The quality, relevance, and representativeness of the data used to train AI models are crucial. Data quality issues, such as unrepresentative datasets or data drift, can introduce bias. Sponsors must ensure that the data used in AI models are fit for use, with a focus on accuracy and traceability.
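As a loose illustration of the representativeness concern, a sponsor might compare subgroup shares in the training data against the intended use population. The subgroups, proportions, and 10% tolerance below are hypothetical examples, not values drawn from the guidance:

```python
# Hypothetical sketch: flag subgroups whose share of the training data
# differs materially from the intended use population. The guidance does
# not prescribe any specific check; this is one simple possibility.
def representativeness_gaps(train_props, target_props, tolerance=0.10):
    """Return subgroups whose training-set proportion differs from the
    target-population proportion by more than `tolerance` (absolute)."""
    return {
        group: (train_props.get(group, 0.0), target_props[group])
        for group in target_props
        if abs(train_props.get(group, 0.0) - target_props[group]) > tolerance
    }

# Example: patients aged 65+ make up 30% of the target population
# but only 5% of the training data -- a potential source of bias.
train = {"age_65_plus": 0.05, "female": 0.48}
target = {"age_65_plus": 0.30, "female": 0.50}
print(representativeness_gaps(train, target))  # {'age_65_plus': (0.05, 0.3)}
```

Checks like this are a starting point only; what counts as "fit for use" depends on the COU and would be documented in the credibility assessment plan.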
- Model Risk Assessment: Model risk is evaluated from two factors: the influence of the AI model's output relative to other evidence sources, and the consequences of an incorrect model prediction. A model with high influence and significant decision consequences requires more stringent credibility assessment activities.
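The two-factor structure can be pictured as a simple lookup matrix. The tiers and mappings below are an illustrative sketch of that idea, not a scoring scheme prescribed by the guidance:

```python
# Illustrative only: model risk as a function of two factors --
# model influence and decision consequence. The specific tiers and
# combinations here are hypothetical.
RISK_MATRIX = {
    ("low", "low"): "low",
    ("low", "high"): "medium",
    ("high", "low"): "medium",
    ("high", "high"): "high",
}

def model_risk(influence: str, consequence: str) -> str:
    """Map model influence and decision consequence to a model risk tier."""
    return RISK_MATRIX[(influence, consequence)]

# High influence plus high decision consequence implies the most
# stringent credibility assessment activities.
print(model_risk("high", "high"))  # high
```

In practice the assessment is a reasoned judgment documented by the sponsor, not a mechanical lookup; the matrix only conveys how the two factors combine.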
- Life Cycle Maintenance: AI models require continuous monitoring and maintenance throughout their life cycle so that they remain "fit for use" as data and deployment environments evolve. Any modification to a model should be assessed for its impact on model performance.
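One way such monitoring can look in practice is a periodic drift check on incoming data. The guidance does not mandate any particular drift metric; the standardized mean-shift check below, and its threshold, are assumptions chosen for illustration:

```python
# Hypothetical monitoring sketch: flag drift when a new batch of input
# data shifts away from the training baseline. The metric and threshold
# are illustrative stand-ins for whatever a sponsor's maintenance plan
# actually specifies.
from statistics import mean, stdev

def drift_detected(baseline, current, threshold=2.0):
    """Flag drift when the current batch mean deviates from the baseline
    mean by more than `threshold` baseline standard deviations."""
    shift = abs(mean(current) - mean(baseline)) / stdev(baseline)
    return shift > threshold

baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.05]   # training-era measurements
shifted = [12.5, 12.8, 12.6, 12.9, 12.4, 12.7]   # later production batch

print(drift_detected(baseline, baseline))  # False
print(drift_detected(baseline, shifted))   # True
```

A triggered check would prompt a reassessment of whether the model is still fit for use within its COU, and possibly retraining, with the change itself evaluated for impact on performance.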
- Early Engagement with FDA: Sponsors are encouraged to engage early with the FDA to align expectations regarding the AI model's credibility assessment, model risk, and COU. Early discussions help identify challenges and ensure that appropriate actions are taken to address them.
- Use of AI in Specific Phases: The guidance outlines potential applications of AI across the drug product life cycle, from nonclinical phases through postmarketing activities, with examples in clinical development and manufacturing. It offers specific cases where AI can streamline processes, such as using predictive models in clinical pharmacokinetics or automating visual assessments in manufacturing.
Recommendations:
- Sponsors are encouraged to maintain transparency in AI model development and provide detailed documentation of training and testing procedures.
- Given the evolving nature of AI technology, it is crucial to stay engaged with the FDA to discuss any modifications and ensure that models continue to meet regulatory requirements.
This document sets out key considerations for stakeholders aiming to integrate AI into drug development and manufacturing processes, offering recommendations on how to mitigate risks and establish the credibility of AI models that influence regulatory decisions.