The FDA’s Warning on AI in GMP: What AI Use Means for Industry Compliance
The FDA recently issued a warning letter related to the use of artificial intelligence (AI) in Good Manufacturing Practice (GMP) activities. This development has garnered wide attention, turning focus to the critical implications of AI use in regulated environments.
AI Without Oversight
The warning letter was directed at a pharmaceutical manufacturing company that utilized AI to develop drug product specifications, standard operating procedures (SOPs), and master production records. However, the core issue was not the use of AI itself, but the absence of a review process by authorized personnel within the quality unit. This oversight signifies a broader concern: AI-generated outputs were being accepted without human validation, contrary to established regulatory requirements.
The company had also neglected process validation—a fundamental GMP requirement—because the AI tool did not identify it as a necessary step. This failure underscores the inherent risk of over-reliance on AI without human oversight and understanding.
AI in Compliance
There’s a growing notion of “AI for compliance,” where AI tools are perceived as advanced autocomplete systems for regulated documents. They produce outputs that appear correct, ticking obvious boxes but potentially missing nuanced regulatory requirements. The real danger lies in the users’ unawareness: both the AI and the personnel relying on it might not know what is missing.
FDA’s Stance: Human Oversight
The FDA states that AI-derived outputs used in GMP activities must be subject to review and approval by the quality unit. This principle is foundational and has always been integral to regulated document creation. While AI accelerates the pace and expands the scope of documentation, it doesn’t alter the essential obligation of human oversight.
This incident serves as an example of what AI use in scientific and regulatory contexts truly represents. It’s not enough for AI to generate regulatory content; it must fit within a validated, reviewable workflow. Establishing such a system is more challenging but critical for compliance.
These issues pointed to inadequate quality systems at the company, falling short of the expectations outlined in the FDA’s guidance document, “Quality Systems Approach to Pharmaceutical CGMP Regulations.”
The Way Forward
As AI becomes more integrated into pharmaceutical manufacturing, companies must embed oversight and validation processes into their use of AI technologies. Any AI output needs rigorous review and clearance by authorized quality personnel according to 21 CFR 211.22 and 21 CFR 211.100.
Ensuring AI complements rather than replaces human insight will be key. This development underlines the importance of maintaining a balance between technology and human expertise to safeguard quality and compliance in pharmaceutical manufacturing.
For companies contemplating the use of AI in GMP environments, this warning letter serves as both a cautionary tale and a roadmap toward compliant, effective integration of AI technologies into their operations.
I’m following the conversation on FDA’s recent warning letter, which marks an important moment for AI’s role in regulated industries like pharmaceuticals. While AI holds immense potential to transform our operations, it comes with an equally significant responsibility.
AI should not operate in isolation or replace critical human oversight. Specific training, robust guardrails, and meticulous review processes must be embedded within any AI workflow. Verification and validation are equally mandatory tasks that should be performed by qualified quality personnel to ensure compliance and maintain trust in the output.
At SciNote, we recognize AI’s capability to enhance productivity and innovation, but it must always be supported by stringent regulatory practices. Only through specialist oversight can we leverage AI’s power while safeguarding product quality and maintaining compliance.
Let’s embrace AI in a way that elevates our standards of output rather than lowering them.
Reference: FDA warning letter
Brendan McCorkle
CEO at SciNote
Steps to Take
To ensure AI outputs are compliant with GMP regulations, a company should follow a structured approach that integrates AI technology into existing regulatory frameworks while maintaining rigorous oversight. Here are specific steps to consider:
1. Understand Regulatory Requirements
– Familiarize yourself with GMP regulations such as 21 CFR Part 11, which outlines the criteria under which electronic records and signatures are considered trustworthy and equivalent to paper records.
– Review guidance documents like the FDA’s “Quality Systems Approach to Pharmaceutical CGMP Regulations” to understand the broader quality system expectations.
2. Develop AI Governance Policies
– Establish clear policies governing the use of AI in GMP processes, including how AI-generated outputs will be reviewed and validated.
– Define roles and responsibilities specific to AI oversight within the quality unit to ensure accountability.
3. Implement a Review and Approval Process
– Require all AI-generated outputs (e.g., drug specifications, SOPs, production records) to be reviewed and approved by qualified personnel in the quality unit.
– Develop a documented review process that assesses the accuracy, relevancy, and compliance of AI outputs.
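To make the review step above concrete, here is a minimal sketch of how a documented review record for an AI-generated output might be structured. The class, field names, and check criteria are hypothetical illustrations, not taken from any FDA regulation or from a specific quality system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: capturing a documented quality-unit review of an
# AI-generated output. Statuses and criteria are illustrative only.

@dataclass
class AIOutputReview:
    document_id: str          # e.g. an SOP or specification identifier
    output_type: str          # "SOP", "specification", "production record"
    reviewer: str             # qualified quality-unit member
    checks: dict = field(default_factory=dict)  # criterion -> pass/fail
    status: str = "pending"   # pending -> approved / rejected
    reviewed_at: str = ""

    def approve(self):
        # Approval is only valid once every documented check has passed,
        # mirroring the requirement that no AI output is accepted unreviewed.
        if not self.checks or not all(self.checks.values()):
            raise ValueError("all review criteria must pass before approval")
        self.status = "approved"
        self.reviewed_at = datetime.now(timezone.utc).isoformat()

review = AIOutputReview(
    document_id="SOP-0421",
    output_type="SOP",
    reviewer="J. Smith (QA)",
    checks={"accuracy": True, "relevancy": True, "compliance": True},
)
review.approve()
print(review.status)  # approved
```

The point of the sketch is that approval is gated on explicit, recorded criteria: an output with any failing check cannot be marked approved at all.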
4. Conduct Process Validation
– Ensure that AI-influenced processes undergo rigorous process validation to verify that the AI outputs meet predefined specifications and quality criteria.
– Document the validation process, including any deviations and corrective actions taken.
5. Integrate AI into a Validated Quality System
– Incorporate AI tools into a system that is validated as per GMP requirements, ensuring that the system itself is subject to IQ, OQ, and PQ.
– Ensure the AI system maintains data integrity, traceability, and auditability.
6. Training and Competency Development
– Provide regular training for staff involved in AI-related tasks, emphasizing the importance of compliance and the nuances of using AI in regulated environments.
– Ensure that personnel are competent to evaluate and verify AI outputs.
7. Implement Change Management
– Establish a change control process to manage updates and changes in AI algorithms or models, ensuring that changes do not adversely affect compliance.
– Revalidate AI systems as needed following any significant changes.
8. Monitor and Audit AI Output Use
– Continuously monitor the performance of AI systems and conduct regular audits to ensure ongoing compliance with GMP standards.
– Use audit trails to track changes and reviews of AI-generated content.
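One common way to make an audit trail tamper-evident is to chain each entry to a hash of the previous one, so that any later edit to a recorded entry breaks verification. The sketch below is an illustrative example of that idea; the class and field names are hypothetical and do not represent any particular validated system.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative hash-chained audit trail: each entry stores the hash of
# the previous entry, so retroactive edits are detectable on verify().

class AuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, user, action, document_id):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "user": user,
            "action": action,
            "document_id": document_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        # Hash the entry body (sorted keys give a stable serialization).
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)

    def verify(self):
        # Recompute every hash and confirm the chain is unbroken.
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.record("J. Smith (QA)", "reviewed", "SOP-0421")
trail.record("A. Lee (QA)", "approved", "SOP-0421")
print(trail.verify())  # True
```

If any recorded entry is altered after the fact, its recomputed hash no longer matches and `verify()` returns False, which is the property an auditable record of AI-output reviews needs.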
9. Establish a Risk Management Framework
– Conduct risk assessments to identify and mitigate potential compliance risks associated with the use of AI in GMP activities.
– Develop mitigation strategies to address identified risks, ensuring that AI use does not compromise product quality or safety.
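Risk assessments of this kind are often quantified with a risk priority number (severity × probability × detectability), so that mitigation effort goes to the highest-scoring risks first. The sketch below shows that scoring style; the scale, the example risks, and their scores are hypothetical and are not drawn from FDA guidance.

```python
# Illustrative risk-priority-number (RPN) scoring: each factor is rated
# 1 (low) to 5 (high), and risks are ranked by the product of the three.

def risk_priority(severity, probability, detectability):
    """Return severity * probability * detectability; higher = riskier."""
    for score in (severity, probability, detectability):
        if not 1 <= score <= 5:
            raise ValueError("scores must be between 1 and 5")
    return severity * probability * detectability

# Hypothetical example risks for AI use in GMP activities.
risks = {
    "AI omits a required validation step": risk_priority(5, 3, 4),
    "AI output reviewed by unqualified staff": risk_priority(4, 2, 2),
    "Model update silently changes output format": risk_priority(2, 4, 3),
}

# Rank risks so mitigation strategies target the highest RPNs first.
for name, rpn in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{name}: RPN={rpn}")
```

The specific thresholds a company sets (e.g. which RPN triggers mandatory mitigation) would be defined in its own risk management SOPs.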
10. Engage with Regulatory Bodies
– Maintain open communication with regulatory agencies to stay informed about emerging guidelines and expectations regarding AI use in GMP environments.
– Consider seeking advice or clarification from regulators when planning to implement new AI technologies.
By integrating these steps into their quality management systems, companies can leverage AI benefits while ensuring compliance with GMP regulations, safeguarding product quality, and minimizing regulatory risks.