The FDA's Good AI Practice Guidance Signals a Structural Shift
Olga Lavinda, PhD · Founder & CEO, Health AI LLC · March 2026
A new joint FDA/EMA framework defines what responsible AI governance looks like in practice. The challenge for most organizations is not philosophical. It is operational. Most AI systems are validated for regulators and then discovered to be commercially worthless. RIGOR™ closes that gap by defining how evidence is generated after deployment.
For years, "responsible AI" has been a slogan. The FDA and EMA have just started turning it into an operational requirement. On January 14, 2026, the two agencies jointly released the Guiding Principles of Good AI Practice in Drug Development: ten principles governing the safe, responsible, and transparent use of artificial intelligence across the full product lifecycle.
The document is currently non-binding. Its significance is structural. When the FDA and EMA (together representing the world's two largest pharmaceutical markets) publish aligned expectations and explicitly state those expectations will underpin future guidance in both jurisdictions, the practical effect on submission review, inspection readiness, and vendor due diligence is immediate. Legal analysts across the industry reached the same conclusion: early alignment is strategically necessary, not optional.
The Gap Is Operational, Not Philosophical
Most healthcare organizations don't resist the idea of responsible AI. The stated commitments to safety, transparency, and accountability are genuine. The problem is the distance between principle and practice, and the data on that distance is unambiguous.
A late-2025 survey of 182 hospital leaders found that 70 percent had experienced at least one AI pilot failure attributable to weak endpoints, workflow misalignment, or data gaps. Only 22 percent said they were confident they could produce a complete AI audit trail within thirty days for regulators or payers. Most deployed AI systems have no mechanism at all for answering the question of what happened after the tool was used.
The pattern extends beyond hospital systems. The 2026 HealthEdge Annual Payer Report found that while nearly all health payers have deployed AI in some form, only 31 percent have fully defined governance models and controls in place. The implication is direct: organizations that can produce post-deployment real-world evidence will be able to bill under the new 2026 CMS CPT codes for AI-enabled services. Organizations that cannot will not.
What the Guidance Operationally Demands
Translated from regulatory language into operational requirements, the ten FDA/EMA principles describe a lifecycle model: requirements defined before deployment, architectural and validation decisions made with accountability in mind, governance structures in place during operation, evidence of real-world performance, and continuous monitoring thereafter.
The practical implication is significant: an AI system cannot be governed the way a software plug-in is managed. Each stage of that lifecycle demands its own infrastructure, from requirements definition and structured pre-deployment validation through accountability structures during operation, documented real-world performance evidence, and active monitoring for drift and degradation after go-live. Very few healthcare organizations currently have that infrastructure in place.
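To make the post-deployment stage concrete: the guidance prescribes no implementation, and every name below is hypothetical, but the "audit trail" and "drift monitoring" requirements can, at their simplest, reduce to logging an auditable record for each use of a model and comparing recent outputs against the validation baseline. A minimal sketch, assuming a single numeric prediction per inference:

```python
import json
import statistics
from dataclasses import dataclass, asdict

@dataclass
class InferenceRecord:
    # Hypothetical audit-trail entry; fields are illustrative,
    # not prescribed by the FDA/EMA guidance.
    model_version: str
    prediction: float
    clinician_action: str  # what actually happened after the model was used

class DriftMonitor:
    """Logs each inference and flags drift against a validation baseline."""

    def __init__(self, baseline_mean: float, tolerance: float, window: int = 100):
        self.baseline_mean = baseline_mean  # mean prediction observed at validation
        self.tolerance = tolerance          # allowed deviation before flagging
        self.window = window                # number of recent records to compare
        self.records: list[InferenceRecord] = []

    def log(self, record: InferenceRecord) -> None:
        self.records.append(record)

    def drifted(self) -> bool:
        recent = [r.prediction for r in self.records[-self.window:]]
        if len(recent) < self.window:
            return False  # not enough post-deployment data to judge
        return abs(statistics.mean(recent) - self.baseline_mean) > self.tolerance

    def audit_trail(self) -> str:
        # The "audit trail within thirty days" question reduces to
        # serializing what was logged at the point of use.
        return json.dumps([asdict(r) for r in self.records])
```

Real deployments would track far richer signals (input distributions, subgroup performance, clinician overrides), but the design point stands: none of these questions can be answered retroactively unless the record is created at the moment the model is used.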
A System Built for This Moment
The emerging regulatory consensus implicitly assumes organizations already possess a structured lifecycle model for AI governance. The data makes clear that most do not. The RIGOR™ Framework was built as that operational model: not as a response to this guidance, but as an independent answer to the same underlying problem that Good AI Practice now formally defines.
For organizations already operating with this kind of structured lifecycle model, the FDA/EMA guidance is confirmation. For those that are not, it defines precisely what needs to be built. The question is no longer whether healthcare AI will be held to rigorous validation standards. The question is whether you build the evidence architecture before or after a consequential failure makes the decision for you.
References
1. FDA and EMA. Guiding Principles of Good AI Practice in Drug Development. January 14, 2026. fda.gov
2. Black Book Research. Hospital AI Governance Survey (n=182, Oct-Nov 2025). Becker's Hospital Review, November 12, 2025.
3. HealthEdge. 2026 Annual Payer Report: Healthcare AI Trends 2026.
4. Hussein R, et al. Advancing healthcare AI governance through a comprehensive maturity model. npj Digital Medicine. 2026.
5. RIGOR™ Framework, Health AI LLC. healthai.com/rigor