AI in Practice Olga Lavinda, PhD

What AI Failure Actually Costs

AI failure in healthcare is not a future risk. It is a present pattern — and it is being actively obscured by metrics that measure the wrong things.

The industry keeps shipping. The governance infrastructure does not exist. Here is what that gap is actually costing — clinically, financially, and in the procurement rooms where health AI deals live or die.

AI Governance Olga Lavinda, PhD

After the Launch: Why Post-Deployment Monitoring Is the Part of Health AI Governance Nobody Has Built

Most health systems treat validation as the finish line for AI deployment. In reality, it is the moment models begin drifting away from the clinical reality they were trained to represent. Post-deployment monitoring — the infrastructure required to detect that drift — is the part of AI governance that almost nobody has built yet. This article examines why performance degradation is inevitable, why current validation methods fail to predict it, and what monitoring infrastructure health systems must implement before regulators force the issue.

AI Literacy & Education Olga Lavinda, PhD

How to Integrate AI Literacy into Health Professions Curriculum: A Practitioner Framework

Faculty are being asked to teach AI literacy without a structured methodology for doing so. This article proposes a three-level practitioner framework — foundational literacy, validation skills, and governance competency — grounded in the RIGOR™ Framework and mapped to principles from UNESCO, the ACGME, and FDA–EMA Good AI Practice.
