How predictions are validated.
Predictive systems in HealthOS are institutional capabilities under the Advisory Principle. This page documents how Veronara evaluates predictions before deployment, monitors them after deployment, defines drift, and triggers institutional review or model retirement.
Last reviewed:
Criteria published before any prediction goes live.
Each predictive system has a published evaluation document specifying the prediction target, the patient population, the performance metrics, the threshold for clinical-grade performance, and the failure modes. The document is dated and preserved without silent edit. Updates are versioned.
Prediction target. What is being predicted. What clinical or operational decision the prediction is intended to inform.
Patient population. Inclusion and exclusion criteria. Demographic and clinical scope.
Performance metrics. Sensitivity, specificity, calibration, and time-to-event distribution where relevant. Metrics are reported both under cross-validation and on a held-out set.
Demographic stability. Performance evaluated across age, sex, ethnicity, and comorbidity strata. Material variance flagged.
Failure modes. What clinical consequence follows from a wrong prediction. Override pathway documented.
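The fields above could be represented as a versioned, immutable record. A minimal sketch in Python, with illustrative field names (this is an assumption, not Veronara's actual schema):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)  # frozen: a published document is preserved without silent edit
class EvaluationDocument:
    prediction_target: str                 # what is predicted, and the decision it informs
    population: str                        # inclusion/exclusion criteria; demographic and clinical scope
    metrics: dict[str, float]              # e.g. {"sensitivity": 0.91, "specificity": 0.88}
    clinical_threshold: dict[str, float]   # published minimums for clinical-grade performance
    strata: list[str]                      # demographic strata evaluated for stability
    failure_modes: list[str]               # clinical consequence of a wrong prediction, override pathway
    published: date = field(default_factory=date.today)
    version: int = 1                       # updates create a new version, never an in-place edit
```

Marking the dataclass `frozen` mirrors the "preserved without silent edit" commitment: an update is a new versioned instance rather than a mutation of the published one.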
Post-deployment monitoring
Performance against actuals, continuously.
Each prediction is paired with an outcome event. As outcomes accrue, the predicted value is compared to the actual outcome on the institutional substrate. Performance is evaluated against the pre-deployment criteria continuously, not periodically.
Material variance from the published criteria triggers institutional review. The review is performed by the Clinical Advisory Board with the responsible clinical lead, the model owner, and external clinical input as appropriate.
Drift triggers
When a model's performance drops.
Drift triggers are defined per prediction, before deployment. Triggers include: measured performance crossing a published threshold; demographic-stratum performance diverging beyond a published variance; outcome distribution shifting beyond a published profile; clinician override rate exceeding a published baseline.
On a drift trigger: the prediction is preserved; clinical leadership is notified; institutional review is initiated; the prediction may be flagged in the clinician's workflow as "under review" until the institutional review concludes.
Model retirement
When a model is removed from service.
A predictive system may be retired by Clinical Advisory Board recommendation, executive decision, or sunset of the underlying clinical context. Retirement is an institutional event — documented, dated, and preserved on the institutional record. The model version, the period of service, and the reason for retirement are part of the audit trail.
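The audit-trail elements named above can be sketched as another immutable record; field names here are assumptions for illustration:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)  # retirement is documented, dated, and preserved
class RetirementRecord:
    model_version: str     # the retired model version
    service_start: date    # period of service: first day in deployment
    service_end: date      # period of service: retirement date
    reason: str            # board recommendation, executive decision, or context sunset
    recorded_on: date      # date the institutional event was recorded
```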
Publication commitment
Evaluation results are published.
Per-institution evaluation results are published on a recurring cadence appropriate to the institution and the prediction. Aggregate evaluation patterns — across institutions and across model versions — appear in Veronara Insights and (when appropriate) peer-reviewed publications. The methodology is public; the results are dated; the record is preserved.
Veronara Intelligence Office · Reviewed quarterly by the Clinical Advisory Board.
Propose a correction to corrections@veronara.com.