Production sampling, online evals, paging integration.
Eval suites in CI tell you what's true today on synthetic cases. Drift Monitoring tells you what's true in production right now, on real traffic, with real model versions, before users find the regression. This workflow exists because model behavior silently degrades: providers ship changes, the retrieval corpus drifts, prompt edits ripple downstream. The Drift workflow makes that degradation visible and pageable.
The Drift workflow is the procedure for continuously running the same eval rubric against production traffic and paging on-call when scores degrade. It has five rungs (a code sketch follows the list):

1. Continuously sample production AI traffic.
2. Run the same eval rubric CI uses against the sampled traffic.
3. Compare the resulting scores to the dev/CI baseline.
4. Check the delta against a documented degradation threshold.
5. Page on-call when the threshold is breached.

It's the production-side complement to the Eval workflow.
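A minimal sketch of that loop in Python. Every name here is an assumption, not a prescribed API: `fetch_production_samples`, `score_with_rubric`, and `page_oncall` are hypothetical stand-ins for your own traffic sampler, CI rubric, and alerting hook, and the baseline and threshold constants would in practice be read from files committed to the repo.

```python
import random
import statistics

# In practice these live as committed artifacts (e.g. a baseline JSON and a
# threshold config); hardcoded here so the sketch runs standalone.
CI_BASELINE = 0.91            # mean rubric score from the dev/CI eval run
DEGRADATION_THRESHOLD = 0.05  # documented: page if prod falls this far below

def fetch_production_samples(n: int = 50) -> list[dict]:
    """Hypothetical sampler: pull n recent production traces at random."""
    return [{"input": f"query-{i}", "output": f"answer-{i}"} for i in range(n)]

def score_with_rubric(trace: dict) -> float:
    """Hypothetical scorer: apply the same rubric CI uses (a DeepEval
    metric, an LLM judge, etc.) to one trace. Stand-in returns noise."""
    return random.uniform(0.80, 1.00)

def page_oncall(message: str) -> None:
    """Hypothetical paging hook: swap in your PagerDuty/Opsgenie call."""
    print(f"PAGE on-call: {message}")

def drift_check() -> None:
    traces = fetch_production_samples()        # rung 1: sample traffic
    prod_mean = statistics.mean(
        score_with_rubric(t) for t in traces   # rung 2: run the rubric
    )
    delta = CI_BASELINE - prod_mean            # rung 3: compare to baseline
    if delta > DEGRADATION_THRESHOLD:          # rung 4: documented threshold
        page_oncall(                           # rung 5: page on-call
            f"production mean {prod_mean:.3f} is {delta:.3f} below "
            f"baseline {CI_BASELINE:.3f} (threshold {DEGRADATION_THRESHOLD})"
        )

if __name__ == "__main__":
    drift_check()  # run on a schedule (cron, Airflow, etc.), not once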
The five questions on the readiness self-assessment that score this dimension map one-to-one to the five rungs above. Yes on a question means the artifact that rung names exists on disk in your repo today.
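One way to make "exists on disk" checkable is a small script that fails CI when an artifact is missing. A sketch under assumed paths: the five filenames below are one hypothetical repo layout, one artifact per rung, not a prescribed structure.

```python
from pathlib import Path

# Hypothetical repo layout: one committed artifact per rung of the procedure.
REQUIRED_ARTIFACTS = {
    "production sampler config": Path("drift/sampler.yaml"),
    "eval rubric": Path("evals/rubric.py"),
    "dev/CI baseline scores": Path("evals/baseline.json"),
    "documented degradation threshold": Path("drift/threshold.json"),
    "paging integration config": Path("drift/paging.yaml"),
}

def main() -> int:
    missing = [name for name, path in REQUIRED_ARTIFACTS.items()
               if not path.exists()]
    for name in missing:
        print(f"missing artifact: {name} ({REQUIRED_ARTIFACTS[name]})")
    return 1 if missing else 0

if __name__ == "__main__":
    raise SystemExit(main())
```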
This page is a thin first cut. Full procedural documentation — including reference DeepEval suite scaffolds, golden-set curation rubrics, and the audit-evidence checklist — lands in Phase 2 of the Institute build-out.
The free readiness self-assessment scores the Drift workflow as one of six dimensions. Five minutes. Your weakest workflow is the one most worth fixing first.
Take the assessment →