Written refuses, system-prompt enforcement, dated compliance review.
Some prompts have answers your AI must never give. Out-of-scope queries. Legal advice the company isn't licensed for. Medical advice the product isn't designed to deliver. Anything that violates the published refuse list. The Refuse Policy workflow names them, enforces them in the system prompt, tests them with evals, and gets them reviewed by legal — dated. This is the workflow that keeps the AI inside its lane.
The Refuse Policy workflow is the procedure that turns "we know what the AI shouldn't say" into a written, enforced, tested, and legally-reviewed artifact. It's the policy half of refuse behavior; the eval-test half lives inside the Eval workflow as refusal-correctness metrics.
The five questions on the readiness self-assessment that score this dimension are the five rungs of the procedure above. Yes on a question means the artifact named in that step exists on disk in your repo today.
This page is a thin first cut. Full procedural documentation — including reference DeepEval suite scaffolds, golden-set curation rubrics, and the audit-evidence checklist — lands in Phase 2 of the Institute build-out.
The free readiness self-assessment scores the Refuse-Policy workflow as one of six dimensions. Five minutes. Your weakest workflow is the one most worth fixing first.
Take the assessment →