01
Audit, not certification.
Real institutes that aren't regulatory bodies don't sell certifications. Gloxx publishes the methodology, conducts audits against it, and writes a dated report scoped to the artifacts reviewed. The report says what we observed and how we scored it against the published rubric. It does not promise an outcome.
The word "certification" never appears as a Gloxx product name. The word "certified" never appears in client-facing copy. The Maturity Model is descriptive — what's actually true at each level — not aspirational. Audits are dated, scoped, and explicit about what was not reviewed. Findings reflect the reviewer's judgment against the published rubric, not a guarantee.
This is a constraint on every Gloxx page, every audit report, and every conversation with a client. The vocabulary on this site — audit, review, rubric, level — is methodological, not regulatory.
The commitment: If you read a Gloxx audit report and find language that implies certification, regulatory attestation, or warranty of outcome, that's a defect in our work. Tell us and we'll publish a dated correction.
02
Methodology over warranties.
We publish more than we sell. The six AI-QA Workflows, the Maturity Model, the Self-Assessment, the Tools catalog, the quarterly Reports, the Journal, the Glossary — all free, all named, all dated. The retainer is the same methodology run on your code by a senior QA lead, every week. There is no proprietary playbook hidden behind the engagement.
This is a deliberate trade. We could lock the methodology behind an enterprise sale. We don't, because methodology is durable and warranties are not. A team that reads the Maturity Model and runs the workflows on its own is a successful outcome for the Institute, even if it never becomes a Gloxx client. A team that hires Gloxx is hiring the senior accountability and the AI-leveraged execution, not a secret recipe.
Every audit deliverable carries the standard non-warranty footer: "This audit is a methodology-based assessment of release readiness at a point in time. It is not a regulatory certification, compliance attestation, or warranty of any outcome. The Gloxx QA Institute is a publisher of methodology and a service provider; it is not a regulatory or standards body."
The commitment: If you read the Institute pages and decide to run the discipline yourself, that's a successful outcome. If you decide that running it well requires senior accountability, the retainer is one number — $15k/month, month-to-month, no tier negotiation.
03
AI is leverage, not replacement.
A senior QA team multiplied by AI-augmented authoring, eval running, bug triage, and release-gate work delivers what a 3-person in-house function would, at a third of the cost. That's the engine that makes one flat $15k/month number competitive against a $400–600k/year fully-loaded team.
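The cost claim above reduces to simple arithmetic. A minimal sketch, using only the figures quoted in this section (the flat $15k/month retainer and the $400–600k/year fully-loaded range for a 3-person in-house function):

```python
# Annualize the flat retainer quoted above.
retainer_monthly = 15_000
retainer_annual = retainer_monthly * 12  # $180,000/year

# Fully-loaded cost range for a 3-person in-house QA function,
# as quoted in this section.
team_low, team_high = 400_000, 600_000

# Retainer as a fraction of the in-house range.
frac_at_high_end = retainer_annual / team_high  # 0.30
frac_at_low_end = retainer_annual / team_low    # 0.45

print(f"${retainer_annual:,}/year is {frac_at_high_end:.0%}-{frac_at_low_end:.0%} "
      f"of the in-house range")
# prints "$180,000/year is 30%-45% of the in-house range"
```

So "a third of the cost" holds at the top of the quoted range and is conservative relative to the bottom of it.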
But AI is leverage, not the worker. The accountable mind on the call when something breaks in production is human. The judgment about whether an L4 incident is severe enough to escalate is human. The decision to drop an eval threshold because legal cleared a new use case is human. Gloxx is built so that every decision happens with a person on the other end — leveraged by AI, but not replaced by it.
This is a constraint on how Gloxx engagements are structured. There is no automated retainer, no hands-off subscription, no AI agent that approves your release without a human in the loop. The retainer means a senior QA lead is doing the work, with AI as a force multiplier, accountable on the calendar and on the call.
The commitment: If a Gloxx engagement ever ships an audit report, a release-gate decision, or a refuse-policy review without a named human signing off, that's a defect in our work — not a feature.
The doctrine, in one line
Audit. Methodology. Leverage. And a person on the call when something breaks.