
Policies: your rules as code, enforced at runtime

Policies turn your acceptable-use guidelines, data-handling requirements, and regulatory commitments into executable rules. Applied to every message, every tool call, every agent step. Before the model, not after.

Enforcement stack

Five gates, one direction: safe

A request doesn't reach a model unless it clears every gate. A response doesn't reach a user unless it does too.

Incoming request

L1: Identity & role (authn/authz)
Who is asking? Verified via SSO, scoped to their Space and role.

L2: Content policy (content)
Block-lists, personal-data detection, redaction, and category filters applied to both input and output.

L3: Data residency (residency)
EU data stays in EU-region models. NL-only datasets route to NL-only providers.

L4: Model whitelist (models)
Only pre-approved models are allowed per Space. High-stakes work can be restricted to fine-tuned models.

L5: Audit seal (evidence)
Every allow or deny is logged with inputs, policy version, and the matching rule.

Allowed → model / tool
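The gate sequence above can be sketched as a single in-order pipeline: each gate either passes the request on or short-circuits with a deny that carries the matching rule for the audit entry. This is a minimal illustration, not the product's actual API; every gate function, field name, and rule name here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    allowed: bool
    gate: str   # which gate decided
    rule: str   # the matching rule, recorded in the audit seal (L5)

# Hypothetical gate checks; each returns (allowed, rule).
def check_identity(req):  return ("sso" in req, "require-sso")
def check_content(req):   return ("bsn" not in req.get("text", "").lower(), "no-raw-bsn")
def check_residency(req): return (req.get("region") == "eu", "eu-only")
def check_models(req):    return (req.get("model") in {"approved-model-a"}, "model-whitelist")

GATES = [("identity", check_identity), ("content", check_content),
         ("residency", check_residency), ("models", check_models)]

def evaluate(req):
    """Run every gate in order; the first deny short-circuits.
    Allow or deny, the decision would be written to the audit log."""
    for name, gate in GATES:
        allowed, rule = gate(req)
        if not allowed:
            return Decision(False, name, rule)
    return Decision(True, "all", "passed-all-gates")
```

The one-direction property falls out of the structure: there is no code path to the model that skips a gate, and every deny names the gate and rule that fired.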
Why runtime

Guardrails aren't a training thing

Enforced at the edge

Rules fire before the request leaves your governed boundary. Not in a post-hoc classifier that might have already leaked data.

Versioned and reviewable

Policies live in Git-like history. Every change is reviewed, dated, and tied to the person who made it.

Fast enough to not matter

Sub-50 ms in the hot path. Policy checks never become the reason people "just use ChatGPT" instead.

What teams encode

Common policies we've shipped

Strip personal data before model call

Replace BSN, IBAN, emails, and phone numbers with tokens. Restore them only for approved recipients.
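The strip-and-restore pattern behind this policy can be sketched as below. The regexes are deliberately simplified (real BSN and IBAN detection also validates check digits, e.g. the BSN 11-test), and the function names are illustrative, not the product's API.

```python
import re

# Simplified detection patterns; production rules would be stricter.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "BSN":   re.compile(r"\b\d{9}\b"),
}

def tokenize(text):
    """Replace personal data with tokens before the model call; return the
    redacted text plus the vault needed to restore originals later."""
    vault = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            vault[token] = match
            text = text.replace(match, token, 1)
    return text, vault

def restore(text, vault):
    """Inverse of tokenize, applied only for approved recipients."""
    for token, original in vault.items():
        text = text.replace(token, original)
    return text
```

The model only ever sees tokens; the vault stays inside the governed boundary, so restoration is itself a policy decision.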

Block external AI for HR

HR Spaces can only use AI deployed in-region with a signed DPA.

Require approval for mass email

An agent wanting to send to >50 recipients must clear a human approval gate first.
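Encoded as a rule, this is a pre-tool-call check: small sends pass through, mass sends park until a human signs off. A minimal sketch, assuming a hypothetical send_email tool call shape and approvals store; none of these names come from the product.

```python
APPROVAL_THRESHOLD = 50  # from the policy: >50 recipients needs human sign-off

def check_send_email(tool_call, approvals):
    """Gate a hypothetical send_email tool call before it executes."""
    recipients = tool_call.get("recipients", [])
    if len(recipients) <= APPROVAL_THRESHOLD:
        return "allow"
    if tool_call.get("id") in approvals:
        return "allow"            # a human already approved this exact call
    return "needs-approval"       # park the call until someone signs off
```

Because the check keys on the specific call id, an approval covers one send, not a blanket exemption for the agent.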

Flag prompts that risk unreliable answers

Low-confidence answers on regulated topics get a "citation required" flag and a reviewer.
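This last rule combines a topic tag with a confidence score from the model call. A sketch of the shape, with illustrative topic tags and a hypothetical confidence threshold:

```python
REGULATED_TOPICS = {"tax", "medical", "pension"}  # illustrative topic tags
CONFIDENCE_FLOOR = 0.7                            # hypothetical threshold

def review_flags(answer):
    """Flag low-confidence answers on regulated topics for human review."""
    flags = []
    if answer["topic"] in REGULATED_TOPICS and answer["confidence"] < CONFIDENCE_FLOOR:
        flags += ["citation-required", "route-to-reviewer"]
    return flags
```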

Bring your policy, see it enforced in 30 minutes.

Send us one rule from your AI acceptable-use policy. We'll encode it live on the call and show you the audit entry.