Platform capability

Understands intent before picking a model

Every message passes through an intent classifier first. The Platform works out what you actually need: a quick answer, a calculation, a search, an action, or a long-form analysis. Then it plans from there. No model is called until the plan is sound.

Intent pipeline

From prompt to plan

The Platform treats a prompt like a request, not a riddle. It classifies, enriches, plans, and only then reaches for a model.

Prompt
01 Classify. What kind of ask is this?
02 Enrich. Attach Space context and history.
03 Plan. Which tools? Which steps?
04 Route. Hand off to the right model.
Safe plan
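The four steps above can be sketched as a minimal pipeline. Everything here is illustrative: the intent labels, the `Plan` shape, the keyword heuristic, and the routing table are assumptions for the sketch, not the Platform's actual taxonomy or API.

```python
from dataclasses import dataclass


@dataclass
class Plan:
    intent: str
    context: dict   # enriched Space context + history
    tools: list     # which tools the plan will use
    model: str      # which model the router hands off to


def classify(prompt: str) -> str:
    """Step 1: what kind of ask is this? (toy keyword heuristic)"""
    lowered = prompt.lower()
    if any(word in lowered for word in ("send", "schedule", "transfer")):
        return "action"
    return "long_form" if len(prompt.split()) > 50 else "quick_answer"


def enrich(prompt: str, space: dict) -> dict:
    """Step 2: attach Space context and history."""
    return {
        "prompt": prompt,
        "space": space.get("name"),
        "history": space.get("history", []),
    }


# Hypothetical routing table: cheap paths for cheap asks.
ROUTES = {
    "quick_answer": ("cached-retrieval", []),
    "action": ("approval-aware", ["audit_log"]),
    "long_form": ("long-context-frontier", ["retrieval"]),
}


def plan_and_route(prompt: str, space: dict) -> Plan:
    """Steps 3-4: pick tools and a model. No model is called until here."""
    intent = classify(prompt)
    model, tools = ROUTES[intent]
    return Plan(intent=intent, context=enrich(prompt, space), tools=tools, model=model)
```

Under these assumptions, `plan_and_route("What time zone is Berlin in?", {"name": "demo"})` resolves to the cached retrieval path without ever touching a foundation model.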
Why this matters

Better answers, smaller bills

Right answer, right path

A "what time zone is Berlin in" question doesn't cost a GPT-4 call. An M&A memo doesn't land on a 7B model.

Explicit reasoning

The plan is inspectable: you can see what the Platform thought you meant and correct it if it guessed wrong.
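An inspectable plan might look something like the sketch below. The field names and values are assumptions for illustration, not the Platform's actual schema.

```python
# Hypothetical shape of an emitted plan; every field name is illustrative.
plan = {
    "intent": "long_form",                    # what the Platform thought you meant
    "confidence": 0.92,
    "context": ["space:finance", "history:last_3_turns"],
    "tools": ["retrieval"],
    "model": "long-context-frontier",
    "steps": ["retrieve sources", "draft analysis", "cite"],
}

# Because the plan is plain data, it can be reviewed and corrected
# before any model is called.
assert plan["intent"] in ("quick_answer", "calculation", "search", "action", "long_form")
```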

Less latency, less spend

Customers see a 40-60% reduction in token spend once intent routing is tuned for their Spaces.

Where it pays off

Intent at work in production

FAQ deflection

Simple lookups route to a cached retrieval path and never touch a foundation model.

Document analysis

Long-context tasks go to a model with the right context window instead of being chunked awkwardly.

Safety-critical asks

Anything classified as regulated-advice gets the citation-required template before model selection.

Action requests

"Send," "schedule," and "transfer" requests trigger the approval-aware path with a guaranteed audit trail.

See intent routing on your own prompts.

Bring a dozen real prompts from your org. We'll run them through the intent classifier live and show you the plan each one produces.