How we work

Most AI engagements fail in the shape of the contract, not the shape of the technology. A vendor pitches transformation, the buyer signs a six-month proposal based on a slide deck, and everyone discovers the real problem somewhere around month three. We do it in reverse: a short, fixed-fee scoping sprint that produces a working pilot and a written plan, before anyone commits to a bigger engagement.

This page describes how that works — start to finish, honestly.

The 2-week scoping sprint

A scoping sprint is two weeks. Fixed scope, fixed fee, fixed end date. Our founding engineers pair with one or two of your people — usually a product or ops lead plus whoever holds the P&L for the workflow we're looking at. We ask for around four hours a week of your time — not for you to do build work, but to iterate with us on what's in your head. Scoping sprints are how we close the gap between your mental model and something deployable; that gap only shrinks when you're in the conversations. That time is mostly short working sessions, plus access to one real workflow we can actually dig into.

What you get at the end of the two weeks:

  • A deployable pilot. Not a mock-up, not a slide, not a Figma. Something running in a governed sandbox that connects to a real (or realistically stubbed) system and completes a real task end-to-end. You can watch it work.
  • A written next-stage plan. Scope, risks, the integrations that turned out to matter, the ones that didn't, a rough shape for the production pilot, and where the governance boundaries sit. Plain English.
  • A decision rationale. Every meaningful call we made during the sprint — why we chose this workflow over that one, why this architecture over the obvious one, what we'd do differently next time — written down so you can argue with it later, not just trust it.

We price the sprint as a fixed fee because we think proposal-based pricing warps incentives on both sides. A fixed fee means we're paid to produce the output, not to run down the clock. It also means you can say no to the next stage without feeling like you've wasted a procurement cycle.

The numbers live in the commercial conversation, not on a page. If you want a sense of the shape, 15 minutes with the team is usually enough.

How a real engagement flows

Discovery → scoping sprint → production pilot → full deployment → steady-state support. You can stop at any handover, and we design the sprint so stopping is a real option, not a threat.

  • Discovery is a 30-minute conversation. We work out whether the problem you've described is the one you actually have, and whether it's something we can usefully help with. When the fit isn't right we'll say so — and where we can, point you at someone who can.
  • Scoping sprint is the two weeks above.
  • Production pilot typically runs 4-8 weeks. We harden the sprint output, wire it into the real systems it needs, and put it in front of real users with real data and a real audit trail. The pilot has a defined exit criterion — not "it feels good" but a measurable thing you can point to.
  • Full deployment runs from 12 weeks upwards depending on integration surface. This is where the work becomes less about the AI and more about change management — routing rules, supervisor training, the governance policies you want enforced, the escalation paths when the model gets it wrong.
  • Steady-state is whatever ongoing support and evolution looks like once the thing is live. We prefer small retained teams over big anonymous delivery organisations; it keeps the people who built the system close to the people using it.

The handover between scoping and delivery is a single document and a single conversation. Same engineers, same product, same governance boundary — continuity is the point.

What “governed AI” actually means

Governance is the reason we exist and the thing most AI pitches skip. Concretely, it means five things:

  • An immutable audit trail. Every action an AI agent takes — every system it called, every decision it made, every piece of data it touched — is logged in a way that can't be retrospectively edited. Your auditor and your regulator can reconstruct what happened, when, and why. That's Coherence, our governance gateway.
  • Policy at the boundary, not in prompts. Rate limits, approval thresholds, which systems the agent can touch, which it can't, who has to sign off when — these live as enforceable policy, not as please-and-thank-you instructions buried in a system prompt. Prompts are suggestions. Policy is enforced.
  • Versioned contracts for side-effect surfaces. Anywhere our agents send outbound email, write to your CRM, or touch external systems, the interaction operates under a versioned Atlas contract — schema + policy + audit trail in one reviewable artefact. Changing a rule bumps the contract to a new version and creates an audit record. It's the same pattern we'd recommend you adopt for your own agents, and it's the pattern our regulated clients use as the control surface their PRA, FCA, or equivalent regulators look at.
  • A rollback path. Every action is reversible, or has a clearly marked point at which reversibility stops. When an AI-driven decision turns out to be wrong, there's a defined way to undo it and a defined way to make sure it doesn't happen again.
  • Explicit data boundaries. Your business data is processed in-flight by our agents but never persisted by us — it stays in your systems. Data flowing through our pipelines is contractually prohibited from being used to train vendor models. The only thing we store is scrubbed telemetry: enough metadata to trace an action back to your source systems for audit, nothing more. GDPR-sensitive fields and business content are stripped before anything touches our logs.
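The pattern the list above describes can be sketched in a few lines: a gateway that checks each agent action against a versioned contract (schema plus policy), records it in a hash-chained audit log stamped with the contract version, and blocks anything the contract does not permit. This is an illustrative toy under our own assumptions, not the actual Atlas or Coherence API; every name in it is hypothetical.

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class Contract:
    """Hypothetical versioned contract: policy as data, in one reviewable artefact."""
    version: int
    allowed_actions: frozenset        # which side-effect surfaces the agent may touch
    approval_threshold: float         # actions above this value require human sign-off


class Gateway:
    """Hypothetical governance gateway: enforce policy at the boundary, log everything."""

    def __init__(self, contract: Contract):
        self.contract = contract
        self.audit_log = []           # append-only; each entry hashes the previous one

    def execute(self, agent: str, action: str, value: float) -> str:
        allowed = (action in self.contract.allowed_actions
                   and value <= self.contract.approval_threshold)
        self._record(agent, action, value, allowed)
        if not allowed:
            raise PermissionError(
                f"{action} blocked by contract v{self.contract.version}")
        return f"{action} executed"

    def _record(self, agent, action, value, allowed):
        # Chain each entry to the previous entry's hash, so retroactive
        # edits anywhere in the log break every later hash.
        prev = self.audit_log[-1]["hash"] if self.audit_log else "genesis"
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent, "action": action, "value": value,
            "allowed": allowed,
            "contract_version": self.contract.version,
            "prev": prev,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.audit_log.append(entry)
```

Note that the policy lives in the `Contract` object, not in any prompt: bumping the version means constructing a new frozen contract, and the old one stays referenced by every audit entry written under it.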

Governance should be invisible when everything is working and obvious the moment something isn't. That's the bar.

Two markets, same spine

Our beachhead is specialty insurance — carriers, brokers, Lloyd's-market infrastructure — because regulation is the hard part and we've made it easy. That expertise earns us the right to operate in adjacent regulated industries as we expand.

We're also building a small-and-medium-business offering — the same operating-system spine, shaped for firms our own size. Enterprise-grade governance, audit trail, and workflow automation at a price a small business can actually adopt. Agents connecting to the tools SMBs already run: HubSpot, Xero, the Microsoft 365 stack, bespoke applications, and the traditional systems nobody talks about on stage but everyone still runs on (DB2, SQL Server, Databricks). The technology is the same; the packaging is different.

If you run a firm that's too small for a Big-Four rollout but too serious to ignore governance, keep an ear out — this track is shaping up for you.

Small by design, scalable by architecture

When you engage us, the people scoping the work are the people building it — three founders plus advisors, amplified by the same agent mesh we sell our clients. We don't stack juniors between you and the person making decisions; agents do the work that doesn't need human judgement (integration boilerplate, documentation, compliance housekeeping), humans do the work that does. That's the point of governed AI done right — it compounds the team rather than replacing it.

Capacity scales with the architecture, not with headcount. That's by design, not by accident.

Why we start small

We'd rather ship a small thing that works than a big thing that might. Starting with one workflow, one team, one measurable outcome means you can actually tell whether AI is helping — and why. Once one thing is running cleanly, overlaying the next is faster, cheaper, and the governance surface you've already built carries over. The cost is front-loaded; the value compounds.

New use cases layer on the same spine instead of starting fresh each time. That's the advantage of a platform approach over a project approach — each engagement leaves behind infrastructure the next one can use.

What happens next

If this shape sounds right, the next step is a 15-minute call with the team. No slides, no pre-read, just a conversation about whether there's a real problem here we can usefully help with. If we can't help, we'll say so and — where we can — point you at someone who can.

Book it here, or drop a note to hello@synapsedx.ai.
