Short answers to the things people want to know before a first conversation. If your question isn't here, ask Moss on the homepage or drop us a note.
We'd rather give you an honest answer than a polished guess. If something here is wrong, out of date, or just not useful — tell us and we'll fix it.
Usually within 2–3 weeks of a signed scoping-sprint agreement — sometimes faster if diaries align. Our team spans US-Midwest and UK business hours, so most working days someone's available within a few hours of your message. The first week is paperwork and access; the useful work starts the moment we've got what we need.
A 2-week scoping sprint is almost always the starting point. It produces a working pilot, a written plan for the next stage, and enough of a commercial picture to decide whether to keep going. Most clients move straight into a 4–8 week production pilot afterwards; some don't, and that's fine.
The scoping sprint is fixed-fee. Production pilots and full deployments are priced by shape and integration surface — there is no rate card because every engagement is different. Pricing conversations are an actual conversation, not a form. Grab 15 minutes, or drop a note and we'll set up a virtual coffee — whichever suits you.
That's what the scoping sprint is for. Two weeks, fixed fee, working output — designed so you can stop there without losing face or momentum. Stopping at the sprint is a real option we design for, not a worst case.
No. We build, we harden, we hand over the running system. If you do have a data-science function we will happily work with them, but we do not require you to have one — the whole point is that governed AI should run without a specialist minder.
It means we care about reliability, auditability, and the integration plumbing far more than the model. The model matters, but it's one component of a system that also includes governance, logging, routing, policy enforcement, and rollback. Boring is a feature — it's the difference between AI that works on a demo day and AI that works at 4pm on a Friday under audit.
Yes, on two tracks. Specialty insurance is our beachhead — we built our governance expertise there, and that's where our deepest domain reach sits. Small and medium businesses are the parallel track we're currently shaping: the same operating-system spine, tailored for firms our own size and plugged into what SMBs actually run (HubSpot, Xero, Microsoft 365, custom apps, and legacy DB2 / SQL Server / Databricks). What we deliberately don't take on: asset management, retail banking, consumer fintech, healthcare, public sector — not our domain expertise yet.
Copilot is a productivity layer on Microsoft apps. What we build is an operating layer across your business systems — the legacy platforms, the APIs, the workflows that don't live inside Microsoft 365. We're not a competitor to Copilot; our Command product works inside Teams and as a Copilot extension. We're what runs on the other side of the conversation when you ask something that touches a real business system.
Rarely, and only inside clearly-defined policy boundaries. Most of what we build is internal — tools used by underwriters, claims handlers, ops teams, and administrators. Consumer-facing AI has a different risk profile and usually benefits from a different kind of firm.
No. We ship. A strategy deck with no path to production is the opposite of what we do.
You own the data, the contracts we build for your processes, and the governed business logic. We own our underlying products (Command, Coherence, Atlas, Mesh, ISAAC). Models themselves are operated by the model providers — we don't train our own foundation models. The IP split is spelled out explicitly in the engagement paperwork; there are no surprises buried in an appendix.
In your systems, in the right region — and it stays there. Our governance gateway processes data in-flight and never persists your business data. Data residency follows your clients: UK data centres for EU / UK engagements, US data centres for Americas engagements. What we do store is scrubbed telemetry — only the metadata needed to trace an action back to your source systems for audit. GDPR-sensitive fields and business content are redacted before anything is logged.
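If it helps to picture what "scrubbed telemetry" means in practice, here is a deliberately simplified sketch (the field names and helper are illustrative, not our production gateway code):

```python
import hashlib

# Illustrative only: hypothetical field names, not our real schema.
SENSITIVE_FIELDS = {"policyholder_name", "email", "claim_notes"}

def scrub_for_telemetry(event: dict) -> dict:
    """Keep only the metadata needed to trace an action back to source
    systems; business content never reaches the log."""
    scrubbed = {}
    for key, value in event.items():
        if key in SENSITIVE_FIELDS:
            # A one-way fingerprint lets auditors correlate records
            # without the log ever holding the underlying data.
            digest = hashlib.sha256(str(value).encode()).hexdigest()[:12]
            scrubbed[key] = f"redacted:{digest}"
        else:
            scrubbed[key] = value
    return scrubbed
```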
No. Our contracts with model providers explicitly prohibit using data that flows through our pipelines for training or improvement of their models. This applies whether we're routing to Claude, OpenAI, or any open-source model we deploy — if a provider changes that policy, we switch or stop using them.
Every current LLM hallucinates sometimes — no vendor (including us) can promise otherwise. What we can promise is the system around the model: four layers that catch errors before they cause damage. The policy layer stops actions that shouldn't happen, regardless of what the model suggested. The judge layer (LLM-as-judge plus agent-to-agent ratification) cross-checks consequential answers against policy before anything downstream moves; cross-checking is cheap, being wrong silently is expensive. The audit layer records exactly what was said and done, so you can inspect the chain when something looks off. The rollback layer lets you reverse actions that slipped through. Ultimate responsibility for the deployed decision sits with you — our job is to build the system that makes that responsibility manageable. We can't eliminate hallucinations; we can narrow the blast radius when one happens.
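As a rough picture of how the four layers fit together, here is a schematic sketch in Python; every name is hypothetical and the real gateway is considerably more involved:

```python
def governed_action(request, suggest, policy_allows, judge_ratifies,
                    execute, record, register_rollback):
    """Schematic only: each argument stands in for a whole subsystem."""
    proposal = suggest(request)

    # 1. Policy layer: refuse actions that shouldn't happen, no matter
    #    what the model suggested.
    if not policy_allows(proposal):
        record(request, proposal, "blocked_by_policy")
        return None

    # 2. Judge layer: cross-check consequential answers before anything
    #    downstream moves.
    if not judge_ratifies(proposal):
        record(request, proposal, "failed_ratification")
        return None

    # 3. Audit layer: log exactly what was decided and done.
    outcome = execute(proposal)
    record(request, proposal, outcome)

    # 4. Rollback layer: keep a compensating action on file so the step
    #    can be reversed if it later looks wrong.
    register_rollback(proposal, outcome)
    return outcome
```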
Every action an agent takes is logged in an immutable trail — system called, parameters used, decision made, outcome, timestamp, and the human who authorised it. That log is queryable by your compliance team and exportable for regulators. We don't claim specific certifications we don't hold; we design to the same standards auditable financial-services systems are built to.
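To make that concrete, a single entry has roughly this shape (a hypothetical sketch, not our actual schema):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)  # frozen: entries are never mutated once written
class AuditEntry:
    system_called: str     # e.g. "policy-admin-api" (hypothetical name)
    parameters: dict       # scrubbed inputs passed to that system
    decision: str          # what the agent decided to do
    outcome: str           # what actually happened
    timestamp: datetime
    authorised_by: str     # the human who authorised the action
```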
We're model-agnostic — we pick the best one for the job, and we switch when a better one shows up. That usually means Claude (Anthropic) or one of the OpenAI models, plus curated open-source options when latency, cost, or data locality argues for it. We deliberately avoid models whose training, hosting, or data-handling provenance we can't verify to the standard our regulated clients need. Our gateway routes per-workflow, not per-vendor, so your stack isn't locked to any single model provider. And if you'd rather use your own model accounts — or have existing enterprise agreements with a provider — we can route through those directly; see the next question.
Yes. By default we route model calls through our own vendor accounts — simpler for procurement, one contract to sign, no model-billing relationship for you to manage. But if you'd rather use your own Anthropic, OpenAI, or other enterprise model agreement, we can configure our gateway to route through your account instead. You get direct billing from the provider, your own negotiated rates, and your own vendor-level audit trail. The governance, policy, and audit layer we sit in front of the model stays the same either way.
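Conceptually, the routing table looks something like this (hypothetical keys and workflow names, not our real config format):

```python
# Per-workflow routing, not per-vendor: each workflow picks its own
# provider, and each can run on our account or yours.
ROUTING = {
    "claims-triage": {
        "provider": "anthropic",
        "credentials": "synapse",   # default: our vendor account
    },
    "bordereaux-extraction": {
        "provider": "openai",
        "credentials": "client",    # your enterprise agreement: direct
                                    # billing, your negotiated rates,
                                    # your vendor-level audit trail
    },
}
```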
We treat every model version as a dependency, not a feature. New versions go into a parallel environment and we evaluate on the workload that matters to you before anything changes in production. If a new model wins on your use case, we switch; if not, we don't. Either way, it's a managed change — clients aren't surprised by model behaviour shifting underneath them.
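In spirit, the evaluation gate is as simple as this (hypothetical helper names; the real harness measures far more than a single score):

```python
def should_promote(candidate, incumbent, workload, score) -> bool:
    """Run both model versions on the client's own workload; promote
    the new one only if it clearly wins."""
    wins = sum(score(candidate(case)) > score(incumbent(case))
               for case in workload)
    return wins / len(workload) > 0.5
```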
That's the normal state of the world, not an exception. Our gateway connects to whatever you have now, and when you migrate we reconnect to whatever you have next — the contracts, audit trail, and governance move with you. Mid-migration is sometimes the best time to engage, because it forces a clean conversation about which integrations are worth rebuilding and which aren't.
Almost certainly yes, if it has an API or a defined data export. We've connected to the obvious enterprise systems (ERP, CRM, policy admin, claims platforms) and the specialty-insurance ones (placing platforms, broker management systems, bordereaux tooling). If you're using something genuinely obscure, the honest answer is: tell us what it is and we will say yes or no on the call.
No. Our governance gateway calls your existing systems via their existing APIs or data exports. We do not require code changes on your side — if a workflow needs specific plumbing, we build it on our side of the boundary.
No. We augment them. The model your actuaries signed off on stays in place; what we add is the layer around it — triage, routing, enrichment, audit, and the ability to use the model at the speed of modern workflows. Replacing validated pricing models is a regulated act we wouldn't take on lightly.
Completely normal — and often the best signal that a real conversation is worth having. Stalled pilots usually teach us more about what actually matters in your environment than a greenfield discovery would. Bring the post-mortem to the call if you have one.
Three founders plus advisors, operations and support staff, amplified by a mesh of AI agents we run ourselves — small by design, scalable by architecture. Succession planning is part of every engagement: contracts, documentation, and governance artefacts are authored so another engineer (ours or yours) can pick up the system. The people who built it are the people supporting it; if that changes, the handover is a defined process, not a scramble.
London headquarters, with a distributed core team across the UK and US-Midwest. Most client work is hybrid — video for weekly cadence, in-person for the sessions that actually benefit from a whiteboard.
Reference calls happen under mutual NDA and only with the client's explicit consent, so not instantly — but yes, in the right circumstances. The team can arrange it once an engagement is realistic enough that a reference call is a serious part of the decision.
Occasionally, and deliberately slowly. The best route is a short note to hello@synapsedx.ai with a paragraph on what is drawing you to us specifically — we read everything.
Still have a question? Ask Moss on the homepage, or email hello@synapsedx.ai.