VerifiedX gives you peace of mind that your autonomous agents can keep doing useful work without taking the kind of high-impact bad action that can hurt your business, your users, or your operations. It checks whether a meaningful agent action or durable memory write is justified by trustworthy evidence before it happens.

The core question is not just “is this suspicious?” It is: why is the agent about to do this exact thing with these exact parameters, and should that side effect happen?

VerifiedX sits at the action boundary. Your runtime, orchestrator, prompts, tools, and memory stay the same. It observes the run, preflights protected side effects before they land, and returns a machine-readable decision plus a decision receipt your system can act on.

What it does

Once integrated, VerifiedX automatically observes the runtime context it needs and preflights protected side effects such as:
  • Memory writes
  • Record mutations
  • System changes
  • External messages
  • Internal notifications, handoff notes, and workflow updates
  • Webhook calls
  • File writes
  • Database mutations
  • Queue publishes
  • Delegation and handoffs
This covers both adversarial and non-adversarial failures. Prompt injection is one cause, but so are weak grounding, stale context, drift, overeagerness, and retrying the wrong action without a meaningful new basis.
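As an illustration, a preflight boundary is a guard that runs before the side effect lands, not after. The sketch below is hypothetical: the `Decision` type, the `preflight` function, and its placeholder policy stand in for whatever the SDK actually exposes, purely to show the shape of the pattern.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for the SDK's decision types;
# the real VerifiedX API may differ.
@dataclass
class Decision:
    outcome: str  # "allow", "allow_with_warning", "replan_required", "goal_fail_terminal"
    reasons: list = field(default_factory=list)

def preflight(action: str, params: dict) -> Decision:
    # Placeholder policy for the sketch: only allow outbound
    # email to a verified domain.
    recipient = params.get("recipient", "")
    if action == "send_email" and not recipient.endswith("@example.com"):
        return Decision("replan_required", ["recipient domain is not verified"])
    return Decision("allow")

def guarded_send_email(params: dict) -> bool:
    """Preflight the side effect; only execute it on an allow outcome."""
    decision = preflight("send_email", params)
    if decision.outcome in ("allow", "allow_with_warning"):
        # ... actually send the email here ...
        return True
    return False  # side effect suppressed; the agent should replan
```

The point of the pattern is that the check happens at the boundary, so the blocked action never executes, rather than being flagged in a log afterward.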

What to expect from a decision

When a protected side effect is about to happen, VerifiedX returns one of four outcomes:
  • allow — the action is sufficiently justified and can proceed.
  • allow_with_warning — the action can proceed, but VerifiedX attaches a meaningful caution and a receipt.
  • replan_required — this exact action should not happen now, but the agent should keep working toward the same goal through a safer or better-grounded next step.
  • goal_fail_terminal — last resort. VerifiedX uses this only when the goal itself depends on a clearly forbidden, unsafe, malicious, or illegitimate action and there is no safe path forward.
Every outcome still comes with a structured decision receipt, reasons, and next-step guidance.
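A consumer of these decisions might branch on the four outcomes roughly as follows. This is a sketch only: the dict shape and the `outcome` field name are illustrative assumptions, not the SDK's actual receipt schema.

```python
def handle_decision(decision: dict) -> str:
    """Map a VerifiedX outcome to a control-flow step.

    The decision shape here is an assumption for illustration;
    consult the SDK docs for the real receipt schema.
    """
    outcome = decision["outcome"]
    if outcome == "allow":
        return "execute"
    if outcome == "allow_with_warning":
        # Proceed, but surface the caution and keep the receipt.
        return "execute_and_log_warning"
    if outcome == "replan_required":
        # Same goal, different next step.
        return "replan"
    if outcome == "goal_fail_terminal":
        # Fail truthfully; hand off to a human or upstream workflow.
        return "halt_and_escalate"
    raise ValueError(f"unknown outcome: {outcome}")
```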

What happens when an action is wrong

VerifiedX is optimized to stop the bad action, not the agent. If the action itself is wrong but the goal is still valid, VerifiedX returns a replan decision with reasons, what not to do, and safe next steps so the system can keep moving toward the same goal state.

In single-agent systems, that replan stays local first: the runtime loopback gives the same agent the decision, the reasons, and the next-step guidance so it can continue without executing the bad action. In composed systems, VerifiedX first gives the current node a chance to recover locally. If the blocker is still unresolved, the decision receipt can route the replan upstream so your orchestrator knows this stage needs an upstream fix before retry.

goal_fail_terminal is the absolute last resort. Even then, VerifiedX still returns a structured decision receipt with exact reasons, routing, and resume semantics so your system can fail truthfully and hand the situation back to a human or upstream workflow cleanly.
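The local-first recovery flow can be sketched as a small loop. Everything here is an assumption for illustration: `attempt_action`, `preflight`, the `next_step` field, and the return statuses are hypothetical names standing in for the real SDK and orchestrator wiring.

```python
def run_with_replan(attempt_action, preflight, max_local_retries=2):
    """Local-first replan loop (illustrative, not the real SDK).

    Try the action; on a replan decision, let the same agent retry
    locally with the guidance. If the blocker persists after the
    local budget, route the receipt upstream instead of dead-ending.
    """
    decision = {"outcome": "replan_required"}
    for _ in range(max_local_retries + 1):
        action = attempt_action()
        decision = preflight(action)
        if decision["outcome"] in ("allow", "allow_with_warning"):
            return {"status": "executed", "action": action}
        if decision["outcome"] == "goal_fail_terminal":
            # No safe path forward; fail truthfully with the receipt.
            return {"status": "failed", "receipt": decision}
        # replan_required: feed guidance back for a safer next step
        # (hypothetical "next_step" field carries a replacement plan).
        attempt_action = decision.get("next_step", attempt_action)
    return {"status": "escalate_upstream", "receipt": decision}
```

The design point is the ordering: the current node spends its local retry budget first, and only an unresolved blocker crosses the orchestrator boundary.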

Why builders use it

  • Integration in a few lines of code
  • Machine-readable decision receipts
  • Clear reasons and next steps
  • Decision logs, usage, keys, and billing in the builder app
  • Strong bias toward continuing the workflow safely instead of dead-ending it

What VerifiedX is not

  • Not an orchestrator
  • Not a replacement for your runtime
  • Not a memory system or workflow engine
  • Not just post-hoc logging
  • Not a generic AI safety, observability, or framework-rewrite product
VerifiedX adds a preflight boundary around the moments that can actually change the world.

Supported runtimes and adapters

Surface                            Languages
Raw runtime                        Python, TypeScript
OpenAI direct                      Python, TypeScript
OpenAI Agents SDK                  Python, TypeScript
Anthropic direct                   Python, TypeScript
Claude Agent SDK / query surface   Python, TypeScript
LangGraph                          Python, TypeScript
LangChain                          Python, TypeScript
Vercel AI SDK                      TypeScript
MCP tools                          Python, TypeScript

Free Sandbox

The Free Sandbox includes 250 protected action checks per month.

Next steps

Quick Start

Protect your first boundary in a few minutes.

Python SDK

Use the raw runtime or a native Python adapter.

TypeScript SDK

Use bindHarness(...), native adapters, or lower-seam fallbacks.

How it works

See taint probe, preflight, receipts, execution reporting, and logs end to end.