Best for: harnesses you own, when you want the smallest truthful Python integration.
If your app already calls OpenAI or Anthropic directly, or uses the OpenAI Agents SDK, Claude Agent SDK, LangGraph, LangChain, or MCP, prefer that framework's native adapter page instead of the raw runtime.

Install

pip install verifiedx

Net-new VerifiedX code

This is the real VerifiedX delta in a custom Python harness.
from verifiedx import init_verifiedx

class WorkflowNode:
    def call_model(self, messages):
        ...

    def lookup_internal_workflow(self, workflow_id):
        ...

    def set_workflow_status(self, workflow_id, status):
        ...

vx = init_verifiedx()
node = WorkflowNode()

vx.install_runtime(
    node,
    llm={"call_model": "gpt-5.4-mini"},
    retrievals={"lookup_internal_workflow": "internal workflow evidence"},
    actions={"set_workflow_status": "set_workflow_status"},
)
That is the important part. You keep your existing harness and declare which business methods should be treated as LLM calls, retrievals, memories, tools, or actions.

Two-step binding

If you want to install the Python runtime once and bind harnesses later, use the two-step path:
from verifiedx import init_verifiedx

vx = init_verifiedx()
vx.install_runtime()

vx.bind_harness(
    node,
    llm={"call_model": "gpt-5.4-mini"},
    retrievals={"lookup_internal_workflow": "internal workflow evidence"},
    actions={"set_workflow_status": "set_workflow_status"},
)
Use install_runtime(node, ...) for the fewest lines. Use install_runtime() plus bind_harness(...) when your process owns multiple nodes or you want runtime patching to happen earlier in startup.

What the runtime already captures by default

When you call install_runtime(), VerifiedX already turns on lower-seam capture for common real-world side effects, including:
  • File writes through builtins.open
  • Async file writes through aiofiles
  • HTTP and network calls through urllib, requests, and httpx
  • Provider and runtime capture for OpenAI, Anthropic, and LiteLLM
  • AWS runtime client calls through botocore
  • Database mutations through sqlite3, psycopg, psycopg2, and SQLAlchemy
  • Queue publishes through kombu
  • Browser client actions through Playwright
That means you do not need to enumerate every low-level side effect for VerifiedX to start working. Use explicit method binding when you want your own business methods to become the named boundary.
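To make "lower-seam capture" concrete, here is a miniature, standalone sketch of the pattern: wrapping a low-level seam (builtins.open in this case) so that side effects are observed without the calling code changing. This is an illustration of the general technique only, not the VerifiedX implementation.

```python
# Miniature sketch of lower-seam capture: patch a low-level seam so
# mutating calls are recorded, while caller code stays unchanged.
import builtins
import os
import tempfile

captured = []                       # events recorded by the patched seam
_original_open = builtins.open

def _patched_open(file, mode="r", *args, **kwargs):
    # Record only mutating opens, the way a capture layer flags writes.
    if any(flag in mode for flag in ("w", "a", "x", "+")):
        captured.append({"seam": "builtins.open", "file": str(file), "mode": mode})
    return _original_open(file, mode, *args, **kwargs)

builtins.open = _patched_open
try:
    path = os.path.join(tempfile.mkdtemp(), "note.txt")
    with open(path, "w") as f:      # ordinary application code, unchanged
        f.write("hello")
finally:
    builtins.open = _original_open  # always restore the seam

print(captured)  # one event for the write-mode open
```

The read-mode opens are ignored on purpose: capture layers typically care about side effects, not every call through the seam.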

When to bind methods explicitly

Explicit binding is how you promote your own harness methods into first-class VerifiedX boundaries. That lets you choose exactly which methods you want to name and preflight under:
  • llm
  • retrievals
  • actions
  • memories
  • tools
This is useful when you want:
  • A business-level boundary like set_workflow_status instead of only the lower-seam requests.patch
  • A memory boundary like remember_customer_preference instead of only the underlying storage write
  • Better receipts with your own tool_name
  • Optional schema and docstring metadata on the boundary
  • Protection for custom in-process methods that might not otherwise hit an auto-patched lower seam

Binding categories

Use the smallest surface that matches what the method really does.

llm

Declare model-call methods here. You can use a dict or the string shorthand.
llm={"call_model": {"model_name": "gpt-5.4-mini"}}
llm={"call_model": "gpt-5.4-mini"}

retrievals

Declare reads and lookups that provide context to the model here. Internal reads can use the string shorthand:
retrievals={"lookup_internal_workflow": "internal workflow evidence"}
For public-web or weaker outside context, set object_type to external_retrieval:
retrievals={
    "search_public_web": {
        "query": "search public web",
        "object_type": "external_retrieval",
    },
}
You can also use the tuple shorthand:
retrievals={
    "search_public_web": ("search public web", "external_retrieval"),
}
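The three retrieval spellings above (dict, string, tuple) describe the same binding. As a rough mental model, here is a standalone normalizer that maps each shorthand to the full dict form. The field names mirror the examples above, but the default internal object_type value is an assumption, and this function is not VerifiedX internals.

```python
# Hypothetical normalizer showing how the retrieval shorthands could
# collapse into one dict shape. "retrieval" as the assumed default
# object_type for internal reads is a guess for illustration.
def normalize_retrieval(spec):
    if isinstance(spec, str):
        # String shorthand: the string is the query description.
        return {"query": spec, "object_type": "retrieval"}
    if isinstance(spec, tuple):
        # Tuple shorthand: (query, object_type).
        query, object_type = spec
        return {"query": query, "object_type": object_type}
    return dict(spec)  # already the full dict form

print(normalize_retrieval("internal workflow evidence"))
print(normalize_retrieval(("search public web", "external_retrieval")))
```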

actions

Declare high-impact side effects here, such as record mutations, system changes, internal or external messages, webhooks, and other writes.
actions={
    "set_workflow_status": {
        "tool_name": "set_workflow_status",
    },
    "send_external_email": {
        "tool_name": "send_external_email",
    },
}
String shorthand also works:
actions={
    "set_workflow_status": "set_workflow_status",
    "send_external_email": "send_external_email",
}
Each declared action gets an action_execute preflight before it runs.
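The shape of that preflight can be sketched as a plain decorator: the check runs before the side effect, and a blocked method never executes. The real runtime installs this for you when you declare the binding; the policy function below is a stand-in assumption, not the VerifiedX decision engine.

```python
# Standalone sketch of the action-preflight pattern: check first,
# execute only if allowed. Illustrative, not VerifiedX internals.
from functools import wraps

def preflight(policy):
    def wrap(fn):
        @wraps(fn)
        def guarded(*args, **kwargs):
            decision = policy(fn.__name__, args, kwargs)  # runs BEFORE the side effect
            if decision == "replan_required":
                return {"decision": decision, "executed": False}
            result = fn(*args, **kwargs)
            return {"decision": decision, "executed": True, "result": result}
        return guarded
    return wrap

def demo_policy(name, args, kwargs):
    # Stand-in policy: block external email, allow everything else.
    return "replan_required" if name == "send_external_email" else "allow"

@preflight(demo_policy)
def set_workflow_status(workflow_id, status):
    return f"{workflow_id}:{status}"

@preflight(demo_policy)
def send_external_email(to, body):
    raise RuntimeError("never runs when the preflight blocks it")

print(set_workflow_status("WF-2203", "done"))
print(send_external_email("x@example.com", "hi"))
```

Note that the blocked method's body never raises: the wrapper returns the decision instead of executing the side effect, which is the property the doc describes.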

memories

Declare durable memory writes here.
memories={
    "remember_customer_preference": {
        "tool_name": "remember_customer_preference",
    }
}
String shorthand also works:
memories={"remember_customer_preference": "remember_customer_preference"}
Each declared memory method gets a memory_write preflight before it runs.

tools

tools is still supported for general tool wrapping and tool history.
tools={"sync_customer_crm": {"tool_name": "sync_customer_crm"}}
String shorthand also works:
tools={"sync_customer_crm": "sync_customer_crm"}
If a method is definitely a durable memory write or a high-impact side effect, prefer memories or actions. Use tools for general helper wrapping when the real protected boundary may still be caught lower in the runtime.

Optional metadata

You can attach schema and docstring metadata to actions, memories, and tools bindings for better boundary context and better receipts.
actions={
    "set_workflow_status": {
        "tool_name": "set_workflow_status",
        "docstring": "Update an internal workflow record.",
        "schema": {
            "type": "object",
            "properties": {
                "workflow_id": {"type": "string"},
                "status": {"type": "string"},
            },
            "required": ["workflow_id", "status"],
        },
    }
}

Composed systems

If this node is part of a larger multi-agent or agent+human workflow, pass upstream context into VerifiedX so the current node has better system and situational awareness before it takes a high-impact action. This is useful when a supervisor agent, parent workflow, or human reviewer already has context that the current node should use before taking action. VerifiedX does not require a fixed schema for this. Pass the upstream context you already have in any JSON-serializable shape.
upstream = {
    "source": "workflow_supervisor",
    "workflow_id": "WF-2203",
    "approval_status": "approved_with_follow_up",
    "human_review": {
        "reviewer": "ops_lead",
        "result": "approved",
    },
    "prior_agent_output": {
        "summary": "Billing verification is complete.",
    },
}

with vx.with_upstream_context(upstream):
    node.set_workflow_status("WF-2203", "awaiting_human")
Upstream context is supporting workflow context from outside the current node. It is not proof that this node already executed any local action.

What to expect at runtime

Protected boundaries in the raw Python runtime can return:
  • allow
  • allow_with_warning
  • replan_required
  • goal_fail_terminal
Every outcome includes a structured decision receipt. In the production-style raw Python scenarios, VerifiedX exercises:
  • allow for grounded memory writes and internal handoff writes
  • allow_with_warning when public-web context is supplementary but still relevant to a safe internal next step
  • replan_required when an unsafe external email is blocked and the same goal continues through a safer internal Slack update
If a boundary is replanned, the bad side effect does not execute. The agent gets loopback guidance and a decision receipt so it can continue safely or route the receipt upstream when needed.
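A harness that consumes these outcomes typically branches on the decision in the receipt. Here is a minimal sketch of such a router; the receipt keys used here (decision, guidance) are assumptions for illustration, not the exact VerifiedX receipt schema.

```python
# Sketch of routing on the four runtime outcomes. The receipt field
# names are assumed for illustration.
def route(receipt):
    decision = receipt["decision"]
    if decision == "allow":
        return "proceed"
    if decision == "allow_with_warning":
        return "proceed_and_log"
    if decision == "replan_required":
        # The side effect did not execute; pass loopback guidance to the agent.
        return f"replan: {receipt.get('guidance', 'choose a safer boundary')}"
    if decision == "goal_fail_terminal":
        return "stop_and_escalate"
    raise ValueError(f"unknown decision: {decision}")

print(route({"decision": "replan_required",
             "guidance": "use internal Slack update instead of external email"}))
```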

Pricing note

One protected action check equals one real boundary preflight. Taint, event ingest, execution reports, and decision reads are all included at that price. The Free Sandbox includes the raw runtime and every native adapter. VerifiedX does not replace your orchestrator or human workflow. It returns receipts your system can keep local, route downstream, or pass upstream.