Best for: teams using the OpenAI SDK directly with native tool loops.

Install

pip install verifiedx
This page assumes your app already uses the openai package in Python or TypeScript.

Net-new VerifiedX code

This is the actual VerifiedX delta in an existing OpenAI tool loop.
from verifiedx import attach_openai, create_openai_tool_dispatcher, init_verifiedx

verifiedx = init_verifiedx()
client = attach_openai(client, verifiedx=verifiedx)
dispatcher = create_openai_tool_dispatcher(
    verifiedx=verifiedx,
    tools=TOOL_DEFINITIONS,
    tool_handlers=TOOL_HANDLERS,
)

# Inside your existing loop:
tool_outputs = dispatcher(result, surface="responses")  # or surface="chat"
That is the important part. The rest of the example is your normal OpenAI client, native tool definitions, tool handlers, and loop code that you likely already have.
Your native tool surface is the config. VerifiedX uses your existing tool names, descriptions, schemas, and native OpenAI tool loop as the source of truth for what to preflight. If you want explicit actions and memories dictionaries, use the raw runtime instead.

Typical setup

Keep your existing OpenAI client and native OpenAI tool definitions.
from openai import OpenAI
from verifiedx import attach_openai, create_openai_tool_dispatcher, init_verifiedx

TOOL_DEFINITIONS = [
    {
        "type": "function",
        "function": {
            "name": "lookup_workflow",
            "description": "Look up an internal workflow before changing state.",
            "parameters": {
                "type": "object",
                "properties": {"workflow_id": {"type": "string"}},
                "required": ["workflow_id"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "set_workflow_status",
            "description": "Update internal workflow status.",
            "parameters": {
                "type": "object",
                "properties": {
                    "workflow_id": {"type": "string"},
                    "status": {"type": "string"},
                    "reason": {"type": "string"},
                },
                "required": ["workflow_id", "status", "reason"],
            },
        },
    },
]

TOOL_HANDLERS = {
    "lookup_workflow": lambda payload: {"ok": True, "workflow": payload},
    "set_workflow_status": lambda payload: {"ok": True, "workflow_updated": payload},
}

verifiedx = init_verifiedx()
client = attach_openai(OpenAI(), verifiedx=verifiedx)
dispatcher = create_openai_tool_dispatcher(
    verifiedx=verifiedx,
    tools=TOOL_DEFINITIONS,
    tool_handlers=TOOL_HANDLERS,
)
Do not use raw install_runtime(...) or bindHarness(...) for this path. Keep the native OpenAI client and native tools payload you already have.

Composed systems

If this OpenAI tool loop is part of a larger multi-agent or agent-plus-human workflow, pass upstream context into VerifiedX so the current run has better system and situational awareness before it takes a high-impact action. This is useful when a supervisor agent, parent workflow, or human reviewer already holds context the current run should see. VerifiedX does not require a fixed schema for this: pass the upstream context you already have in any JSON-serializable shape.
upstream = {
    "source": "workflow_supervisor",
    "workflow_id": "WF-2203",
    "approval_status": "approved_with_follow_up",
    "human_review": {
        "reviewer": "ops_lead",
        "result": "approved",
    },
    "prior_agent_output": {
        "summary": "Billing verification is complete.",
    },
}

with verifiedx.with_upstream_context(upstream):
    response = client.responses.create(
        model="gpt-5.4-mini",
        instructions="Use tools instead of prose for operational work.",
        input=[{"role": "user", "content": "Update WF-2203 to awaiting_human."}],
        tools=TOOL_DEFINITIONS,
    )

    tool_outputs = dispatcher(response, surface="responses")
Upstream context is supporting workflow context from outside the current run. It is not proof that this run already executed any local action.
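Because VerifiedX accepts any JSON-serializable shape here, a quick round-trip through json.dumps is a cheap way to catch payloads that will not serialize before entering the context manager. This is a generic sketch, not a VerifiedX API; only with_upstream_context comes from the example above.

```python
import json

upstream = {
    "source": "workflow_supervisor",
    "workflow_id": "WF-2203",
    "approval_status": "approved_with_follow_up",
}

# Fail fast if the payload is not JSON-serializable (sets, datetimes, and
# ORM objects are common offenders) before handing it to
# verifiedx.with_upstream_context(upstream).
encoded = json.dumps(upstream)
```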

Responses API

Use the same tool definitions and dispatch the selected tool calls back into the Responses loop.
response = client.responses.create(
    model="gpt-5.4-mini",
    instructions="Use tools instead of prose for operational work.",
    input=[{"role": "user", "content": "Update WF-1002 to awaiting_human."}],
    tools=TOOL_DEFINITIONS,
)

tool_outputs = dispatcher(response, surface="responses")

if tool_outputs:
    response = client.responses.create(
        model="gpt-5.4-mini",
        previous_response_id=response.id,
        input=tool_outputs,
        tools=TOOL_DEFINITIONS,
    )

Chat Completions

The same dispatcher also works for Chat Completions.
completion = client.chat.completions.create(
    model="gpt-5.4-mini",
    messages=[
        {"role": "developer", "content": "Use tools instead of prose for operational work."},
        {"role": "user", "content": "Update WF-1002 to awaiting_human."},
    ],
    tools=TOOL_DEFINITIONS,
    tool_choice="auto",
)

tool_outputs = dispatcher(completion, surface="chat")
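To continue the Chat Completions loop after dispatch, the native pattern is to append the assistant message that carried the tool calls, then the tool-result messages, and call the API again. The sketch below assumes the dispatcher returns tool outputs in the native Chat message shape ({"role": "tool", "tool_call_id": ..., "content": ...}); the literal values are placeholders for what completion.choices[0].message and dispatcher(completion, surface="chat") would return.

```python
messages = [
    {"role": "developer", "content": "Use tools instead of prose for operational work."},
    {"role": "user", "content": "Update WF-1002 to awaiting_human."},
]

# Placeholder for completion.choices[0].message from the first call.
assistant_message = {
    "role": "assistant",
    "tool_calls": [{"id": "call_1", "type": "function",
                    "function": {"name": "set_workflow_status", "arguments": "{}"}}],
}
# Placeholder for dispatcher(completion, surface="chat").
tool_outputs = [
    {"role": "tool", "tool_call_id": "call_1", "content": '{"ok": true}'},
]

# Native Chat continuation: assistant tool-call message first, then results.
messages.append(assistant_message)
messages.extend(tool_outputs)
# completion = client.chat.completions.create(
#     model="gpt-5.4-mini", messages=messages, tools=TOOL_DEFINITIONS)
```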

Async handlers and streaming

If your Python tool handlers are async, use:
tool_outputs = await dispatcher.async_dispatch(response, surface="responses")
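An async handler keeps the same contract as the sync TOOL_HANDLERS above: it takes the parsed tool arguments and returns a JSON-serializable result. A minimal sketch, with asyncio.sleep standing in for real awaited I/O:

```python
import asyncio

async def lookup_workflow(payload: dict) -> dict:
    # Stand-in for an awaited I/O call (database lookup, HTTP request, ...).
    await asyncio.sleep(0)
    return {"ok": True, "workflow": payload}

# Same mapping shape as the sync TOOL_HANDLERS shown earlier.
ASYNC_TOOL_HANDLERS = {"lookup_workflow": lookup_workflow}

async def main():
    handler = ASYNC_TOOL_HANDLERS["lookup_workflow"]
    return await handler({"workflow_id": "WF-1002"})

result = asyncio.run(main())
```

Pass the async mapping to create_openai_tool_dispatcher(...) as before, then use await dispatcher.async_dispatch(...) inside your loop.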
In TypeScript, the dispatcher is already async. TypeScript also patches the native stream surfaces:
  • client.responses.stream(...)
  • client.chat.completions.stream(...)
That keeps prompt context, tool selections, and final outputs in VerifiedX while preserving the native stream shape.

What the adapter already captures

VerifiedX keeps the native OpenAI surface intact and uses your existing tool definitions as the source of truth. On both SDKs it captures:
  • client.responses.create(...)
  • client.chat.completions.create(...)
  • Lookup and read tools as support inputs in run history
  • High-impact tool boundaries before the handler runs
  • Durable memory writes, record mutations, system changes, and external messages inferred from tool name, schema, and description
  • Native OpenAI Responses items such as function_call, custom_tool_call, shell_call, local_shell_call, mcp_call, mcp_list_tools, and MCP approval items
TypeScript also handles additional native Responses command items such as:
  • apply_patch_call
  • computer_call
  • file_search_call
  • web_search_call
The direct adapter also runs with the normal runtime lane underneath, so lower-seam coverage stays on as well.

What to expect at runtime

Tool boundaries can return:
  • allow
  • allow_with_warning
  • replan_required
  • goal_fail_terminal
Every outcome includes a structured decision receipt. If a tool is replanned, the side effect does not execute. The dispatcher returns the blocked result in the native OpenAI tool-output shape so the loop can continue safely.
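The dispatcher already returns blocked results in the native tool-output shape, so no extra handling is required to keep the loop running. If your surrounding code wants to react to outcomes anyway (log warnings, stop retrying on terminal failures), a branch over the four decision values is enough. The receipt shape here (a dict with a "decision" key) is an assumption for this sketch; check the actual structure of the decision receipts your deployment returns.

```python
def should_execute(receipt: dict) -> bool:
    """Map a decision receipt (assumed dict shape) to execute/skip."""
    decision = receipt["decision"]
    if decision in ("allow", "allow_with_warning"):
        return True  # proceed; optionally log the warning case
    if decision == "replan_required":
        return False  # side effect is skipped; loop continues with blocked output
    if decision == "goal_fail_terminal":
        return False  # stop pursuing this goal entirely
    raise ValueError(f"unknown decision: {decision}")
```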

Production-style validation coverage

The direct OpenAI validation paths in this repo cover real workflows including:
  • Clean memory writes
  • Clean system changes
  • Multi-step internal operations
  • External email attempts that replan into safer internal Slack updates
  • Adversarial external-email attempts that should not keep pushing the same unsafe action
  • Chat Completions record-mutation flows
  • Responses continuation turns that preserve prior tool history

Pricing note

One protected action check equals one real boundary preflight. Taint, event ingest, execution reports, and decision reads are all included at that price. The Free Sandbox includes every language, provider, framework, and adapter. VerifiedX does not replace your orchestrator or human workflow. It returns receipts your system can keep local, route downstream, or pass upstream.
For the raw runtime reference, see the Python SDK and TypeScript SDK. For the OpenAI Agents SDK surface, see the OpenAI Agents SDK page.