Best for: teams using LangGraph with Claude models through langchain_anthropic or @langchain/anthropic.

Install

pip install verifiedx

Net-new VerifiedX code

This is usually the only VerifiedX delta in an existing Claude-backed LangGraph setup.
from langchain_anthropic import ChatAnthropic
from verifiedx import init_verifiedx, install_langgraph

vx = init_verifiedx()
install_langgraph(verifiedx=vx)

# TOOLS and builder come from your existing LangGraph setup.
model = ChatAnthropic(model="claude-sonnet-4-6", temperature=0).bind_tools(TOOLS)

graph = builder.compile()
That is the important part. The rest of your LangGraph graph stays the same: your nodes, edges, ToolNode or manual tool loop, store, checkpointer, and state schema do not need to be redesigned.
Do not use the Anthropic direct adapter here. If LangGraph owns the graph and tool loop, use the LangGraph adapter.
Start with the graph that owns the highest-impact tool call or state update. Claude-backed LangGraph protection is broad once installed, but rollout can still be incremental.
Your native tool surface is the config. VerifiedX uses your existing tool names, descriptions, schemas, and LangGraph state/store surfaces as the source of truth for what to preflight.
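Because the preflight surface is just your tools' existing metadata, "configuring" VerifiedX amounts to the names, descriptions, and schemas you already maintain. A framework-free sketch of that idea, using plain dicts and a hypothetical extract_surface helper (not a real VerifiedX API), looks like:

```python
# Illustrative only: the "config" is the tool metadata you already have.
# extract_surface is a hypothetical helper, not part of the VerifiedX SDK.
def extract_surface(tools):
    return [
        {
            "name": t["name"],
            "description": t["description"],
            "schema": t["args_schema"],
        }
        for t in tools
    ]

TOOLS = [
    {
        "name": "update_workflow",
        "description": "Update the status of a workflow record.",
        "args_schema": {"workflow_id": "str", "status": "str"},
    }
]

surface = extract_surface(TOOLS)
```

The point is that nothing here is new: if your tool names, descriptions, and schemas are accurate, the preflight surface is already accurate.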

Typical graph shape

A normal Claude-backed LangGraph graph still looks like a normal LangGraph graph.
from langchain_core.messages import SystemMessage
from langgraph.graph import END, START, MessagesState, StateGraph

def assistant(state: MessagesState):
    response = model.invoke([SystemMessage(content=SYSTEM_PROMPT), *state["messages"]])
    return {"messages": [response]}

def route_after_assistant(state: MessagesState):
    last = state["messages"][-1]
    return "run_tools" if getattr(last, "tool_calls", None) else END

builder = StateGraph(MessagesState)
builder.add_node("assistant", assistant)
builder.add_node("run_tools", run_tools)
builder.add_edge(START, "assistant")
builder.add_conditional_edges("assistant", route_after_assistant, {"run_tools": "run_tools", END: END})
builder.add_edge("run_tools", "assistant")

graph = builder.compile()
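The run_tools node above can be LangGraph's prebuilt ToolNode or a manual dispatcher. A framework-free sketch of the manual version, with plain dicts standing in for LangChain message objects and an illustrative TOOL_MAP, looks like:

```python
# Illustrative manual tool node: reads tool calls from the last assistant
# message and returns one tool-result message per call. Plain dicts stand
# in for LangChain message objects; names are examples only.
TOOL_MAP = {
    "lookup_workflow": lambda args: f"{args['workflow_id']}: awaiting_human",
}

def run_tools(state):
    last = state["messages"][-1]
    results = []
    for call in last.get("tool_calls", []):
        output = TOOL_MAP[call["name"]](call["args"])
        results.append(
            {"role": "tool", "tool_call_id": call["id"], "content": output}
        )
    return {"messages": results}

state = {
    "messages": [
        {
            "role": "assistant",
            "tool_calls": [
                {
                    "id": "toolu_01",
                    "name": "lookup_workflow",
                    "args": {"workflow_id": "WF-2203"},
                }
            ],
        }
    ]
}
out = run_tools(state)
```

Either shape works with the adapter; VerifiedX wraps the tool execution surface rather than the node that invokes it.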

If you manually dispatch tools in Python

If your Python graph runs tools manually instead of using ToolNode, catch BoundaryDeniedError and replay exc.tool_result() back into the graph as a normal tool result.
from verifiedx import BoundaryDeniedError

try:
    result = TOOL_MAP[tool_name].invoke(tool_args)
except BoundaryDeniedError as exc:
    result = exc.tool_result()
That keeps the Claude tool loop moving safely instead of crashing the graph.
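The mechanics of the deny-and-replay pattern can be shown without any framework. In this sketch, DeniedError and its tool_result() method mirror the shape of BoundaryDeniedError but are illustrative stand-ins, not the real VerifiedX API:

```python
# Framework-free sketch of deny-and-replay. DeniedError mimics the shape
# of BoundaryDeniedError; the names are illustrative, not the real API.
class DeniedError(Exception):
    def __init__(self, reason):
        super().__init__(reason)
        self.reason = reason

    def tool_result(self):
        # A structured, non-crashing result the model can read and replan from.
        return {"status": "denied", "reason": self.reason}

def send_external_email(args):
    raise DeniedError("external email blocked; use internal Slack update")

try:
    result = send_external_email({"to": "customer@example.com"})
except DeniedError as exc:
    result = exc.tool_result()
```

Because the denial comes back as an ordinary tool result, the model sees why the action was blocked and can choose a safer path in its next turn instead of the graph terminating with an unhandled exception.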

Composed systems

If this Claude-backed LangGraph run is part of a larger multi-agent or agent+human workflow, pass upstream context into VerifiedX so the current graph run has better system and situational awareness before it takes a high-impact action. This matters when a supervisor agent, parent workflow, or human reviewer already holds context the current run should see. VerifiedX does not require a fixed schema for this: pass the upstream context you already have in any JSON-serializable shape.

upstream = {
    "source": "workflow_supervisor",
    "workflow_id": "WF-2203",
    "approval_status": "approved_with_follow_up",
    "human_review": {
        "reviewer": "ops_lead",
        "result": "approved",
    },
    "prior_agent_output": {
        "summary": "Billing verification is complete.",
    },
}

with vx.with_upstream_context(upstream):
    result = graph.invoke(
        {
            "messages": [
                {
                    "role": "user",
                    "content": "Inspect workflow WF-2203 and then update it to awaiting_human.",
                }
            ]
        }
    )
Upstream context is supporting workflow context from outside the current run. It is not proof that this run already executed any local action.
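The scoping behavior is worth noting: upstream context applies only inside the with-block. This sketch shows the behavior with a stand-in class and contextlib; with_upstream_context here is illustrative, not the real VerifiedX implementation:

```python
from contextlib import contextmanager

# Illustrative scoping sketch: upstream context is set on entry and
# restored on exit. This class is a stand-in, not the VerifiedX SDK.
class UpstreamSketch:
    def __init__(self):
        self.upstream = None

    @contextmanager
    def with_upstream_context(self, ctx):
        prev, self.upstream = self.upstream, ctx
        try:
            yield
        finally:
            self.upstream = prev

vx_sketch = UpstreamSketch()
with vx_sketch.with_upstream_context({"workflow_id": "WF-2203"}):
    inside = vx_sketch.upstream   # context visible to decisions here
after = vx_sketch.upstream        # restored once the block exits
```

Scoping this way keeps one graph run's upstream context from leaking into unrelated runs in the same process.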

What the adapter already captures

On this path, VerifiedX captures the native LangGraph surfaces and keeps provider attribution truthful for Claude-backed runs. That includes:
  • StateGraph.compile
  • Graph execution through invoke, ainvoke, stream, and astream where available
  • Selected tool history from AIMessage.tool_calls
  • Tool execution through LangChain tool surfaces and ToolNode
  • Direct state mutations through Command.update
  • Direct compiled state mutations through graph.update_state or graph.updateState
  • Store reads through store.get and store.search
  • Store writes through store.put and store.delete
When Anthropic-style tool call IDs are present, VerifiedX records provider: "anthropic" while keeping framework: "langgraph".
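As a rough sketch of that attribution rule, assuming Anthropic-style tool call IDs carry the "toolu_" prefix (an assumption about ID shape, and attribute is a hypothetical helper, not the adapter's real code):

```python
# Illustrative attribution: if any tool call ID looks Anthropic-style
# (assumed "toolu_" prefix), record provider "anthropic" while keeping
# framework "langgraph". Sketch only, not the adapter's implementation.
def attribute(tool_call_ids):
    provider = (
        "anthropic"
        if any(i.startswith("toolu_") for i in tool_call_ids)
        else "unknown"
    )
    return {"framework": "langgraph", "provider": provider}

record = attribute(["toolu_01AbC"])
```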

What gets preflighted

VerifiedX preflights the high-impact LangGraph boundaries directly. That includes:
  • Wrapped tool executions
  • store.put
  • store.delete
  • Command.update
  • Compiled graph.update_state or graph.updateState
Tool selections seen first in AIMessage.tool_calls are reused at execution time, so VerifiedX does not send duplicate preflight requests for the same boundary.
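The reuse behavior is a simple cache keyed by boundary: a decision made when Claude selects the tool is replayed when the tool actually executes. A framework-free sketch of that logic (names are illustrative, not the real API):

```python
# Illustrative preflight dedup: one decision per boundary key, cached at
# selection time and reused at execution time. Sketch only, not the SDK.
preflight_calls = []
decisions = {}

def preflight(boundary_key):
    if boundary_key in decisions:
        return decisions[boundary_key]
    preflight_calls.append(boundary_key)
    decisions[boundary_key] = "allow"
    return decisions[boundary_key]

# Seen first when Claude selects the tool in AIMessage.tool_calls...
preflight(("update_workflow", "toolu_01"))
# ...reused when the wrapped tool actually executes. No second request.
preflight(("update_workflow", "toolu_01"))
```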

What to expect at runtime

Protected LangGraph boundaries can return:
  • allow
  • allow_with_warning
  • replan_required
  • goal_fail_terminal
Every outcome includes a structured decision receipt. If a tool, store write, or state update is replanned, the side effect does not execute. VerifiedX returns the blocked result or loopback guidance so the graph can keep moving toward the same goal safely.
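A dispatcher over the four outcomes might look like the sketch below. The outcome names come from the list above; the guidance strings and handle function are illustrative, not the real receipt format:

```python
# Illustrative outcome handling: only "allow" and "allow_with_warning"
# let the side effect run; the other outcomes return guidance instead.
# Outcome names mirror the docs; everything else here is a sketch.
def handle(outcome, run_side_effect):
    if outcome in ("allow", "allow_with_warning"):
        return run_side_effect()
    if outcome == "replan_required":
        return {"executed": False, "guidance": "loop back and replan"}
    if outcome == "goal_fail_terminal":
        return {"executed": False, "guidance": "stop; goal cannot proceed"}
    raise ValueError(f"unknown outcome: {outcome}")

blocked = handle("replan_required", lambda: {"executed": True})
allowed = handle("allow", lambda: {"executed": True})
```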

Production-style validation coverage

The Claude-backed LangGraph validation paths in this repo cover real graph flows including:
  • Clean memory writes
  • Clean record mutations
  • Clean internal workflow-state changes
  • Multi-step internal runs across retrieval, memory, mutation, and internal notification
  • External email attempts that replan into safer internal Slack updates
  • Repeated adversarial external-email attempts that should not keep pushing the same unsafe action

Pricing note

One protected action check equals one real boundary preflight. Taint, event ingest, execution reports, and decision reads are all included at that price. The Free Sandbox includes every language, provider, framework, and adapter. VerifiedX does not replace your orchestrator or workflow. It returns receipts your system can keep local, route downstream, or pass upstream.
For the provider-agnostic graph adapter, see LangGraph. For the direct Anthropic adapter outside LangGraph, see the Anthropic SDK pages.