Best for: teams already using the Vercel AI SDK that want one native VerifiedX adapter surface regardless of model provider.

Install

npm install @verifiedx-core/sdk
This page assumes your app already uses the ai package. The VerifiedX wrapper surface is the same across supported Vercel model providers.

Net-new VerifiedX code

The Vercel AI SDK integration surface is the same regardless of whether your model is OpenAI, Anthropic, or another provider.
import { initVerifiedX } from "@verifiedx-core/sdk";
import { withVerifiedXGenerateText } from "@verifiedx-core/sdk/vercel-ai";

const verifiedx = await initVerifiedX();
const generate = await withVerifiedXGenerateText(generateText, { verifiedx });

const result = await generate({
  model,
  system,
  messages,
  tools,
});
If you already use streamText(...), wrap that instead:
import { withVerifiedXStreamText } from "@verifiedx-core/sdk/vercel-ai";

const stream = await withVerifiedXStreamText(streamText, { verifiedx });
const result = await stream({ model, system, messages, tools });
If you already use ToolLoopAgent, wrap the agent once:
import { withVerifiedXToolLoopAgent } from "@verifiedx-core/sdk/vercel-ai";

await withVerifiedXToolLoopAgent(agent, { verifiedx });
That is the important part. Your existing Vercel AI SDK tool definitions, onStepFinish, experimental_onToolCallStart, experimental_onToolCallFinish, messages, and provider model wiring stay the same.
Your native tool surface is the config. VerifiedX uses your existing tool names, descriptions, schemas, and approval hints as the source of truth for what to preflight.

Provider examples

The VerifiedX wrapper stays the same. Only the provider model import and model call change.
import { openai } from "@ai-sdk/openai";

const result = await generate({
  model: openai("gpt-5.4-mini"),
  system,
  messages,
  tools,
});
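For Anthropic, the wrapper is untouched and only the provider import and model call change. A minimal sketch, reusing the `generate` wrapper from above and the same model id the full example below defaults to:

```typescript
import { anthropic } from "@ai-sdk/anthropic";

// Same wrapped generate(...) as the OpenAI example; only the model differs.
const result = await generate({
  model: anthropic("claude-sonnet-4-5"),
  system,
  messages,
  tools,
});
```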

Full example

import { anthropic } from "@ai-sdk/anthropic";
import { openai } from "@ai-sdk/openai";
import { generateText, tool } from "ai";
import { z } from "zod";

import { initVerifiedX } from "@verifiedx-core/sdk";
import { withVerifiedXGenerateText } from "@verifiedx-core/sdk/vercel-ai";

const PROVIDER = process.env.VERCEL_AI_PROVIDER || "openai";
const MODEL = process.env.VERCEL_AI_MODEL || (PROVIDER === "anthropic" ? "claude-sonnet-4-5" : "gpt-5.4-mini");

function providerModel() {
  return PROVIDER === "anthropic" ? anthropic(MODEL) : openai(MODEL);
}

const tools = {
  lookup_workflow: tool({
    description: "Look up an internal workflow before changing state.",
    inputSchema: z.object({
      workflow_id: z.string(),
    }),
    execute: async ({ workflow_id }) => ({
      ok: true,
      workflow: {
        workflow_id,
        status: "pending_review",
      },
    }),
  }),

  set_workflow_status: tool({
    description: "Update internal workflow status.",
    inputSchema: z.object({
      workflow_id: z.string(),
      status: z.string(),
      reason: z.string(),
    }),
    execute: async ({ workflow_id, status, reason }) => ({
      ok: true,
      workflow_updated: { workflow_id, status, reason },
    }),
  }),
};

const verifiedx = await initVerifiedX();
const generate = await withVerifiedXGenerateText(generateText, { verifiedx });

const result = await generate({
  model: providerModel(),
  system: "Use tools instead of prose for operational work.",
  messages: [
    {
      role: "user",
      content: "Inspect workflow WF-1002 and then update it to awaiting_human because billing verification is missing.",
    },
  ],
  tools,
});

console.log(result.text);
Do not use raw bindHarness(...) for this path. Keep the native Vercel AI SDK surface and wrap generateText(...), streamText(...), or ToolLoopAgent directly.

Composed systems

If this Vercel AI SDK run is part of a larger multi-agent or agent+human workflow, pass upstream context into VerifiedX so the current run has that system and situational awareness before it takes a high-impact action. This matters when a supervisor agent, parent workflow, or human reviewer already holds context the current run should weigh first. VerifiedX does not require a fixed schema for this: pass the upstream context you already have in any JSON-serializable shape.
const upstream = {
  source: "workflow_supervisor",
  workflow_id: "WF-2203",
  approval_status: "approved_with_follow_up",
  human_review: {
    reviewer: "ops_lead",
    result: "approved",
  },
  prior_agent_output: {
    summary: "Billing verification is complete.",
  },
};

const result = await verifiedx.withUpstreamContext(upstream, async () => {
  return await generate({
    model: providerModel(),
    system: "Use tools instead of prose for operational work.",
    messages: [
      {
        role: "user",
        content: "Inspect workflow WF-2203 and then update it to awaiting_human.",
      },
    ],
    tools,
  });
});
The same pattern works for streamText(...) and ToolLoopAgent too.
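As a sketch, the stream variant looks identical; this assumes the `stream` wrapper from the streamText section and the `upstream` object defined above:

```typescript
// Same withUpstreamContext(...) pattern, wrapping the streaming call instead.
const result = await verifiedx.withUpstreamContext(upstream, async () => {
  return await stream({
    model: providerModel(),
    system: "Use tools instead of prose for operational work.",
    messages,
    tools,
  });
});
```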
Upstream context is supporting workflow context from outside the current run. It is not proof that this run already executed any local action.

Supported surfaces

The VerifiedX Vercel adapter currently wraps:
  • generateText(...)
  • streamText(...)
  • ToolLoopAgent.generate(...)
  • ToolLoopAgent.stream(...)
It also captures native Vercel lifecycle hooks and payloads around those calls, including:
  • experimental_onToolCallStart
  • experimental_onToolCallFinish
  • onStepFinish
  • toolCalls
  • toolResults
  • UI tool parts from messages[].parts
  • Approval-requested and approval-response tool parts

Provider handling

The adapter surface is provider-agnostic. You use the same VerifiedX wrapper no matter which Vercel model you pass in. VerifiedX currently infers provider attribution from model metadata such as:
  • OpenAI provider values like openai.responses
  • Anthropic or Claude provider values like anthropic or claude
  • Bedrock-flavored provider values containing bedrock
If your model wrapper does not expose provider metadata, VerifiedX still protects the tool loop. The provider label in decision context may just be less specific.
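As a hedged sketch, a Bedrock-flavored model flows through the same wrapper; this assumes the @ai-sdk/amazon-bedrock provider package, and the model id shown is illustrative:

```typescript
import { bedrock } from "@ai-sdk/amazon-bedrock";

// Illustrative model id; substitute whichever model your Bedrock account exposes.
// VerifiedX picks up the bedrock-flavored provider metadata automatically.
const result = await generate({
  model: bedrock("anthropic.claude-sonnet-4-5-v1:0"),
  system,
  messages,
  tools,
});
```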

What the adapter already captures

Once attached, VerifiedX keeps the Vercel AI SDK surface intact and captures the native lifecycle around your tool loop. That includes:
  • Prompt and message context for each top-level invocation
  • Fresh VerifiedX run context per top-level invocation
  • Native tool-start history from experimental_onToolCallStart
  • Native tool-finish and tool-result capture from experimental_onToolCallFinish
  • Native step summaries and tool history from onStepFinish
  • Vercel UI tool parts in messages[].parts
  • Approval-requested and approval-response tool parts
  • Approval-capable tools declared with needsApproval
If the same native tool selection appears through both experimental_onToolCallStart and onStepFinish, VerifiedX deduplicates that history instead of double-counting it.

What gets preflighted

VerifiedX wraps your tool execute(...) functions directly and infers the protected boundary from the tool name, description, schema shape, and approval hints. That includes:
  • Memory writes such as tools with namespace, key, and value
  • Record mutations such as customer, CRM, ticket, account, or workflow updates
  • Internal updates such as routing, status, config, flags, metadata, and workflow-state changes
  • Message sends such as Slack, email, replies, notifications, and other communication tools
Lookup-style tools stay in run history as support inputs instead of being treated as mutations. In the Vercel tests, tools like lookup_workflow are recorded as internal_retrieval. If a tool is otherwise ambiguous but marked needsApproval, VerifiedX still treats it as a guarded internal update and records its approval capability in tool metadata.
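For example, a tool whose name and schema say little on their own can still be guarded through its approval hint. A minimal sketch; the tool name, schema, and body here are illustrative, not part of the VerifiedX API:

```typescript
import { tool } from "ai";
import { z } from "zod";

// Illustrative ambiguous tool: the generic name gives little to classify on,
// so the native needsApproval hint is what marks it as a guarded internal update.
const apply_change = tool({
  description: "Apply a pending change.",
  inputSchema: z.object({
    change_id: z.string(),
  }),
  needsApproval: true,
  execute: async ({ change_id }) => ({ ok: true, applied: change_id }),
});
```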

What to expect at runtime

Protected boundaries can return:
  • allow
  • allow_with_warning
  • replan_required
  • goal_fail_terminal
Every outcome includes a structured decision receipt. If a tool is replanned, the side effect does not execute. The wrapped tool returns the normal VerifiedX blocked result shape, including ok: false, blocked: true, boundary_outcome, safe_next_steps, and decision_receipt, so your agent can keep moving toward the same goal safely. Across the tested Vercel AI SDK paths, VerifiedX is exercised to:
  • infer OpenAI and Anthropic providers correctly from model configuration
  • wrap generateText(...), streamText(...), and ToolLoopAgent
  • carry native tool-start history into the next boundary preflight
  • preserve clean per-invocation run context instead of leaking tool history across separate top-level calls
  • record UI approval-request tool parts truthfully
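A sketch of consuming that blocked result shape in your own loop. The field names follow the description above; the routing logic itself is illustrative, not a VerifiedX helper:

```typescript
// Field names follow the documented blocked result shape.
type BoundaryOutcome =
  | "allow"
  | "allow_with_warning"
  | "replan_required"
  | "goal_fail_terminal";

interface BlockedToolResult {
  ok: false;
  blocked: true;
  boundary_outcome: BoundaryOutcome;
  safe_next_steps?: string[];
  decision_receipt?: Record<string, unknown>;
}

function isBlocked(result: unknown): result is BlockedToolResult {
  return (
    typeof result === "object" &&
    result !== null &&
    (result as { ok?: unknown }).ok === false &&
    (result as { blocked?: unknown }).blocked === true
  );
}

// Illustrative routing: keep moving toward the goal on a replan,
// stop cleanly on a terminal failure, pass everything else through.
function nextAction(result: unknown): string {
  if (!isBlocked(result)) return "continue";
  switch (result.boundary_outcome) {
    case "replan_required":
      return result.safe_next_steps?.[0] ?? "replan";
    case "goal_fail_terminal":
      return "abort";
    default:
      return "continue";
  }
}
```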

Pricing note

One protected action check equals one real boundary preflight. Taint, event ingest, execution reports, and decision reads are all included at that price. The Free Sandbox includes every language, provider, framework, and adapter. VerifiedX does not replace your orchestrator or human workflow. It returns receipts your system can keep local, route downstream, or pass upstream.
For the full raw runtime reference, see the TypeScript SDK.