PLATFORM

You have AI agents in production. Do you know what they're doing right now?

TrustScope gives you trace-level visibility into every AI agent action — PII exposure, prompt injection, cost spikes, behavioral drift, and tool call anomalies — across every ingestion path, in real time.

THE PROBLEM

Most teams discover AI agent problems from their customers.

An engineer's AI tool makes 200 database calls per session. Nobody knows.

A support bot leaks a customer's SSN. Discovery happens in audit review, not production monitoring.

A coding agent retries failed API calls 47,000 times overnight. The team finds out from Monday's billing alert.

These aren't hypotheticals. They're patterns from real production deployments. TrustScope catches them in real time.

DETECTION DEPTH

27 engines. Three tiers of intelligence.

Rule-based, ML-assisted, and AI-hybrid engines layered by tier so you pay only for the depth you need.

Monitor

Free

15 engines included

Rule-based runtime controls. Full dashboard. No credit card.

Cost & Loops (7)

  • Loop Killer
  • Velocity Monitor
  • Cost Velocity
  • Budget Caps
  • Token Growth
  • Context Expansion
  • Oscillation Detector

Content & Secrets (3)

  • Secrets Scanner
  • Blocked Phrases
  • Action Label Mismatch

Security (3)

  • Prompt Injection (pattern)
  • Jailbreak Detector (pattern)
  • Command Firewall

Behavioral (2)

  • Error Rate
  • Session Duration

Protect

$49/mo

+5 engines

ML-assisted controls for contextual risk routing and stronger blocking.

ML Engines (5)

  • PII Scanner (Presidio NER)
  • Prompt Injection (ONNX)
  • Jailbreak Detector (ONNX)
  • Toxicity Filter
  • Data Exfiltration

Enforce

$199/mo

+7 engines

AI-hybrid controls for advanced runtime reasoning and behavioral intelligence.

AI-Hybrid Engines (7)

  • Semantic Firewall
  • Hallucination Detector
  • Reasoning Drift
  • Reasoning Quality Monitor
  • A2A Depth
  • Tool Parameter Validator
  • Bias Monitor

AI-hybrid engines run on your LLM provider. TrustScope never pays for or stores your LLM calls. Bring your own key.

BEHAVIORAL INTELLIGENCE

Agent DNA: know when your agent changes.

Every agent develops a behavioral fingerprint over time — its tool call patterns, cost curve, error recovery style, and reasoning quality. TrustScope captures this fingerprint as an 8-strand Agent DNA profile and alerts you the moment behavior drifts from baseline.

8-strand behavioral analysis

Tool Call Distribution

Frequency and ordering of tool invocations across sessions

Token Velocity Profile

Token consumption rate patterns per agent over time

Error Response Pattern

How the agent handles and recovers from failure states

Session Duration Shape

Characteristic session length and activity distribution

Delegation Depth

Agent-to-agent handoff frequency and chain depth

Cost Curve Signature

Spend pattern shape across session lifecycle

Content Risk Profile

Baseline content risk exposure levels per agent

Reasoning Quality Trend

Reasoning chain quality trajectory over time

Baseline comparison: TrustScope builds a rolling baseline from your first 50 traces and continuously compares new sessions against it.

Drift alerts: When any strand deviates beyond your configured threshold, TrustScope fires a drift alert with full strand-level detail.

Available at the Enforce tier. Agent DNA runs across traces, not on individual requests.
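
The baseline-and-drift idea above reduces to simple statistics. A minimal sketch, assuming a per-strand mean/stdev baseline and a z-score test — the window size matches the page, but the threshold and test are illustrative assumptions, not TrustScope's actual algorithm:

```python
import statistics

def build_baseline(traces: list[float]) -> tuple[float, float]:
    """Baseline one DNA strand from the first 50 traces: mean and stdev."""
    window = traces[:50]
    return statistics.mean(window), statistics.stdev(window)

def drift_alert(value: float, mean: float, stdev: float, threshold: float = 3.0) -> bool:
    """Fire a drift alert when a new session's strand value deviates more
    than `threshold` standard deviations from baseline (illustrative test)."""
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Example: a token-velocity strand hovering near 102 tokens/s, then a spike.
history = [100.0 + (i % 5) for i in range(50)]
mean, stdev = build_baseline(history)
print(drift_alert(102.0, mean, stdev))  # within baseline -> False
print(drift_alert(250.0, mean, stdev))  # large deviation -> True
```

In practice each of the 8 strands gets its own baseline and threshold, and the alert carries strand-level detail.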

UNIVERSAL GOVERNANCE

One engine. Nine ingestion paths.

The same 27 detection engines run regardless of how your data arrives. Gateway, SDK, MCP, OTel, CLI — TrustScope governance is input-agnostic.

Full inline governance

Real-time detection + blocking + evidence generation

Gateway Proxy

Swap your base URL. Zero code changes.

Python SDK

Decorator and callback hooks for deep control.

Node.js SDK

Middleware integration for JS/TS agent stacks.

Endpoint Bridge

REST-based governance for any language or runtime.

Detection + alerting

Observe and alert without inline blocking

MCP Server

IDE-native governance for Claude, Cursor, Windsurf.

Framework Callbacks

LangChain, CrewAI, AutoGen, OpenAI Agents, and more.

Visibility + evidence

Offline analysis, batch scanning, and audit

CLI Scan

Local trace analysis before cloud deployment.

Direct API

JSON, JSONL, CSV, TSV, HAR trace import.

OTel Fanout

Fan out existing OpenTelemetry spans to TrustScope.

4 of 9 paths support real-time blocking. All 9 produce governance evidence.
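
As an illustration of the SDK-style inline paths, a governance decorator can wrap an agent step and record a trace event. This is a hypothetical shape only — the `governed` name, `TRACE_LOG` sink, and event fields are assumptions, not the actual TrustScope SDK API:

```python
import functools
import time

TRACE_LOG: list[dict] = []  # stand-in for the SDK's trace sink (hypothetical)

def governed(agent: str):
    """Hypothetical decorator: record timing and outcome for each agent step."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.monotonic()
            status = "error"
            try:
                result = fn(*args, **kwargs)
                status = "ok"
                return result
            finally:
                # Event is emitted whether the step succeeded or raised.
                TRACE_LOG.append({
                    "agent": agent,
                    "step": fn.__name__,
                    "status": status,
                    "duration_s": time.monotonic() - start,
                })
        return inner
    return wrap

@governed(agent="support-bot")
def answer(question: str) -> str:
    return f"echo: {question}"

answer("hello")
print(TRACE_LOG[0]["agent"])  # -> support-bot
```

Callback-style hooks work the same way, just registered with the framework instead of applied as a decorator.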

COST GOVERNANCE

Per-agent spend visibility. Budget enforcement.

Every governed trace includes cost telemetry — token usage, model pricing, and per-agent spend attribution. Set budget caps that alert or block at the thresholds you define. Detect infinite retry loops and runaway delegation cascades before they hit your invoice.

Example budget policy

policies:
  - name: support-bot-budget
    agent: support-bot
    model: gpt-4o
    daily_limit_usd: 150
    alert_at: 0.8         # alert at 80% of daily limit
    block_at: 1.0         # hard stop at 100%
    loop_kill: true       # terminate infinite retry loops
    velocity_cap: 60      # max requests per minute

Most teams start with alert at 80% and block at 100%. Loop kill and velocity caps catch runaway patterns before budget thresholds are reached.
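
The alert/block decision in the policy above is plain threshold arithmetic. A minimal sketch using the example policy's fields — the function name and return values are illustrative, not TrustScope internals:

```python
def evaluate_budget(spend_usd: float, daily_limit_usd: float,
                    alert_at: float = 0.8, block_at: float = 1.0) -> str:
    """Map current daily spend to an action under the budget policy."""
    ratio = spend_usd / daily_limit_usd
    if ratio >= block_at:
        return "block"  # hard stop at 100% of the daily limit
    if ratio >= alert_at:
        return "alert"  # warn at 80% of the daily limit
    return "allow"

# With the support-bot policy (daily_limit_usd: 150):
print(evaluate_budget(90.0, 150.0))   # 60% of limit -> allow
print(evaluate_budget(125.0, 150.0))  # ~83% of limit -> alert
print(evaluate_budget(150.0, 150.0))  # 100% of limit -> block
```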

Per-agent attribution

Track spend by agent, model, and team member. See exactly which agent is driving cost and why.

Loop detection

Infinite retry loops, oscillation patterns, and recursive delegation chains detected and terminated automatically.
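
A rule-based retry-loop check reduces to counting consecutive identical calls. A minimal sketch — the repeat threshold and the (tool, args) call representation are illustrative assumptions:

```python
def is_retry_loop(calls: list[tuple[str, str]], max_repeats: int = 5) -> bool:
    """Flag a run of identical consecutive (tool, args) calls
    longer than `max_repeats` (illustrative default)."""
    streak = 1
    for prev, cur in zip(calls, calls[1:]):
        streak = streak + 1 if cur == prev else 1
        if streak > max_repeats:
            return True
    return False

# The same failed call retried 8 times in a row -> loop detected.
print(is_retry_loop([("http_get", "/orders/42")] * 8))  # -> True
# Varied tool usage -> no loop.
print(is_retry_loop([("search", "q1"), ("fetch", "u1"), ("search", "q2")]))  # -> False
```

Oscillation detection is similar but looks for repeating A-B-A-B subsequences instead of identical runs.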

Velocity caps

Set per-minute request limits to catch burst patterns before they translate into uncontrolled spend.
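
Per-minute limits of this kind are typically a sliding window. A minimal sketch, assuming a 60-second window over request timestamps — enforcement in the product happens server-side, this just shows the mechanic:

```python
from collections import deque

class VelocityCap:
    """Sliding 60-second window: reject requests past a per-minute limit."""
    def __init__(self, max_per_minute: int):
        self.max = max_per_minute
        self.times: deque[float] = deque()

    def allow(self, now: float) -> bool:
        # Drop timestamps older than the 60-second window.
        while self.times and now - self.times[0] >= 60.0:
            self.times.popleft()
        if len(self.times) >= self.max:
            return False  # burst exceeds the cap
        self.times.append(now)
        return True

# 100 requests in 50 seconds against a 60/min cap: only 60 get through.
cap = VelocityCap(max_per_minute=60)
allowed = sum(cap.allow(t * 0.5) for t in range(100))
print(allowed)  # -> 60
```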

See what your agents are doing. Start free.

15 detection engines. Full dashboard. No credit card.