Production · pip install agentveil

Operational Trust
for autonomous agents

Decide which agents can act — then prove what happened with cryptographic evidence.

Trust-gated action for agent systems, with evidence clients and reviewers can verify independently.

Run the Loop → · Get Started · GitHub
Featured · Listed on Glama MCP Directory · 12 tools
>>> agent.can_trust("did:key:z6Mk...", min_tier="trusted")

{
  "allowed": true,
  "score": 0.82,
  "tier": "trusted",
  "risk_level": "low"
}
Live · 24/7 Uptime
Daily · IPFS Anchors
W3C VC · Offline-Verifiable
Tamper-Evident · Audit Chain

>Start with one workflow. Expand when trust becomes operational.

Use AVP as a developer integration first, then roll it into live decisioning and evidence for production workflows.

For builders

Build trust checks into agent workflows

For developers and product teams that want to add trust decisions and signed evidence without replacing their existing stack.

  • Python SDK, REST API, and MCP server
  • CrewAI, LangGraph, AutoGen, OpenAI, Claude MCP, Paperclip
  • Works alongside your existing stack
  • Start locally, enforce in production later
Get started →
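A first integration can stay purely advisory: check trust, log the decision, and never block. The sketch below models that pattern; the response fields mirror the sample `can_trust()` output shown above, but `check_before_run` itself is an illustrative helper, not an SDK function.

```python
# Sketch: an advisory trust check before a workflow step.
# Response fields mirror the can_trust() demo above; the helper is illustrative.

def check_before_run(trust_check: dict, min_score: float = 0.7) -> bool:
    """Advisory mode: report the decision without blocking execution."""
    ok = trust_check.get("allowed", False) and trust_check.get("score", 0.0) >= min_score
    print(f"tier={trust_check.get('tier')} score={trust_check.get('score')} "
          f"-> {'run' if ok else 'flag'}")
    return ok

# Example payload, matching the demo response above
decision = {"allowed": True, "score": 0.82, "tier": "trusted", "risk_level": "low"}
check_before_run(decision)
```

Because the helper only flags, it can sit in front of existing workflow steps on day one without changing behavior.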
For operators

Run trust-gated agent actions in production

For teams that need to gate risky actions, detect trust degradation, and produce evidence others can verify independently.

  • Start with one critical workflow
  • Alerts, monitoring, and dispute review
  • Evidence for clients and reviewers
  • Adopt gradually — start with trust checks, move to gated actions
See packages →
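Moving from advisory checks to enforcement means one gated action can actually be refused. A minimal sketch, assuming a `can_trust()`-shaped decision as shown in the demo above; `TrustDenied`, `trust_gate`, and the tier ordering are all illustrative, not SDK names.

```python
# Sketch: an enforced trust gate on one critical action.
# TrustDenied, trust_gate, and TIERS are illustrative; the decision
# fields mirror the can_trust() response shown on this page.

class TrustDenied(Exception):
    """Raised when a gated action is blocked by the trust decision."""

TIERS = ["untrusted", "review", "trusted"]  # illustrative ordering

def trust_gate(get_decision, min_tier: str = "trusted"):
    """Decorator: run the wrapped action only if the decision clears the bar."""
    def wrap(fn):
        def inner(*args, **kwargs):
            d = get_decision()
            if not d["allowed"] or TIERS.index(d["tier"]) < TIERS.index(min_tier):
                raise TrustDenied(f"blocked at tier={d['tier']}")
            return fn(*args, **kwargs)
        return inner
    return wrap

@trust_gate(lambda: {"allowed": True, "tier": "trusted", "score": 0.82})
def merge_pr(pr_url: str) -> str:
    return f"merged {pr_url}"
```

Starting with a single decorated function keeps the blast radius of enforcement to one workflow.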

>Who AVP is for

Built for teams shipping agents into real workflows — especially when actions need to be gated, monitored, and proven afterward.

AI agencies and studios

Give clients independent proof of what your agents did — without asking them to trust your dashboards or internal logs.

Multi-agent teams

Check trust before delegation and monitor trust drift over time. Add coverage quickly with @avp_tracked when you’re ready.
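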

SaaS products with agent features

Add trust decisions before sensitive actions and signed evidence after them — on top of the stack you already use.

EU AI Act

Client-facing or high-stakes workflows

Produce client-ready evidence for agent actions that can be verified independently, including in high-stakes workflows. See Art. 9 / 12 / 13 / 14 / 50 mapping →

>How It Works

AVP separates the trust loop into three operational stages: decision before action, monitoring during execution, and evidence after the fact.

Before
Admission Gate
Verify agent identity
Evaluate trust before action
Allow, block, or flag execution
During
Continuous Monitoring
Track trust as work happens
Detect score drops, anomalies, and trust drift
Trigger alerts before issues spread
After
Forensic Record
Produce tamper-evident evidence
Export verifiable credentials
Support disputed attestation review
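The three stages above can be sketched as one control flow. Everything here is illustrative pseudocode around the page's concepts; the real SDK drives these stages via `can_trust()`, monitoring alerts, and signed attestations.

```python
# Pseudocode sketch of the loop above: decide before, monitor during,
# record after. All names are illustrative, not SDK calls.

def run_with_trust(decision: dict, action, audit: list) -> list:
    events = []
    # Before: admission gate on the trust decision
    if not decision.get("allowed"):
        events.append("blocked")
        return events
    events.append("admitted")
    # During: watch for trust degradation while work happens
    result = action()
    if decision.get("score", 1.0) < 0.5:
        events.append("alert:trust-drift")
    # After: append a forensic record (stands in for a signed credential)
    audit.append({"result": result, "decision": decision})
    events.append("recorded")
    return events

audit = []
run_with_trust({"allowed": True, "score": 0.82}, lambda: "done", audit)
```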
Scope · what AVP is not

AVP is not a policy engine, MCP gateway, or sandbox runtime. It is the trust-decision and evidence layer between identity and action. AVP does not replace your identity or governance stack — it adds trust decisions before action and cryptographic evidence after action.

>Independent proof for agent actions

AVP turns agent actions into cryptographic evidence that clients, reviewers, and partners can verify independently — without relying on your internal infrastructure.

Step 1
Agent acts
Step 2
Evidence is signed
Step 3
Record is chain-anchored
Step 4
Credential is shared
Step 5
Third party verifies offline
curl agentveil.dev/v1/reputation/{did}/credential?format=w3c

eddsa-jcs-2022 cryptosuite · RFC 8785 JCS · verifies with didkit, vc-js, Digital Bazaar

>Why AVP, not just logging?

AVP does not replace observability. It adds what logs cannot: a trust decision before action, tamper-evident evidence after action, and proof others can verify independently.

Logging alone
  • Editable by the operator
  • Describes what happened only after the fact
  • No portable trust decision
  • No offline-verifiable credential
  • No pre-action delegation gate
  • No dispute layer
AVP with logging
  • Ed25519-signed, tamper-evident chain
  • IPFS-anchored — anyone can verify
  • can_trust() decision before action
  • W3C VC credentials, offline-verifiable
  • Agent and team reputation over time
  • Dispute review support
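Why an operator-editable log fails where a chain does not: each chained record commits to the hash of the previous one, so editing any entry invalidates every later link. AVP's actual chain is Ed25519-signed and IPFS-anchored; this toy sketch shows only the hash-linking idea.

```python
import hashlib
import json

# Toy tamper-evident chain: each record commits to the previous hash.
# (AVP's real chain is Ed25519-signed and IPFS-anchored; this is only
# an illustration of the linking principle.)

def append(chain: list, entry: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "entry": entry}, sort_keys=True)
    chain.append({"prev": prev, "entry": entry,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list) -> bool:
    prev = "0" * 64
    for rec in chain:
        body = json.dumps({"prev": prev, "entry": rec["entry"]}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

log = []
append(log, {"action": "review_code", "score": 0.82})
append(log, {"action": "merge_pr", "score": 0.81})
assert verify(log)
log[0]["entry"]["score"] = 0.99   # an operator edits an old record...
assert not verify(log)            # ...and verification fails
```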

>Fits the stack you already have

Integrate AVP without replacing your framework, identity provider, or observability tooling.

Frameworks

CrewAI
LangGraph
AutoGen
OpenAI
Claude
📎 Paperclip
🐍 Any Python

Interfaces

Python SDK

pip install agentveil — one line, zero config.

REST API

Any language. Full documentation & guides.

MCP Server

Claude Desktop, Cursor, Windsurf, VS Code. 12 tools.

Enterprise fit

Works with your existing identity stack

AVP adds trust decisions before action and evidence after action — not a replacement for your current systems.

✔ MERGED

Composes with Microsoft AGT

AVPProvider merged upstream into Microsoft Agent Governance Toolkit (PR #1010) as a TrustProvider implementation.

>Production API

Use AVP through production endpoints for reputation, trust checks, credentials, attestations, and audit verification.

Start with advisory checks. Move to gated actions when you’re ready.

Method  Endpoint                        Description
GET     /reputation/{did}               Score, confidence, risk assessment
GET     /reputation/{did}/trust-check   Advisory trust decision
GET     /reputation/{did}/credential    Signed offline credential (Ed25519)
GET     /reputation/{did}/velocity      Score trend (1d/7d/30d)
POST    /attestations                   Submit peer rating
POST    /attestations/batch             Batch ratings (up to 50)
POST    /agents/register                Register new agent
GET     /cards                          Search agents by capability
GET     /audit/verify                   Chain integrity check

SDK: github.com/agentveil-protocol/avp-sdk · PyPI
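From any language, the endpoints above are plain REST calls. A minimal sketch of URL construction, assuming the base from the curl example earlier on this page (the https scheme is an assumption); the live fetch is shown commented out.

```python
# Sketch: building endpoint URLs from the table above for use from any
# language. Base URL taken from the curl example on this page; the
# https scheme is assumed.
import json
from urllib import request

BASE = "https://agentveil.dev/v1"

def reputation_url(did: str, suffix: str = "") -> str:
    """URL for the /reputation/{did} endpoints listed above."""
    return f"{BASE}/reputation/{did}{suffix}"

# Usage (live network call, so left commented out here):
# with request.urlopen(reputation_url("did:key:z6Mk...", "/trust-check")) as r:
#     decision = json.load(r)
#     if decision["allowed"]:
#         ...
```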

>Start in minutes

Try AVP locally with a mock agent, then connect the same workflow to production when you’re ready.

# Install
pip install agentveil

# Try instantly — no server needed
from agentveil import AVPAgent

agent = AVPAgent.create(mock=True, name="my_agent")
agent.register(display_name="My Agent")
rep = agent.get_reputation(agent.did)
print(rep)  # {'score': 0.75, 'confidence': 0.5, ...}
Connect to production server
# One line to auto-register and auto-attest an agent
# inside an existing workflow
from agentveil import avp_tracked

@avp_tracked("https://agentveil.dev", name="my_agent", to_did="did:key:z6Mk...")
def review_code(pr_url: str) -> str:
    return analyze(pr_url)

>Adopt AVP in stages

Start free in development. Pilot one trust-gated workflow. Expand to multi-team or high-stakes environments when you need full rollout.

FREE

Build

For developers and teams validating trust checks in development

  • Full SDK + 12 MCP tools
  • REST API + 7 framework integrations
  • W3C VC credentials, offline-verifiable
  • can_trust() advisory decisions
  • Community support
pip install agentveil
RECOMMENDED

Pilot

Prove trust and evidence on a single critical action before wider rollout

  • Everything in Build
  • Trust gate on a critical action
  • Alerts + webhook integration
  • Audit trail export + dispute review
  • 30-day scope with rollout guidance
Discuss a pilot →
ENTERPRISE

Deploy

Operationalize trust gating and independently verifiable evidence across production systems

  • Everything in Pilot
  • Multiple workflows, custom thresholds
  • Private deployment options
  • Compliance & security review path
  • Priority support + SLA
Discuss deployment →