Agent Vault: a credential proxy that keeps secrets out of your AI agents

April 22, 2026

repo-review

by Florian Narr


Secrets management for AI agents is a mess. You either bake credentials into system prompts (terrible), pass them as environment variables the agent can read (also terrible), or wire up elaborate sandboxing hoping the LLM doesn't leak what it can see. Agent Vault takes a different angle: the agent never possesses the credentials at all.

What it does

Agent Vault is a local HTTP/HTTPS proxy — written in Go by the Infisical team — that sits between your agent and any upstream API. Your agent is given HTTPS_PROXY pointing at the local proxy. When it makes a request to, say, api.github.com, Agent Vault intercepts it mid-flight, injects the right Authorization header from its encrypted store, and forwards the request. The agent sees the response, never the credential.

Why I starred it

The naive fix for "agents leak secrets" is to not give the agent secrets. That's trivially true and completely impractical until you have a layer that can fulfill API calls on the agent's behalf. Agent Vault is that layer, and the architecture is cleaner than I expected.

What caught me: the proxy is not just a dumb forwarder. The broker config lets you declare per-host auth rules — bearer tokens, basic auth, API keys with arbitrary headers, or full custom templates with {{ CREDENTIAL }} placeholders. Wildcard host matching is single-level only (*.github.com matches api.github.com but not api.v2.github.com), which is a sensible security boundary that someone actually thought about rather than reaching for strings.Contains.

How it works

The proxy runs on port 14322 as a TLS-encrypted listener. That last part matters: the CONNECT handshake that carries the session token travels over TLS, not plaintext. Most proxies don't bother. Agent Vault wraps the listener itself, as seen in internal/mitm/proxy.go:

func (p *Proxy) Serve(l net.Listener) error {
    p.isListening.Store(true)
    defer p.isListening.Store(false)
    return p.httpServer.Serve(tls.NewListener(l, p.tlsConfig))
}

The GetCertificate callback in that TLS config mints a per-host leaf certificate on demand via an internal CA — classic MITM termination. The CA root is distributed to the agent process via environment variables so it trusts the proxy's certs.

The credential injection happens in internal/broker/broker.go. The Auth.Resolve() method takes a getCredential lookup function and returns a map of ready-to-inject headers:

case "bearer":
    val, err := getCredential(a.Token)
    if err != nil {
        return nil, err
    }
    return map[string]string{"Authorization": "Bearer " + val}, nil

For Twilio-style APIs that embed credentials in the URL path rather than headers, there's a Substitutions system. You declare a placeholder like __ACCOUNT_SID__ in your broker config, and Agent Vault rewrites matching occurrences in the URL path or query string before forwarding. The validatePlaceholder function in broker.go enforces minimum length (4 chars) and requires either a __ prefix or a non-word boundary character — specifically to prevent bare identifiers like account_sid from being used as placeholders and accidentally matching real URL segments.

Credentials at rest use AES-256-GCM in internal/crypto/crypto.go. The master password wraps a random data encryption key (DEK) through Argon2id in kdf.go with default params of 3 iterations, 64 MiB memory, 4 threads — reasonable defaults, nothing exotic. Rotating the master password re-wraps only the DEK, not every individual credential, which is the right call.

For true isolation, there's a --isolation=container flag on agent-vault run. It spins up a Docker container with a per-invocation bridge network and iptables rules that force all egress through the proxy. The network is labeled agent-vault-isolation=1 so the pruner can clean up orphaned networks later. The internal/isolation/docker.go code is careful to reject user --mount flags that would land on reserved paths like /usr/local/sbin/init-firewall.sh — the firewall script is the trust anchor, and overwriting it pre-entrypoint would undo the egress restriction.

The dependency list is lean: Cobra for CLI, Charmbracelet's Huh for interactive prompts, SQLite via modernc.org/sqlite (pure Go, no CGO), and golang.org/x/crypto for Argon2. No external secrets backend required — it's self-contained by default.

Using it

Start the server and wrap your agent:

agent-vault server -d
agent-vault run -- claude

Agent Vault creates a session, sets HTTPS_PROXY=https://localhost:14322, injects the CA cert into SSL_CERT_FILE, NODE_EXTRA_CA_CERTS, REQUESTS_CA_BUNDLE, CURL_CA_BUNDLE, and a few others — covering Python, Node, curl, Go, and Deno without any agent-side SDK. The agent calls fetch("https://api.openai.com/v1/...") as normal; the proxy handles auth.

For sandboxed agents running inside Docker or E2B, the TypeScript SDK lets you create sessions and build the proxy env for the container:

import { AgentVault, buildProxyEnv } from "@infisical/agent-vault-sdk";

const av = new AgentVault({ token: "YOUR_TOKEN", address: "http://localhost:14321" });
const session = await av.vault("default").sessions.create({ vaultRole: "proxy" });
const env = buildProxyEnv(session.containerConfig!, "/etc/ssl/agent-vault-ca.pem");
// pass env + mount the CA cert into your sandbox

The buildProxyEnv call hands back the full map of env vars to inject — every runtime covered in one call.

Rough edges

This is labeled Preview, and it shows. The project sits at roughly 800 stars and was published only recently; the API is explicitly subject to change.

Container isolation is Claude-only right now. The --share-agent-dir flag that bind-mounts ~/.claude is hardcoded to the Claude agent directory. Other agents need to bring their own setup.

HTTP/2 is explicitly disabled (ForceAttemptHTTP2: false in the upstream transport). Streaming APIs that rely on HTTP/2 multiplexing would need HTTP/1.1 chunked transfer instead — works, but worth knowing.

The trust model is trust-the-proxy. If the proxy process itself is compromised, credentials are exposed. That's the inherent tradeoff with any credential broker, but the current docs are thin on threat-model discussion beyond the README bullets. A SECURITY.md exists but is brief.

No tests in cmd/ beyond a handful of unit tests for specific helpers. The internal packages have reasonable coverage — broker_test.go, proxy_test.go, the isolation tests — but the CLI layer is largely untested.

Bottom line

Agent Vault solves a real problem and the core proxy architecture is solid. If you're running local AI agents that call external APIs and you want them to never see their own credentials, this is the cleanest approach I've seen. The container isolation mode makes it particularly useful for running coding agents you don't fully trust.

Infisical/agent-vault on GitHub