chrome-devtools-mcp is an MCP server maintained by the Chrome DevTools team that gives coding agents — Claude, Gemini, Cursor, Copilot — full access to Chrome's debugging APIs. Navigation, input automation, network inspection, performance tracing, Lighthouse audits, and memory snapshots, all exposed as MCP tools a language model can call directly.
Why I starred it
Most browser automation tools for agents are wrappers around Playwright or Puppeteer that expose click, navigate, and screenshot. Useful, but shallow. This one is maintained by the Chrome DevTools team itself and goes further: it embeds DevTools' own trace engine and exposes insights the browser already generates but normally only humans read in the Performance tab.
The --autoConnect flag in Chrome 144+ is the thing that really caught my attention. It lets the MCP server attach to your existing Chrome session without a separate debugging port — the browser shows a permission dialog and you click Allow. Your agent gets eyes on the browser you're already using, with no extra process to manage.
How it works
The architecture is cleaner than you'd expect for a tool covering this much surface area.
src/browser.ts handles connection lifecycle. When no --browserUrl or --wsEndpoint is provided, it reads DevToolsActivePort from the Chrome user data directory to find the debugging socket:
const portPath = path.join(userDataDir, 'DevToolsActivePort');
const fileContent = await fs.promises.readFile(portPath, 'utf8');
const [rawPort, rawPath] = fileContent.split('\n').map(line => line.trim()).filter(Boolean);
const browserWSEndpoint = `ws://127.0.0.1:${rawPort}${rawPath}`;
That's the same file Chrome writes when you enable remote debugging — the MCP server just reads it instead of making you pass a port manually.
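The same parsing works standalone, which is handy for checking that your Chrome actually wrote the file. This sketch is mine, not the repo's export; the two-line format (debugging port, then the browser target's WebSocket path) is what Chrome emits when remote debugging is on:

```typescript
// Standalone sketch of the DevToolsActivePort parsing. The helper name is
// mine. The file holds the port on line one and the browser WebSocket path
// on line two, e.g. "9222\n/devtools/browser/<id>".
function endpointFromActivePort(fileContent: string): string {
  const [rawPort, rawPath] = fileContent
    .split('\n')
    .map(line => line.trim())
    .filter(Boolean);
  if (!rawPort || !rawPath) {
    throw new Error('Malformed DevToolsActivePort file');
  }
  return `ws://127.0.0.1:${rawPort}${rawPath}`;
}
```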
The bridge between Puppeteer and DevTools' own frontend code lives in src/DevToolsConnectionAdapter.ts. The class PuppeteerDevToolsConnection implements the CDPConnection interface that DevTools' frontend uses internally, routing all CDP calls through a Puppeteer CDPSession. This means the server isn't reimplementing trace parsing or insight extraction — it's running DevTools' actual TypeScript modules, just without the UI:
send<T extends DevTools.CDPConnection.Command>(
  method: T,
  params: DevTools.CDPConnection.CommandParams<T>,
  sessionId: string | undefined,
): Promise<{ result: ... } | { error: ... }> {
  const session = this.#connection.session(sessionId);
  // routes the command through the Puppeteer CDPSession
}
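The adapter's core job is translating between two calling conventions: Puppeteer's CDPSession rejects its promise on protocol errors, while the DevTools-side interface expects a result-or-error union value. A minimal sketch of that translation, where everything except the shape of CDPSession.send is my assumption:

```typescript
// PuppeteerLikeSession mirrors the slice of Puppeteer's CDPSession.send()
// used here; the rest of the names are illustrative.
interface PuppeteerLikeSession {
  send(method: string, params?: object): Promise<unknown>;
}

type CDPOutcome = {result: unknown} | {error: {message: string}};

async function routeCommand(
  session: PuppeteerLikeSession,
  method: string,
  params?: object,
): Promise<CDPOutcome> {
  try {
    // Puppeteer throws on CDP errors; DevTools wants them back as values.
    return {result: await session.send(method, params)};
  } catch (err) {
    return {error: {message: err instanceof Error ? err.message : String(err)}};
  }
}
```

That error-to-value conversion is the whole trick: once it's in place, DevTools' trace and insight modules run unmodified on top of a Puppeteer transport.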
Performance trace analysis happens in src/trace-processing/parse.ts. It instantiates DevTools.TraceEngine.TraceModel.Model.createWithAllHandlers() — the same engine that powers Chrome DevTools' Performance panel — decodes the raw trace buffer, and extracts the insights object that contains pre-computed Core Web Vitals and bottleneck analysis:
const engine = DevTools.TraceEngine.TraceModel.Model.createWithAllHandlers();

export async function parseRawTraceBuffer(buffer) {
  engine.resetProcessor();
  // `data` is the decoded trace buffer: either a bare event array
  // or a trace file object with a `traceEvents` field
  const events = Array.isArray(data) ? data : data.traceEvents;
  await engine.parse(events);
  const parsedTrace = engine.parsedTrace();
  return { parsedTrace, insights: parsedTrace.insights };
}
Agents don't get a raw trace file — they get structured insight objects they can act on. That's the non-obvious part: the repo is shipping the DevTools analysis engine as a library, not just a debugging interface.
Page state is tracked per-tab in src/McpPage.ts. Each McpPage wraps a Puppeteer Page with snapshot state, emulation settings, dialog tracking, and an inPageTools slot for tools injected into the page context. The dialog handler is set up in the constructor and cleaned up on dispose, which is the kind of lifecycle management that's easy to get wrong and they got right.
The tool set is organized into categories (ToolCategory enum in src/tools/categories.ts) and you can toggle whole categories off at startup. --slim mode drops everything to three tools — navigate, evaluate script, screenshot — for cases where you want a lightweight browser handle without the full DevTools surface.
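A minimal invocation for that stripped-down mode, as described above (check --help for the category-toggle flags, whose exact names I haven't verified):

```shell
# Lightweight browser handle: navigate, evaluate script, screenshot only
npx chrome-devtools-mcp@latest --slim
```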
Using it
Add to your MCP config:
{
"mcpServers": {
"chrome-devtools": {
"command": "npx",
"args": ["-y", "chrome-devtools-mcp@latest"]
}
}
}
For Claude Code specifically:
claude mcp add chrome-devtools --scope user npx chrome-devtools-mcp@latest
Connecting to your running browser instead of spawning a new one:
npx chrome-devtools-mcp@latest --autoConnect
# or via remote debugging port:
npx chrome-devtools-mcp@latest --browserUrl=http://127.0.0.1:9222
Then ask the agent:
Check the performance of https://developers.chrome.com
The agent will call navigate_page, then performance_start_trace, then performance_stop_trace, then performance_analyze_insight — you watch the trace happen in real Chrome, and get back structured LCP/INP/CLS data the agent can reason about and suggest fixes for.
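That flow can be pictured as a sequence of MCP tool calls. The argument names below are illustrative guesses on my part, not the exact schema:

```json
[
  {"tool": "navigate_page", "arguments": {"url": "https://developers.chrome.com"}},
  {"tool": "performance_start_trace", "arguments": {"reload": true}},
  {"tool": "performance_stop_trace", "arguments": {}},
  {"tool": "performance_analyze_insight", "arguments": {"insightName": "LCPBreakdown"}}
]
```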
Rough edges
Usage statistics are on by default and phone home to Google. You opt out with --no-usage-statistics or by setting CHROME_DEVTOOLS_MCP_NO_USAGE_STATISTICS. That's reasonable but worth knowing before you run it in a work context.
It officially supports only Google Chrome and Chrome for Testing. Chromium works in practice but isn't guaranteed — the DevToolsActivePort parsing and the internal DevTools modules are tested against Chrome stable only.
The --experimental-vision flag enables coordinate-based tools like click_at(x,y), but the flag name signals the state accurately. It requires a computer-use model that can produce reliable screen coordinates, and the docs are thin on which models actually work well here.
Test coverage is present (tests/ directory exists) but I wouldn't call it comprehensive for a project at this scale. The trace processing path in particular is doing a lot of work with no apparent unit tests against known trace fixtures.
The security model deserves attention: the server exposes browser contents to whatever MCP client you connect it to. If you're running it against a browser session that has sensitive tabs open, you're giving your agent access to everything. The --isolated flag creates a fresh profile to address this, but you have to opt in.
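Given that, the safer default for agent work is to opt in to the fresh profile:

```shell
# Throwaway profile: the agent never sees your logged-in tabs
npx chrome-devtools-mcp@latest --isolated
```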
Bottom line
If you're building coding agents that need to interact with or debug web applications, this is the most complete browser tooling available for MCP. The performance trace pipeline — real DevTools analysis engine, not a Lighthouse wrapper — is genuinely useful and not something you'd want to reimplement.
