Toprank: SEO and Google Ads Skills for Claude Code

April 11, 2026

repo-review

by Florian Narr


Toprank is a Claude Code plugin that gives your agent direct read/write access to Google Search Console and Google Ads — structured as a set of SKILL.md files that tell Claude how to pull data, where to look, and what to do with what it finds.

Why I starred it

The common pattern for "AI + marketing data" is a dashboard that uses an LLM to explain numbers in natural language. Toprank doesn't do that. It gives Claude the actual API access and tells it to reason over live data and then act — pausing keywords, adjusting bids, writing structured data back to a CMS. That's a different posture.

What caught my eye was the architecture decision to encode domain knowledge as instruction documents rather than code. Each skill is essentially a spec written for Claude: here's what to check, here's the order, here's when to stop and ask.

How it works

Each skill lives in its own directory with a SKILL.md at the root, plus optional references/, scripts/, and evals/. The plugin.json inside .claude-plugin/ lists explicit skill paths — there's no autodiscovery, so adding a new skill requires a manual registration step (a step they actually missed in v0.11.4, according to the changelog entry that records the fix).
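
Because registration is manual, a drift check is the obvious guard. A minimal sketch of one — assuming plugin.json holds a "skills" array of directory paths, which is my guess at the manifest shape, not confirmed from the repo:

```python
import json
from pathlib import Path

def unregistered_skills(repo_root: str) -> list[str]:
    """Return skill directories that contain a SKILL.md but are missing
    from .claude-plugin/plugin.json — the exact slip shipped in v0.11.4.
    Assumes a "skills" array of paths in the manifest (hypothetical)."""
    root = Path(repo_root)
    manifest = json.loads((root / ".claude-plugin" / "plugin.json").read_text())
    registered = {entry.lstrip("./") for entry in manifest.get("skills", [])}
    on_disk = {str(p.parent.relative_to(root)) for p in root.rglob("SKILL.md")}
    return sorted(on_disk - registered)
```

Run in CI, this would have turned the v0.11.4 changelog entry into a failing test instead.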

The GSC integration is the most technically substantial piece. seo/seo-analysis/scripts/analyze_gsc.py is a 607-line Python script that makes nine independent API calls to Search Console, runs them concurrently via ThreadPoolExecutor, and outputs a structured JSON blob the skill then feeds to Claude:

tasks = {
    "summary":      lambda: pull_summary(token, args.site, start, end),
    "queries":      lambda: pull_top_queries(token, args.site, start, end),
    "pages":        lambda: pull_top_pages(token, args.site, start, end),
    "buckets":      lambda: pull_position_buckets(token, args.site, start, end),
    "comparison":   lambda: pull_period_comparison(token, args.site, 28),
    "devices":      lambda: pull_device_split(token, args.site, start, end),
    "countries":    lambda: pull_country_split(token, args.site, start, end),
    "search_types": lambda: pull_search_type_split(token, args.site, start, end),
    "qp_rows":      lambda: pull_query_page_rows(token, args.site, start, end),
}

results = {}
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    futures = {pool.submit(fn): name for name, fn in tasks.items()}
    for future in as_completed(futures):  # gather each payload as it lands
        results[futures[future]] = future.result()

The parallelization comment in the source says this cuts wall-clock time from nine sequential round-trips down to the latency of the slowest single call. It's stdlib-only — no httpx, no aiohttp.

The Google Ads side works differently. Instead of custom scripts, it routes through an MCP server (adsagent.org) auto-configured via .mcp.json:

{
  "mcpServers": {
    "adsagent": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://adsagent.org/api/mcp",
               "--transport", "http-first",
               "--header", "Authorization:Bearer ${ADSAGENT_API_KEY}"]
    }
  }
}

The preamble.md shared across all ads skills handles the full connection flow: API key verification, config resolution from three locations (project root, Claude project dir, global fallback), MCP server detection using dynamic prefix matching, and account selection. The prefix detection is worth noting — it scans available tools for anything ending in listConnectedAccounts rather than hardcoding mcp__adsagent__, which means the same skill works whether you're running Claude Code CLI or Claude Desktop.
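
The suffix-match idea, as a standalone sketch (function name mine; the real logic lives in preamble.md instructions, not Python):

```python
def detect_tool_prefix(tool_names, suffix="listConnectedAccounts"):
    """Derive the MCP tool-name prefix by scanning for a known suffix
    instead of hardcoding 'mcp__adsagent__' — so the same skill works
    whatever prefix the host environment assigns."""
    for name in tool_names:
        if name.endswith(suffix):
            return name[: -len(suffix)]
    return None  # MCP server not connected; preamble falls back to setup flow
```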

The ads-audit skill persists a business-context.json after the first run — business name, industry, competitors, keyword landscape, brand voice — so downstream skills (ads, ads-copy, ads-landing) don't repeat the account discovery. That's a sensible shared-state pattern for a multi-skill agent workflow.
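
The pattern reduces to load-or-discover-once. A minimal sketch, with field names taken from the post but the helper itself my assumption:

```python
import json
from pathlib import Path

def load_business_context(path="business-context.json", discover=None):
    """Return the shared context written by ads-audit; if it doesn't exist
    yet, run discovery once and persist it so downstream skills (ads,
    ads-copy, ads-landing) skip account discovery entirely."""
    p = Path(path)
    if p.exists():
        return json.loads(p.read_text())
    context = discover() if discover else {}
    p.write_text(json.dumps(context, indent=2))
    return context
```

The important property is idempotence: every skill can call it unconditionally, and the expensive discovery runs exactly once per project.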

The gemini skill is an odd but deliberate addition: it shells out to the Gemini CLI to get a second opinion on Google Ads or SEO decisions, specifically because Gemini has native Google ecosystem knowledge. The implementation just checks whether gemini is on the PATH before proceeding.

Using it

Install via the Claude Code plugin marketplace:

/plugin marketplace add nowork-studio/toprank
/plugin install toprank@nowork-studio

For Google Ads you need a free API key from adsagent.org. For Search Console, the script uses gcloud auth application-default login with the webmasters scope — standard ADC flow.

/toprank:ads-audit
/toprank:seo-analysis https://example.com
/toprank:ads  review last week's changes

The ads skill distinguishes between direct-action requests ("pause keyword X") and change-impact reviews ("how are my changes doing?") and routes differently for each. Direct actions require confirmation before writes; review requests pull before/after metrics and score recent changes as wins, losses, or too-new-to-judge.
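
As a toy, the routing might reduce to intent classification on the request text. The keyword heuristic below is entirely mine — the real skill routes via its SKILL.md instructions, not code:

```python
ACTION_VERBS = ("pause", "enable", "set", "raise", "lower", "add", "remove")

def route_ads_request(text: str) -> str:
    """Split requests into the two shapes the ads skill distinguishes:
    direct actions (confirm, then write) vs. change-impact reviews
    (pull before/after metrics and score recent changes)."""
    lowered = text.lower()
    if any(lowered.startswith(v) or f" {v} " in lowered for v in ACTION_VERBS):
        return "direct-action"  # requires confirmation before any write
    return "change-review"      # score changes: win / loss / too-new-to-judge
```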

Rough edges

The repo has an LLM-judge eval suite (test/test_skill_llm_eval.py) that uses Gemini Flash as a judge to assess SKILL.md quality — which is an unusual but reasonable approach given the codebase is mostly instruction documents rather than code. The tests are gated behind EVALS=1 and currently only cover a subset of skills. Unit tests exist in test/unit/ but the coverage is thin.

Documentation is functional but sparse. The preamble.md files are the most detailed writing in the repo; individual skill SKILL.md files tend to be terse on error handling. The ads skills also carry a policy-freshness check — they read a policy-registry.json and flag potentially stale Google Ads policy assumptions — but how those entries get updated in practice isn't explained anywhere.

The Google Ads MCP dependency (adsagent.org) is a third-party service you don't control. There's a Google Ads MCP alternative listed, but the skills are built around AdsAgent's specific tool shape. Switching would require rework.

Bottom line

If you're running Google Ads or monitoring organic search and already live in Claude Code, this is a direct upgrade over checking dashboards manually. The engineering is solid where it matters — the GSC data pipeline and the shared-preamble pattern are both thoughtfully built — though the MCP service dependency is something to weigh before putting it in a production workflow.

nowork-studio/toprank on GitHub