gsc is a single static binary that wraps the Google Search Console API v1, the Chrome UX Report API, and PageSpeed Insights. We built this at Klixpert because every time we needed GSC data in a script or an agent, we ended up copy-pasting OAuth boilerplate. This is the tool we wanted to exist.
What it does
Query GSC properties, inspect URLs, manage sitemaps, and pull Core Web Vitals data — all from the terminal. JSON by default, CSV and table output on request, TTY auto-detected so you don't get ANSI garbage in pipes.
Why I starred it
GSC has an API. Almost nobody uses it directly, because the auth setup is annoying and the raw responses are verbose. The alternatives are either paid SaaS or brittle Python wrappers that break whenever Google rotates a field name.
What caught my eye here was the design constraint: every response ships inside a consistent JSON envelope with cache metadata attached, and every error is a typed JSON object on stderr with a machine-readable code and exit code. That's a tool built to be composed, not just used interactively.
The other thing: it ships a SKILL.md that teaches LLM coding agents how to drive it safely — quota awareness, exit codes, the JSON envelope format. That's not a readme afterthought; it's a first-class interface for AI workflows.
How it works
The entry point in cmd/gsc/main.go is nine lines. It loads config, fires off a background goroutine for auto-update (non-blocking, recover()-protected so it can never crash the main process), then hands off to internal/cmd.
The cache in internal/cache/cache.go is a flat-file TTL store. Keys are SHA-256 hashes of command path + normalized args + property + identity — so the same query from different working directories hits the same cache entry. It uses a two-level directory structure (first two hex chars as a directory prefix) to avoid inode pressure at scale. Writes are atomic: it stages to a .tmp file and renames, so a crash mid-write can't corrupt a cached entry.

// cache.go: key derivation
func Key(cmdPath string, args []string, property, identity string) string {
	sorted := append([]string(nil), args...)
	sort.Strings(sorted)
	h := sha256.New()
	h.Write([]byte(cmdPath))
	h.Write([]byte{0})
	for _, a := range sorted {
		h.Write([]byte(a))
		h.Write([]byte{0})
	}
	h.Write([]byte(property))
	h.Write([]byte{0})
	h.Write([]byte(identity))
	return hex.EncodeToString(h.Sum(nil))
}
The quota store in internal/quota/quota.go is worth reading. It uses flock (platform-specific: lock_unix.go / lock_windows.go) on the quota file before every read-modify-write — necessary because gsc urls inspect reading from stdin can run many concurrent requests. The rolling QPM window for Search Analytics is in-memory (acceptable per the PRD comment: "best-effort across restarts"), but daily URL Inspection counts are durable. Quota resets are keyed to America/Los_Angeles timezone, which is what Google uses.
// quota.go: rolling QPM window
func (s *Store) BumpSA() error {
	s.mu.Lock()
	now := time.Now()
	cutoff := now.Add(-60 * time.Second)
	kept := s.saEvents[:0]
	for _, t := range s.saEvents {
		if t.After(cutoff) {
			kept = append(kept, t)
		}
	}
	kept = append(kept, now)
	s.saEvents = kept
	rate := len(kept)
	s.mu.Unlock()
	// ...
}
The error system in internal/errs/errs.go maps typed error codes to specific exit codes — auth errors are 2, quota/rate errors are 3, not-found is 4, validation is 5, network is 6. Every error renders as a JSON line on stderr even in CSV output mode, so you can always 2>errors.json and parse failures separately from data.
Auto-update in internal/update/update.go is fully self-contained: it checks GitHub releases at most once per 24 hours (throttled via a state file), verifies SHA-256 against a checksums.txt artifact, then does an atomic rename swap of the running binary. It detects managed installs (Homebrew, snap, etc.) and skips. The whole thing runs in a goroutine with defer recover() — it cannot affect the main process regardless of what GitHub returns.
Using it
# Top 50 queries, mobile, last 28 days
gsc analytics query sc-domain:example.com \
  --dimensions query --filter device=MOBILE --limit 50
# Auto-paginate past the 25k row cap, stream to CSV
gsc analytics query sc-domain:example.com \
  --dimensions query,page --all --output csv > queries.csv
# Bulk URL inspection from stdin
cat urls.txt | gsc urls inspect sc-domain:example.com -
# CWV triage — fails CI when any metric is poor
gsc cwv https://example.com/pricing --fail-on poor
# Check remaining quota
gsc quota
The JSON envelope is consistent across every command:
{
  "data": { "...": "..." },
  "meta": {
    "cached": true,
    "cached_at": "2026-04-15T14:30:00Z",
    "ttl_remaining_sec": 543,
    "api_calls": 0
  }
}
api_calls: 0 on a cache hit means you can run the same query in a loop (say, from an agent polling for changes) without burning quota.
Rough edges
The GCP setup is genuinely tedious. You need an OAuth client for GSC/PSI and a separate API key for CrUX — two different credential flows for one tool. The README explains this thoroughly, but there's no escaping the five-tab Google Cloud Console dance before you can run a single command.
Test coverage is absent. I didn't find any *_test.go files in the source tree — the entire test surface is manual. For a tool that interacts with a quota-limited API, that's a gap. The quota logic in particular (flock, date rollover, rolling windows) is exactly the kind of code that benefits from unit tests.
The skill file for LLM agents lives at skills/gsc-cli as a directory — npx skills add installs it, but the format is specific to tools that support the skills protocol. If your agent doesn't, you get nothing.
Bottom line
If you're scripting GSC data pulls, building SEO pipelines, or feeding search performance data into an LLM agent, this is the right abstraction. The cache and quota tracking alone save you from the quota exhaustion issues that make raw API usage painful at scale.
