Vercel Agent Skills: Curated Rules for AI Coding Agents

January 15, 2026

repo-review

by Florian Narr

What it does

agent-skills is Vercel's official collection of skills for AI coding agents. Each skill is a self-contained package — a SKILL.md file with instructions, optional scripts, and reference docs — that agents like Claude Code or Cursor load on demand. Six skills ship today: React performance rules, web design guidelines, React Native patterns, view transitions, composition patterns, and a deploy-to-Vercel workflow.

Why I starred it

Most "awesome lists" for AI agents are just link dumps. This repo is different — it's structured knowledge designed to be consumed by machines. Each skill follows a strict format with frontmatter metadata, trigger phrases, and progressive disclosure so agents only load what they need. The React best practices skill alone packs 69 rules across 8 categories, prioritized by impact. That's not a blog post repackaged. That's an engineering team's internal playbook turned into machine-readable instructions.

How it works

The repo's architecture splits into two layers: the skills themselves and the build tooling that compiles them.

Skill structure

Each skill lives in skills/<name>/ with a consistent layout:

skills/react-best-practices/
  SKILL.md       # Agent-facing instructions with frontmatter
  AGENTS.md      # Full compiled ruleset (generated)
  metadata.json  # Version, references, abstract
  rules/         # Individual rule files
  README.md      # Human-facing docs

The SKILL.md files use YAML frontmatter with name, description, and metadata fields. Agents read the description first to decide whether to activate the skill — the full content only loads when relevant. This is the progressive disclosure pattern mentioned in AGENTS.md at the repo root: "Keep SKILL.md under 500 lines — put detailed reference material in separate files."
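A minimal SKILL.md header along those lines might look like this — the field values below are illustrative, not copied from the repo:

```yaml
---
# Illustrative values — not the repo's actual frontmatter
name: react-best-practices
description: >
  Performance rules for React and Next.js apps. Activate when the user
  asks to review or optimize React components.
---
```

The agent scans the description to decide on activation; the body below the frontmatter only gets read once the skill triggers.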

The build pipeline

The most interesting part is in packages/react-best-practices-build/. A custom TypeScript pipeline reads individual rule markdown files, parses their frontmatter and content structure, validates them, then compiles everything into a single AGENTS.md.
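At a high level, that compile step can be sketched in memory like this. The frontmatter keys (`title`, `impact`) and the overall shape are my own illustration, not the package's actual API:

```typescript
// In-memory sketch of the compile step: parse frontmatter, pull out the
// rule body, and concatenate everything into one markdown document.
// The real parser and validator are far more thorough.
type ParsedRule = { title: string; impact: string; body: string }

function parseRule(raw: string): ParsedRule {
  // Frontmatter sits between the first pair of '---' lines.
  const m = raw.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/)
  if (!m) throw new Error('missing frontmatter')
  const meta = Object.fromEntries(
    m[1].split('\n').map(line => line.split(/:\s*/, 2) as [string, string])
  )
  return { title: meta['title'], impact: meta['impact'], body: m[2].trim() }
}

function compile(rules: string[]): string {
  return rules
    .map(parseRule)
    .map(r => `## ${r.title} (${r.impact})\n\n${r.body}`)
    .join('\n\n')
}

const agentsMd = compile([
  '---\ntitle: Parallelize async work\nimpact: CRITICAL\n---\nUse Promise.all.',
  '---\ntitle: Memoize stable props\nimpact: MEDIUM\n---\nAvoid inline objects.',
])
```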

The parser in packages/react-best-practices-build/src/parser.ts does the heavy lifting. It extracts frontmatter, finds code examples by label pattern (**Incorrect:**, **Correct:**), and maps rules to sections based on filename prefixes:

// parser.ts — section inference from filename
// effectiveSectionMap maps filename prefixes to section indices
const filenameParts = filename.replace('.md', '').split('-')
let section = 0 // default section if no prefix matches

// Longest prefix first: drop one segment at a time until a match is found
for (let len = filenameParts.length; len > 0; len--) {
  const prefix = filenameParts.slice(0, len).join('-')
  if (effectiveSectionMap[prefix] !== undefined) {
    section = effectiveSectionMap[prefix]
    break
  }
}

A file named async-parallel.md gets mapped to section 1 (Eliminating Waterfalls) because the async prefix resolves to section 1 in the default section map. The longest-prefix-first matching means a hypothetical list-performance-virtual.md would try list-performance-virtual, then list-performance, then list before falling back.
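That longest-prefix-first behavior is easy to demonstrate with a standalone sketch — the section map entries here are illustrative, not the repo's actual map:

```typescript
// Illustrative prefix → section map, not the repo's real one.
// More specific prefixes win because we try the longest match first.
const sectionMap: Record<string, number> = {
  'async': 1,            // Eliminating Waterfalls
  'list': 5,             // hypothetical section
  'list-performance': 6, // hypothetical, more specific than 'list'
}

function inferSection(filename: string): number {
  const parts = filename.replace('.md', '').split('-')
  // Try the full name, then drop one trailing segment at a time.
  for (let len = parts.length; len > 0; len--) {
    const prefix = parts.slice(0, len).join('-')
    if (sectionMap[prefix] !== undefined) return sectionMap[prefix]
  }
  return 0 // fallback section
}

console.log(inferSection('async-parallel.md'))           // 1
console.log(inferSection('list-performance-virtual.md')) // 6, not 5
```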

Validation

The validator in validate.ts enforces structure: every rule needs a title, an explanation, at least one code example, and a valid impact level from CRITICAL down to LOW. It explicitly checks for both "incorrect" and "correct" examples — rules aren't just advice, they're before/after transformations.

// validate.ts — does the rule include an "incorrect" example?
const hasBad = codeExamples.some(e =>
  e.label.toLowerCase().includes('incorrect') ||
  e.label.toLowerCase().includes('wrong') ||
  e.label.toLowerCase().includes('bad')
)

This runs in CI via GitHub Actions on every push to main that touches the rules or build tooling.
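The structural checks described here can be condensed into a self-contained sketch. The types and error messages are illustrative, not the repo's actual validate.ts:

```typescript
// Condensed sketch of the structural checks, with illustrative types.
type CodeExample = { label: string; code: string }
type Rule = { title: string; explanation: string; impact: string; codeExamples: CodeExample[] }

const IMPACT_LEVELS = ['CRITICAL', 'HIGH', 'MEDIUM', 'LOW']
const BAD_WORDS = ['incorrect', 'wrong', 'bad']

// "incorrect" contains "correct" as a substring, so classify bad labels first.
const isBad = (label: string) => BAD_WORDS.some(w => label.toLowerCase().includes(w))
const isGood = (label: string) => !isBad(label) && label.toLowerCase().includes('correct')

function validateRule(rule: Rule): string[] {
  const errors: string[] = []
  if (!rule.title) errors.push('missing title')
  if (!rule.explanation) errors.push('missing explanation')
  if (rule.codeExamples.length === 0) errors.push('needs at least one code example')
  if (!IMPACT_LEVELS.includes(rule.impact)) errors.push(`invalid impact: ${rule.impact}`)
  if (!rule.codeExamples.some(e => isBad(e.label))) errors.push('missing an incorrect example')
  if (!rule.codeExamples.some(e => isGood(e.label))) errors.push('missing a correct example')
  return errors
}
```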

Rule quality

The individual rules are genuinely well-written. Take async-parallel.md — it's concise, shows the wrong pattern (sequential awaits), the fix (Promise.all), and tags the impact as CRITICAL with "2-10x improvement." No fluff. Each rule is a self-contained refactoring instruction an agent can apply mechanically.
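The before/after shape of that rule looks roughly like this — a paraphrase rather than the rule file's exact code, with hypothetical fetchUser/fetchPosts helpers standing in for real data fetches:

```typescript
// Hypothetical async helpers standing in for real data fetches.
const fetchUser = async (id: number) => ({ id, name: `user-${id}` })
const fetchPosts = async (id: number) => [`post-by-${id}`]

// Incorrect: sequential awaits create a waterfall — the second
// request doesn't start until the first one resolves.
async function profileSequential(id: number) {
  const user = await fetchUser(id)
  const posts = await fetchPosts(id)
  return { user, posts }
}

// Correct: independent requests start together and resolve in parallel.
async function profileParallel(id: number) {
  const [user, posts] = await Promise.all([fetchUser(id), fetchPosts(id)])
  return { user, posts }
}
```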

Using it

Installation is a single command:

npx skills add vercel-labs/agent-skills

Skills are then available automatically. The agent activates them based on context — ask it to "review my React component for performance" and the react-best-practices skill loads.

For manual setup, copy a skill directory into ~/.claude/skills/:

cp -r skills/react-best-practices ~/.claude/skills/

The web-design-guidelines skill takes a different approach — its SKILL.md is only 39 lines. Instead of bundling rules locally, it fetches the latest guidelines from a remote URL at runtime:

## Guidelines Source
Fetch fresh guidelines before each review:
https://raw.githubusercontent.com/vercel-labs/web-interface-guidelines/main/command.md

Always-fresh rules vs. local snapshots. Two valid strategies in one repo.

Rough edges

The repo is heavily focused on React and Vercel's ecosystem. If you're working with Vue, Svelte, or anything outside the Next.js orbit, there's nothing here for you yet. The repository also has no GitHub topics set, which makes discoverability harder than it should be for a repo with 24k+ stars.

The build tooling only covers three of the six skills (react-best-practices, react-native-skills, composition-patterns). The deploy-to-vercel and view-transitions skills are maintained by hand, with the zip files committed directly. No consistent build process across all skills.

There's also no test suite beyond the CI validation — no integration tests that actually feed rules to an agent and verify the output. The validator checks structure but can't catch a rule that gives bad advice.

Bottom line

If you're using AI coding agents with React or Next.js, install this. The rules are high-quality, the format is designed for machine consumption, and the build pipeline is a solid reference for anyone packaging knowledge for agents.

vercel-labs/agent-skills on GitHub