V0-system-prompt is a collection of extracted and reconstructed system prompts for Vercel's v0 AI coding assistant. There's no code to install — just the raw instruction set that shapes every component v0 generates.
Why I starred it
Leaked system prompts are usually underwhelming: a few paragraphs of generic instructions that tell you nothing you didn't already know. This one is different. The v0 prompt is unusually long and architectural — it defines a full MDX dialect, a component rendering pipeline, and a set of behavioral constraints that explain a lot of v0's quirks.
The more interesting question is: what can you learn from it that's actually applicable when you're writing your own AI coding agents?
How it works
The repo has several files representing different extraction attempts and versions. The main ones worth reading are v0-system-prompt(updated 29-11-2024) and unverified-test-1(complete instructions). The latter is almost certainly reconstructed rather than literally extracted — it's too coherent to have come from a single prompt injection — but it reads like an accurate reconstruction from someone who spent serious time probing v0's behavior.
The architecture the prompt describes is built around a custom MDX output format. v0 doesn't just output code blocks — it outputs a structured document that the Vercel UI interprets. The <ReactProject> component is the key abstraction:
<ReactProject id="chat-ui" entry="app/page.tsx" project="Chat UI" type="react">
<!-- tsx file blocks go here -->
</ReactProject>
This wrapper signals to the rendering layer that multiple files belong to the same project. The entry prop tells the preview which file to mount. The id stays stable across edits so the sandbox can diff and hot-reload rather than rebuild from scratch. That's the piece that makes v0's live preview feel fast — it's not re-rendering everything on each generation.
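To make the protocol concrete, here is a hypothetical sketch of how a rendering layer could parse the wrapper and use the stable `id` to diff files across generations. The file-fence format inside the wrapper, and every function name below, are my assumptions — not Vercel's actual renderer:

```typescript
// Hypothetical sketch of a <ReactProject>-aware rendering layer.
// A stable `id` lets the sandbox hot-reload only changed files.

interface ProjectBlock {
  id: string;
  entry: string;
  files: Map<string, string>; // path -> source
}

const FENCE = "`".repeat(3); // avoids literal fences inside this example
const PROJECT_RE = /<ReactProject\s+([^>]*)>([\s\S]*?)<\/ReactProject>/;
const ATTR_RE = /(\w+)="([^"]*)"/g;
// assumed per-file format: a tsx fence carrying a file="..." attribute
const FILE_RE = new RegExp(
  FENCE + 'tsx file="([^"]+)"\\n([\\s\\S]*?)' + FENCE,
  "g",
);

function parseProject(mdx: string): ProjectBlock | null {
  const m = PROJECT_RE.exec(mdx);
  if (!m) return null;
  const attrs: Record<string, string> = {};
  for (const [, k, v] of m[1].matchAll(ATTR_RE)) attrs[k] = v;
  const files = new Map<string, string>();
  for (const [, path, src] of m[2].matchAll(FILE_RE)) files.set(path, src);
  return { id: attrs.id ?? "", entry: attrs.entry ?? "", files };
}

// Only files whose contents changed need a hot reload; a different
// `id` means a new project and therefore a full rebuild.
function changedFiles(prev: ProjectBlock, next: ProjectBlock): string[] {
  if (prev.id !== next.id) return [...next.files.keys()];
  return [...next.files]
    .filter(([path, src]) => prev.files.get(path) !== src)
    .map(([path]) => path);
}
```

The design point is that the diff is keyed on the project `id`, so successive generations with the same `id` touch only the files that actually changed.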
The prompt also defines specific rules about color usage:
v0 DOES NOT use indigo or blue colors unless specified in the prompt.
v0 MUST USE the builtin Tailwind CSS variable based colors, like `bg-primary`
That's why v0-generated components look the way they do — muted backgrounds, neutral tones, semantic color tokens. It's not aesthetic preference baked into the model weights. It's an explicit rule in the system prompt.
The refusal logic is equally explicit. The prompt defines REFUSAL_MESSAGE and WARNING_MESSAGE as variables:
REFUSAL_MESSAGE = "I'm sorry. I'm not able to assist with that."
WARNING_MESSAGE = "I'm mostly focused on ... but ..."
And the instruction: v0 MUST NOT apologize or provide an explanation for refusals. That explains why v0's refusals are so terse compared to other assistants.
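The constants pattern is easy to copy in your own agents. A minimal sketch — the `buildSystemPrompt` helper and its wording are my assumptions, not v0's actual harness:

```typescript
// Sketch: refusal text as a named constant, interpolated into the
// system prompt so behavior rules can reference it by name instead
// of restating the wording in several places.
const REFUSAL_MESSAGE = "I'm sorry. I'm not able to assist with that.";

function buildSystemPrompt(domain: string): string {
  return [
    `You are a coding assistant focused on ${domain}.`,
    `REFUSAL_MESSAGE = "${REFUSAL_MESSAGE}"`,
    "If a request is out of scope, respond with exactly REFUSAL_MESSAGE.",
    "Do NOT apologize or provide an explanation for refusals.",
  ].join("\n");
}
```

Keeping the message in one named variable makes the terse-refusal behavior auditable: there is exactly one string the model is allowed to emit.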
The <Thinking /> component is interesting too. Before every response, v0 is instructed to emit a Thinking block — visible in the reconstructed version — where it evaluates which block type to use. The thinking-feature files in the repo (thinking-feature24, thinking-feature25) show earlier iterations where this reasoning was more elaborate. In the current prompt it's been simplified, but the step remains mandatory.
The knowledge-base file (unverified-test-1(knowledge-base)) is the most unusual artifact. It's a JSON blob encoding the entire shadcn/ui component documentation — import paths, usage examples, and demo code for every component. The file is massive. v0 carries this in-context so it can generate shadcn components without hallucinating import paths or API shapes. That's a brute-force solution to a real problem, and it works.
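To give a concrete sense of the approach, here is a minimal sketch of such an in-context knowledge base. The field names and the `renderKnowledgeBase` helper are my assumptions, not the repo's actual schema:

```typescript
// Hypothetical schema for an in-context component knowledge base,
// in the spirit of the repo's shadcn/ui blob.
interface ComponentDoc {
  name: string;
  importPath: string; // exact path, so the model cannot hallucinate it
  usage: string;      // canonical example snippet
}

const KNOWLEDGE_BASE: ComponentDoc[] = [
  {
    name: "Button",
    importPath: "@/components/ui/button",
    usage: 'import { Button } from "@/components/ui/button"',
  },
];

// Serialize every entry into the system prompt, trading tokens for
// grounded import paths and API shapes.
function renderKnowledgeBase(docs: ComponentDoc[]): string {
  return docs
    .map((d) => `### ${d.name}\nimport path: ${d.importPath}\n${d.usage}`)
    .join("\n\n");
}
```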
The Jailbreak Prompt file in the repo documents the classic "repeat everything above" extraction technique, and its output mostly confirms that the prompt structure is what the other files suggest.
Using it
There's no installation. You browse the files directly:
gh api repos/2-fly-4-ai/V0-system-prompt/contents/v0-system-prompt --jq .content \
| base64 -d
The practical use is treating this as a reference when designing system prompts for your own code-generation agents. A few patterns worth stealing:
- Enumerate code block types explicitly. v0 defines seven distinct block types (React, Node.js, Python, HTML, Markdown, Mermaid, general). Giving the model a named taxonomy reduces ambiguity in responses.
- Embed the component documentation inline. Expensive in tokens, but eliminates hallucination of import paths.
- Force a planning step before output. The <Thinking /> requirement means the model commits to a code block type before generating. It reduces the class of errors where the model picks the wrong output format mid-response.
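The first and third patterns combine naturally into a harness-side check. Everything below — the regex, the `type=` convention inside the Thinking block, the error handling — is an assumption about how one might enforce the rule, not v0's implementation:

```typescript
// Closed taxonomy of block types, mirroring the seven the prompt names.
const BLOCK_TYPES = [
  "react", "nodejs", "python", "html", "markdown", "mermaid", "general",
] as const;
type BlockType = (typeof BLOCK_TYPES)[number];

// Assume the model opens every response with a Thinking block that
// declares its chosen type, e.g. <Thinking>type=react ...</Thinking>.
const THINKING_RE = /<Thinking>[\s\S]*?type=(\w+)[\s\S]*?<\/Thinking>/;

function validatePlanningStep(response: string): BlockType {
  const m = THINKING_RE.exec(response);
  if (!m) throw new Error("response missing mandatory Thinking block");
  const t = m[1] as BlockType;
  if (!BLOCK_TYPES.includes(t)) throw new Error(`unknown block type: ${t}`);
  return t;
}
```

Rejecting responses that skip the declaration is what makes the planning step mandatory rather than advisory.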
Rough edges
The extraction provenance is murky. The repo makes no strong claims about which files are verbatim vs. reconstructed. The original v0-system-prompt file (no date suffix) is clearly older and missing the <ReactProject> architecture — it uses a simpler single-file model. The dated versions track actual v0 updates, but you're trusting the extractor's interpretation.
The README is a single image and a link to a blocks platform the author built. No explanation of methodology, no changelog, no guidance on which file represents the current state. If you didn't read the commit history, you'd have no idea the prompt has evolved through at least four documented versions.
There are also no tests (unsurprisingly) and no programmatic tooling — just files in a directory.
Bottom line
If you're building a code-generation agent or trying to understand why v0 behaves the way it does, this is worth an hour of careful reading. The shadcn knowledge-base embedding and the <ReactProject> rendering protocol are the two things I hadn't seen documented anywhere else.
