A documentation site built with Nextra that catalogs prompting techniques for LLMs — from zero-shot basics to Tree of Thoughts, RAG, ReAct, and agent patterns — with paper references, prompt/output examples, and Jupyter notebooks.
Why I starred it
In early 2023 there was a wave of "prompt tips" threads on Twitter and scattered blog posts, but nothing organized by technique with actual paper citations behind it. This repo hit Hacker News #1 in February 2023 because it filled that gap — a structured reference rather than a listicle.
The taxonomy holds up. Instead of dumping a flat list of tips, the content is split into Techniques, Applications, Models, Risks, and a Prompt Hub. That structure makes it useful as a lookup resource rather than something you read once and forget.
How it works
The repo is a Nextra docs site — Next.js 13 with nextra-theme-docs. The architecture is mostly a content question: how do you manage hundreds of MDX files across 14 languages while keeping navigation consistent?
The answer is _meta.*.json files per directory. Each locale has its own metadata file defining sidebar order and labels. Open pages/techniques/_meta.en.json and you see a flat object mapping route keys to display names — Nextra reads these to build the sidebar. Each technique then lives in pages/techniques/cot.en.mdx, cot.zh.mdx, etc.
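To make the shape concrete, a `_meta.en.json` for the techniques directory would look something like this (a sketch — the keys and labels here are illustrative, not copied from the repo):

```json
{
  "zeroshot": "Zero-shot Prompting",
  "fewshot": "Few-shot Prompting",
  "cot": "Chain-of-Thought Prompting",
  "tot": "Tree of Thoughts"
}
```

The order of keys is the sidebar order, so translators only touch labels, never structure.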
The next.config.js wires it together with standard Next.js i18n:
module.exports = withNextra({
i18n: {
locales: ['en', 'zh', 'jp', 'pt', 'tr', 'es', 'it', 'fr', 'kr', 'ca', 'fi', 'ru', 'de', 'ar'],
defaultLocale: 'en',
},
// ...
})
The middleware is a single re-export: export { locales as middleware } from 'nextra/locales'. Nextra handles locale detection and redirects entirely; the repo contains no routing logic of its own.
One feature I didn't expect to find: a CopyPageDropdown component (components/CopyPageDropdown.tsx) that adds a "Copy page" button to every English content page. Click it and you get three options — copy as Markdown, Open in Claude, or Open in ChatGPT. The "Open in Claude" path constructs a prompt like:
Read from https://www.promptingguide.ai/techniques/cot.md so I can ask questions about it.
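A plausible sketch of how such a dropdown builds its links — note the target URL schemes (`claude.ai/new?q=`, `chatgpt.com/?q=`) and the helper name are my assumptions, not lifted from the component:

```typescript
// Hypothetical helper: build a deep link that hands the page prompt to an assistant.
// The URL entry points below are assumed, not verified against CopyPageDropdown.tsx.
function buildAssistantLink(assistant: 'claude' | 'chatgpt', pagePath: string): string {
  const prompt = `Read from https://www.promptingguide.ai${pagePath}.md so I can ask questions about it.`;
  const base =
    assistant === 'claude'
      ? 'https://claude.ai/new?q='   // assumed entry point
      : 'https://chatgpt.com/?q=';   // assumed entry point
  return base + encodeURIComponent(prompt);
}
```

The interesting part is less the URL than the `.md` suffix: the site serves raw Markdown alongside the rendered page, so the assistant can fetch clean source.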
The backend for the Markdown copy is pages/api/getPageContent.ts. It takes a pagePath query param, fetches the raw .en.mdx directly from the repo's main branch on GitHub, then strips imports, export statements, and frontmatter with a few regex passes before returning the cleaned content:
content = content.replace(/^import\s+.*?from\s+['"].*?['"];?\s*$/gm, '');
content = content.replace(/^export\s+.*?;?\s*$/gm, '');
content = content.replace(/^---\s*\n[\s\S]*?\n---\s*\n/m, '');
Practical, if a bit brittle — anything that doesn't match those patterns stays in. But it works for the use case: getting clean Markdown into an LLM context.
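Pulled out of the handler, the three passes form a small standalone function — the regexes below are the ones quoted above; the function name and sample input are mine:

```typescript
// Strip MDX boilerplate the way getPageContent.ts does: single-line imports,
// single-line exports, then a leading frontmatter block.
function stripMdxBoilerplate(raw: string): string {
  let content = raw;
  content = content.replace(/^import\s+.*?from\s+['"].*?['"];?\s*$/gm, ''); // imports
  content = content.replace(/^export\s+.*?;?\s*$/gm, '');                   // exports
  content = content.replace(/^---\s*\n[\s\S]*?\n---\s*\n/m, '');            // frontmatter
  return content.trim();
}

// Illustrative input, not an actual page from the repo:
const sample = [
  '---',
  'title: Chain-of-Thought',
  '---',
  "import {Callout} from 'nextra-theme-docs'",
  '',
  '## Chain-of-Thought Prompting',
  '',
  'Some body text.',
].join('\n');
```

Running `stripMdxBoilerplate(sample)` leaves only the heading and body — and also shows the brittleness: an import statement wrapped across two lines would survive all three passes.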
The actual technique pages are thorough. The CoT page (pages/techniques/cot.en.mdx) shows the original few-shot CoT example from Wei et al. (2022), then zero-shot CoT (just appending "Let's think step by step"), then Auto-CoT. Each section links to the source paper. That's the real value — the curation of which papers to cite and how to sequence the examples.
Using it
The site runs on promptingguide.ai. If you want to run it locally:
git clone https://github.com/dair-ai/Prompt-Engineering-Guide
cd Prompt-Engineering-Guide
pnpm i
pnpm dev
# http://localhost:3000
The notebook for the original lecture is at notebooks/pe-lecture.ipynb — it covers the basics with runnable OpenAI API calls. More useful as a teaching artifact than a production reference, but the progression is clean.
For actual prompting patterns, the pages are the content. The CoT technique page shows this zero-shot example directly:
Prompt: I went to the market and bought 10 apples. I gave 2 to the neighbor and 2 to the repairman.
I then bought 5 more and ate 1. How many apples do I have left?
Without CoT: 11 (incorrect)
With "Let's think step by step":
I started with 10 apples.
I gave away 2 + 2 = 4. I had 10 - 4 = 6.
I bought 5 more: 6 + 5 = 11.
I ate 1: 11 - 1 = 10 apples. (correct)
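The mechanics of zero-shot CoT really are that small — it is a suffix on the prompt, nothing more. As a sketch (the wrapper function is mine; the trigger phrase is the one from Kojima et al. that the page documents):

```typescript
// Zero-shot CoT: append the trigger phrase from Kojima et al. (2022)
// to any question before sending it to the model.
function withZeroShotCot(question: string): string {
  return `${question}\n\nLet's think step by step.`;
}
```

Whatever client library you use, the transformation happens before the API call; there is no model-side switch to flip.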
Rough edges
The content authoring model has a scaling problem. Every new technique requires creating the .en.mdx plus translations for 13 other locales. Looking at the file listing, some newer technique files (meta-prompting.en.mdx, reflexion.en.mdx) exist only in English — no translation variants. The sidebar metadata files will need updating each time a new page is added, and it's not obvious which locales are behind.
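Detecting that drift is mechanical, given the `base.locale.mdx` naming convention. A sketch of a checker that works over a directory listing (the function and filenames are illustrative; the repo has no such script):

```typescript
// Given MDX filenames like "cot.en.mdx", report which locales each page lacks.
function missingLocales(files: string[], locales: string[]): Record<string, string[]> {
  const pages = new Map<string, Set<string>>();
  for (const f of files) {
    const m = f.match(/^(.+)\.([a-z]{2})\.mdx$/); // base name + two-letter locale
    if (!m) continue;
    const [, base, locale] = m;
    if (!pages.has(base)) pages.set(base, new Set());
    pages.get(base)!.add(locale);
  }
  const report: Record<string, string[]> = {};
  for (const [base, have] of pages) {
    const missing = locales.filter((l) => !have.has(l));
    if (missing.length > 0) report[base] = missing; // only incomplete pages
  }
  return report;
}
```

Run over pages/techniques with the 14-locale list from next.config.js, this would surface exactly the English-only stragglers mentioned above.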
The package.json still pins next: "^13.5.6" and React 18; the site is three major Next.js versions behind at this point. There is no test suite. The pages/api/getPageContent.ts regex cleanup is naive: the patterns are anchored to single lines, so a multi-line import or export statement passes through untouched.
The repo's package.json has author: "Shu Ding" — it was bootstrapped from the Nextra docs template and the field was never updated. Small detail, but it tells you where the scaffolding came from.
Commit activity is still alive (last commit March 2026), mostly UI updates and new course promotions. The core content additions have slowed compared to the 2023 velocity, which makes sense — the foundational techniques don't change that fast.
Bottom line
Use this as a structured reference when you need to understand what a specific technique (CoT, ToT, ReAct, RAG, APE) actually does and where it comes from. The paper citations are the killer feature — you can follow a chain from a technique to the original research in one click. If you're building prompt logic into a product, this is a faster orientation than reading papers directly.
