OpenUI: A Custom Language That Makes LLMs Generate UI 67% Cheaper

March 12, 2026

repo-review

by Florian Narr

What it does

OpenUI is a full-stack framework for generative UI — the idea that an LLM should output structured UI components, not just text. It provides a custom language (OpenUI Lang), a React runtime with built-in component libraries, and chat interfaces. The central bet: a purpose-built language is cheaper and more streamable than JSON.

Why I starred it

Every generative UI approach I've seen uses JSON. Vercel's AI SDK renders JSON objects. Anthropic's artifacts are HTML strings. OpenUI went a different direction — they designed a language from scratch. That's a bold move. The benchmark claims (up to 67% fewer tokens than Vercel's JSON format) are what caught my eye, but the real draw is the engineering: a hand-rolled lexer, a streaming parser that auto-closes brackets on partial input, and a reactive runtime with $bindings. This isn't a wrapper around existing tools. It's a full language implementation.

How it works

The core lives in packages/lang-core. It's a clean pipeline: source text goes through a lexer, then a statement splitter, then an expression parser, and finally into an AST that the runtime evaluates.

The language itself looks like this (from benchmarks/samples/contact-form.oui):

root = Stack([title, form], "column", "l")
title = TextContent("Contact Us", "large-heavy")
nameField = FormControl("Name", Input("name", "Your full name", "text", ["required", "minLength:2"]))

Arguments are positional, references are hoisted (you can use a name before defining it), and $variables give you reactive state. It reads like a declarative DSL, not like JSON soup.

The lexer

packages/lang-core/src/parser/lexer.ts is a hand-rolled single-pass tokenizer. No regex, no generator libraries — just a while loop walking through characters. It distinguishes between lowercase identifiers (references), PascalCase identifiers (component names), and $-prefixed state variables at the token level. The token types use a const enum so TypeScript inlines them to numeric values at compile time — zero runtime overhead for token discrimination.
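The approach is compact enough to sketch. Here is an illustrative single-pass tokenizer in the same spirit — a whitespace-skipping while loop with character-class checks instead of regex — not the package's actual code; the token kinds and function names are my own:

```typescript
// Sketch of a hand-rolled single-pass tokenizer, illustrating how lowercase
// references, PascalCase component names, and $-state variables can be
// distinguished at the token level. Names here are illustrative only.
type TokKind = "ref" | "component" | "state" | "string" | "punct";
interface Tok { kind: TokKind; text: string }

const isAlpha = (c: string) => (c >= "a" && c <= "z") || (c >= "A" && c <= "Z");
const isIdentChar = (c: string) => isAlpha(c) || (c >= "0" && c <= "9") || c === "_";

function tokenize(src: string): Tok[] {
  const tokens: Tok[] = [];
  let i = 0;
  while (i < src.length) {
    const c = src[i];
    if (c === " " || c === "\t" || c === "\n" || c === "\r") { i++; continue; }
    if (c === '"') {                         // string literal: scan to closing quote
      let j = i + 1;
      while (j < src.length && src[j] !== '"') j++;
      tokens.push({ kind: "string", text: src.slice(i + 1, j) });
      i = j + 1;
      continue;
    }
    if (c === "$" || isAlpha(c)) {           // identifier-like token
      let j = i + 1;
      while (j < src.length && isIdentChar(src[j])) j++;
      const kind: TokKind =
        c === "$" ? "state"                  // $-prefixed reactive state variable
        : c >= "A" && c <= "Z" ? "component" // PascalCase component name
        : "ref";                             // lowercase reference
      tokens.push({ kind, text: src.slice(i, j) });
      i = j;
      continue;
    }
    tokens.push({ kind: "punct", text: c }); // = ( ) [ ] , and friends
    i++;
  }
  return tokens;
}
```

Classifying identifiers at the lexer level means the parser never has to re-inspect a name to decide whether it is a reference, a component, or state.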

Streaming with auto-close

The streaming story is where it gets clever. In statements.ts, the autoClose() function takes partial input (the kind you get mid-stream from an LLM) and closes any unclosed strings and brackets:

export function autoClose(input: string): { text: string; wasIncomplete: boolean } {
  const stack: string[] = [];
  let inStr: false | '"' | "'" = false;
  let esc = false;
  // ... walks input, tracks bracket/quote depth
  // appends matching close chars at the end
}
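The elided body presumably tracks a stack of expected closers and appends them in reverse order. A plausible reconstruction, under the stated semantics (close any open string first, then unwind the bracket stack) — a sketch, not the repo's exact implementation:

```typescript
// Hedged reconstruction of the auto-close idea: walk the partial input,
// track open brackets and string state, then append whatever is needed
// to make the text syntactically complete.
function autoCloseSketch(input: string): { text: string; wasIncomplete: boolean } {
  const pairs: Record<string, string> = { "(": ")", "[": "]", "{": "}" };
  const stack: string[] = [];
  let inStr: false | '"' | "'" = false;
  let esc = false;
  for (const ch of input) {
    if (inStr) {
      if (esc) esc = false;               // escaped character inside a string
      else if (ch === "\\") esc = true;
      else if (ch === inStr) inStr = false;
    } else if (ch === '"' || ch === "'") {
      inStr = ch;
    } else if (pairs[ch]) {
      stack.push(pairs[ch]);              // remember the matching closer
    } else if (ch === ")" || ch === "]" || ch === "}") {
      stack.pop();
    }
  }
  // Close the dangling string first, then unwind brackets innermost-first.
  const tail = (inStr || "") + stack.reverse().join("");
  return { text: input + tail, wasIncomplete: tail.length > 0 };
}
```

So a mid-stream chunk like `root = Stack([title, form` becomes `root = Stack([title, form])`, which the parser can turn into a valid AST immediately.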

This means the parser can produce a valid AST on every chunk of streaming output. The UI shell renders immediately when root = Stack([...]) arrives, and child components fill in as their definitions stream in. The prompt generator in prompt.ts actually enforces this — it tells the LLM to write root first for "optimal streaming."

The prompt generator

prompt.ts is the largest file in the package and arguably the most interesting. It takes a PromptSpec (your component library, tools, options) and generates a complete system prompt that teaches the LLM how to write OpenUI Lang. It auto-generates component signatures from Zod schemas, renders tool specs with input/output types, and includes sections for Query/Mutation patterns, Action wiring, and interactive filters.

The function generatePrompt() assembles sections conditionally — edit mode rules, inline mode, tool workflows — based on feature flags. It's a code-generated prompt, not a static template. That's a pattern I keep seeing in serious LLM tooling: the system prompt is a build artifact.
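The shape of that conditional assembly can be sketched in miniature. The real PromptSpec and generatePrompt() are far richer; the field names and section text below are my own illustration of the pattern:

```typescript
// Illustrative sketch of feature-flag-driven prompt assembly.
// Not the actual PromptSpec or generatePrompt() from prompt.ts.
interface PromptSpecSketch {
  componentSignatures: string[]; // e.g. derived from Zod schemas
  tools?: string[];
  editMode?: boolean;
}

function generatePromptSketch(spec: PromptSpecSketch): string {
  const sections: string[] = [
    // The ordering rule the article mentions: root first, for streaming.
    "You write OpenUI Lang. Define `root` first so the UI can stream.",
    "Components:\n" + spec.componentSignatures.join("\n"),
  ];
  if (spec.tools?.length) {
    sections.push("Tools:\n" + spec.tools.join("\n"));
  }
  if (spec.editMode) {
    sections.push("Edit mode: emit only the statements that changed.");
  }
  return sections.join("\n\n"); // the system prompt as a build artifact
}
```

Each feature flag toggles a section generator, which is exactly why the approach scales in capability but gets harder to keep coherent as sections multiply.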

Reactive runtime

The store in runtime/store.ts is minimal — a Map<string, unknown> with shallow object comparison and a listener set. When a $binding changes, queries that reference it automatically re-fetch. The createStore() function has a deliberate design choice in initialize(): it never deletes existing state during streaming, because declarations can temporarily disappear as tokens arrive. That's the kind of edge case you only find by actually using your own streaming parser.
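The two behaviors described — shallow comparison before notifying, and an initialize() that only ever adds keys — fit in a small sketch. This is my approximation of the design, not the code in runtime/store.ts:

```typescript
// Minimal reactive store sketch: Map-backed state, listener set,
// shallow-equality guard, and an add-only initialize() for streaming.
type Listener = () => void;

function createStoreSketch(initial: Record<string, unknown> = {}) {
  const state = new Map<string, unknown>(Object.entries(initial));
  const listeners = new Set<Listener>();

  const shallowEqual = (a: unknown, b: unknown): boolean => {
    if (Object.is(a, b)) return true;
    if (typeof a !== "object" || typeof b !== "object" || a === null || b === null) return false;
    const ra = a as Record<string, unknown>;
    const rb = b as Record<string, unknown>;
    const ka = Object.keys(ra);
    return ka.length === Object.keys(rb).length && ka.every(k => Object.is(ra[k], rb[k]));
  };

  return {
    get: (key: string) => state.get(key),
    set(key: string, value: unknown) {
      if (shallowEqual(state.get(key), value)) return; // skip no-op updates
      state.set(key, value);
      listeners.forEach(fn => fn());                   // e.g. trigger query re-fetch
    },
    // Declarations can momentarily disappear between streamed chunks,
    // so initialization only adds keys and never deletes existing state.
    initialize(decls: Record<string, unknown>) {
      for (const [k, v] of Object.entries(decls)) {
        if (!state.has(k)) state.set(k, v);
      }
    },
    subscribe(fn: Listener) {
      listeners.add(fn);
      return () => listeners.delete(fn);
    },
  };
}
```

The add-only initialize() is the load-bearing detail: re-running it on every streamed chunk is safe, because a declaration that vanishes mid-chunk cannot wipe out user-modified state.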

Using it

npx @openuidev/cli@latest create --name my-app
cd my-app
echo "OPENAI_API_KEY=sk-..." > .env
npm run dev

The CLI scaffolds a Next.js app with streaming, the default component library, and OpenUI Lang support wired up. You define your component library with Zod schemas via defineComponent(), and the framework generates the LLM's system prompt from those schemas. The LLM writes OpenUI Lang, the Renderer parses and renders it progressively.
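The registration-to-signature flow can be illustrated without the framework. The real defineComponent() takes Zod schemas; here plain prop descriptors stand in for them, and all names below are my own:

```typescript
// Hypothetical sketch of the defineComponent idea: register a component
// with a props schema, then derive the signature line that a prompt
// generator would emit. Plain descriptors stand in for Zod schemas.
interface PropDef { name: string; type: "string" | "number" | "boolean" }
interface ComponentDef { name: string; props: PropDef[] }

const registry = new Map<string, ComponentDef>();

function defineComponentSketch(def: ComponentDef): void {
  registry.set(def.name, def);
}

function signatureFor(name: string): string {
  const def = registry.get(name);
  if (!def) throw new Error(`unknown component: ${name}`);
  return `${def.name}(${def.props.map(p => `${p.name}: ${p.type}`).join(", ")})`;
}

defineComponentSketch({
  name: "TextContent",
  props: [{ name: "text", type: "string" }, { name: "variant", type: "string" }],
});
```

The point of the single source of truth is that the renderer's prop validation and the LLM's instructions can never drift apart — both are derived from the same schema.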

Rough edges

The framework leans on Zod 4, which is still relatively fresh; the broader ecosystem hasn't fully caught up to it. The @modelcontextprotocol/sdk peer dependency is optional but hints at ambitions that aren't fully documented yet (there's a runtime/mcp.ts, but the docs are thin on MCP integration).

The test suite exists but only covers the parser — I didn't find integration tests for the streaming renderer or the reactive store. For a project that emphasizes streaming correctness, that's a gap.

The prompt generator in prompt.ts is over 400 lines of string concatenation. It works, but it's the kind of code that becomes hard to maintain as the language evolves. Every new feature means touching multiple section generators and hoping the generated prompt stays coherent.

The project is very active (daily commits at the time of writing, 3.1k stars), but it's at version 0.1.1. The API surface is still shifting — a recent PR added OpenUI Lang v0.5 with a new Query system and $bindings.

Bottom line

If you're building an AI product where the model needs to generate interactive UI — dashboards, forms, data views — OpenUI is worth evaluating. The token efficiency gains are real (the benchmarks are reproducible in benchmarks/), and the streaming architecture is thoughtfully built. Just know you're adopting a pre-1.0 language with a moving spec.

thesysdev/openui on GitHub