May 4, 2026
Five Models, One React Stack: Why Every LLM Builds the Same App
Claude, GPT, Gemini, DeepSeek, Qwen, v0, Lovable, Bolt. Ask any of them to build a React app and you get the same answer: Next.js or Vite, TypeScript, Tailwind, shadcn/ui. A look at the convergence, why it's happening, what we lose, and how to steer your way out of it.
Sascha Becker
18 min read

Ask any top-tier model to "build me a React app" with no further context, and the answer is roughly the same: React with Next.js or Vite, TypeScript, Tailwind CSS, shadcn/ui, TanStack Query, and Zustand. The model varies. The output barely does.
This is not a coincidence, and it is not a conspiracy. It is the predictable result of training data dominance, token economics, and the way modern coding agents are taught. The convergence is real, it is measurable, and it has consequences worth taking seriously.
The Convergence at a Glance
| Model / Tool | Frontend | Styling | Components | State / Data | Backend bias |
|---|---|---|---|---|---|
| Claude (Anthropic) | Next.js / Vite | Tailwind v4 | shadcn/ui | Zustand + TanStack Q. | Vercel, Supabase |
| GPT / Codex (OpenAI) | Next.js | Tailwind or CSS | mixed | mirrors project | flexible |
| Gemini (Google) | Vite | Tailwind | mixed | mirrors project | Firebase, Node |
| DeepSeek | Next.js | Tailwind | shadcn/ui | TanStack Query | flexible |
| Qwen Coder | Next.js | Tailwind | shadcn/ui | TanStack Query | flexible |
| v0 (Vercel) | Next.js | Tailwind | shadcn/ui | TanStack Query | Vercel, Supabase |
| Lovable | Vite | Tailwind | shadcn/ui | TanStack Query | Supabase |
| Bolt (StackBlitz) | varies | Tailwind | mixed | mirrors project | Supabase |
Eight rows. One UI layer. The styling column is a single answer. The component column is mostly one answer.
The Default Stack, by Model
Claude (Anthropic)
The most opinionated of the frontier models. Anthropic's own web-artifacts-builder skill explicitly teaches Claude to use "React, Tailwind CSS and shadcn/ui" for artifacts[1], and the shadcn organization ships an official Claude Skill that runs shadcn info --json to feed Claude the project's component registry on every turn[2]. That is first-party tooling, treating Claude as a primary consumer.
Without an explicit design system in the prompt, Claude defaults to a recognizable visual style: rounded corners, Inter or system-sans, muted grays, a blue primary action button, card layouts. Claude's own brand uses warm terracotta on cream. Generated UIs do not inherit that.
GPT / Codex (OpenAI)
The least dogmatic of the bunch. Codex CLI's documentation makes AGENTS.md the primary mechanism for stack preference. There is no hardcoded React framework default in the agent itself. Without a project, GPT still produces React + Tailwind, but it is more likely to reach for plain CSS, CSS Modules, or styled-components than Claude is, and observed outputs lean toward older patterns: Jest over Vitest, function components written like it is 2022.
GPT mirrors what the project already contains. Drop it into a Vue codebase and it writes Vue. The default shows up only when there is no project to mirror.
Gemini (Google) and Antigravity
Google AI Studio's Build Mode documentation lists React as the default for the client side, with a Node.js runtime on the server. Antigravity, Google's VS Code fork for agentic development, ships example stacks built around React + Vite + React Router DOM. Tailwind shows up frequently in Antigravity templates, but not as universally as in Claude's outputs. Firebase as the backend is a stronger pull than shadcn as the UI layer.
DeepSeek and Qwen
The interesting case, because they are open-weight models trained primarily by Chinese labs. They converge on the exact same defaults: Next.js, TypeScript, Tailwind, shadcn/ui. DeepSeek Artifacts is built around it. Qwen 2.5 Coder produces it. The training data overlap with the Western open-source ecosystem is too strong to escape, even when the lab and culture are different.
This is the cleanest evidence that the stack is a property of the corpus, not a property of any one company's preferences.
v0 (Vercel), Lovable, Bolt
The AI builders. v0 is fully stack-locked: everything it generates uses React, Next.js, Tailwind, and shadcn. There is no Vue mode and no plain CSS mode. Lovable defaults to React + Vite + Tailwind + shadcn + Supabase. Bolt markets itself as multi-framework but its templates and demos lean React + Tailwind in practice.
These products do not just default to the stack. They are the stack, productized.
Why This Convergence Is Happening
Three reinforcing forces.
1. Token economics favors Tailwind
LLMs read tokens, not files. Tailwind utility classes (flex, justify-center, p-4) are atomic, frequent, predictable tokens. Compare that to CSS-in-JS, where the model has to invent a class name like .card-wrapper-inner, reference it from JSX, and keep the two in sync across files.
For a token-predicting model, Tailwind is the path of least resistance:
- Co-location. JSX and styling live in one chunk. Fewer cross-file dependencies inside the context window.
- No naming. No risk of hallucinating a class name that does not exist.
- High training frequency. Tailwind class strings appear millions of times in GitHub repos, blog posts, and tutorials.
CSS Modules, BEM, vanilla-extract, styled-components, Panda CSS: all valid, all harder for an LLM to keep coherent across a generation. Tailwind wins not because it is technically superior, but because it aligns with how the model itself processes information[3].
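The difference is easy to see side by side. A purely illustrative sketch, rendering the same card as plain HTML strings so the comparison needs no React dependency (the markup and class names here are invented for the example):

```typescript
// Illustrative only: the "same card" in two styling regimes.

// Tailwind-style: markup and styling co-located as atomic utility tokens.
// Nothing to name, nothing to keep in sync across files.
function cardTailwind(title: string): string {
  return `<div class="rounded-xl border p-4 shadow-sm"><h2 class="text-lg font-semibold">${title}</h2></div>`;
}

// Named-class style: the model must invent ".card-wrapper", emit a matching
// rule in a second location, and reproduce that mapping on every later edit.
const cardCss = `
.card-wrapper { border-radius: 12px; border: 1px solid #e5e7eb; padding: 16px; }
.card-title   { font-size: 18px; font-weight: 600; }
`;

function cardNamed(title: string): string {
  return `<div class="card-wrapper"><h2 class="card-title">${title}</h2></div>`;
}
```

One generation target versus two coupled ones: for a token predictor, the first is simply an easier sequence to get right.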
2. shadcn/ui is "open code", which is exactly what LLMs need
Traditional component libraries (Material UI, Chakra, Mantine) ship as compiled packages. The model has to know the prop API, the theme system, the slot patterns. When the API changes between major versions, training data goes stale.
shadcn/ui inverts this. Components are copied into your project as plain JSX with Tailwind classes. There is no opaque library to memorize. The model reads the code, modifies it, ships it. shadcn even publishes an MCP server and an official Claude Skill so the agent can ask "what components are installed in this project?" in real time. That is a UX optimization for LLM consumers, not for human developers.
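A minimal sketch of what "open code" buys the model (simplified: real shadcn/ui components are React components built with class-variance-authority, copied into your repo as editable source). Because the variants are plain data sitting in the codebase, an agent can read and edit them directly instead of recalling a package's prop API from training:

```typescript
// Sketch, not actual shadcn source: variant map lives in YOUR repo as data.
const buttonVariants = {
  default: "bg-primary text-primary-foreground hover:bg-primary/90",
  outline: "border border-input bg-background hover:bg-accent",
  ghost: "hover:bg-accent hover:text-accent-foreground",
} as const;

type ButtonVariant = keyof typeof buttonVariants;

// Composes the base classes with the chosen variant's utility string.
function buttonClass(variant: ButtonVariant = "default"): string {
  const base = "inline-flex items-center rounded-md px-4 py-2 text-sm font-medium";
  return `${base} ${buttonVariants[variant]}`;
}
```

To restyle every outline button, the agent edits one string literal it can see, rather than guessing at a theme API that may have changed since its training cutoff.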
3. Training data dominance compounds
The most popular React stack on GitHub becomes the most-trained-on stack. The most-trained-on stack becomes the default LLM output. The default LLM output gets pushed back to GitHub as new code. Future models train on it. The cycle tightens.
Tailwind crossed 75 million monthly downloads in early 2026. shadcn passed 60K GitHub stars. Once a stack reaches that critical mass in training data, alternatives get statistically buried.
The Monoculture Flywheel

This is the part of the post that matters more than the rest. The convergence is not just an aesthetic complaint or an "old man yells at cloud" gripe. It is a structural change in how the web platform evolves, and the dynamics are self-reinforcing in a way that previous platform shifts were not.
There is a flywheel running, and every turn tightens it:
- A stack becomes popular. React + Tailwind + shadcn, in 2024-2025.
- GitHub fills with examples of that stack. Other stacks shrink in relative training mass.
- LLMs train on the corpus and learn the popular stack disproportionately well.
- Developers ship LLM output to GitHub. The popular stack now has even more representation.
- The next model trains on a corpus that is even more skewed than the last.
- Goto 1, with the gap widening each cycle.
This is not how previous frontend shifts worked.
jQuery to React was driven by humans evaluating tradeoffs. React to Tailwind was driven by humans evaluating tradeoffs. The next shift, whatever it is, is now mediated by a system whose vote is weighted by how much code already exists in the previous winner. The optimization function has been swapped out from under the ecosystem.
Vue, Svelte, Solid, Angular: the training-mass problem
These frameworks are not bad. They are not declining in quality. They are declining in visibility to LLMs, which is increasingly the same thing as declining in adoption.
A new Vue 3.5 reactivity feature might ship with perfect documentation. If LLMs do not see it in training data, the average Vue developer using an LLM will get React-flavored Vue back. The feature might as well not exist.
Svelte 5 runes are a genuinely novel reactivity model. They appear in training data at a fraction of the rate useState does. Ask an LLM "give me a counter in Svelte" and you will often get Svelte 4 patterns back, because that is the statistical average of what it has read.
Angular has a large enterprise footprint, but the public open-source corpus is dominated by React. LLMs reflect the public corpus, not the enterprise reality.
The asymmetry compounds. The more developers reach for LLMs as their first move, the more the practical experience of writing Vue or Svelte degrades, even when the framework itself improves.
Version lag is now permanent
Pre-LLM, version lag was a function of how fast docs and tutorials spread. A library could ship v2 and within a year, the ecosystem caught up.
Post-LLM, version lag is a function of training-corpus composition. Even a 2026 frontier model trained on 2025 data has seen ten years of useEffect + fetch and maybe two years of TanStack Query. It has seen a decade of manual useMemo and one year of the React Compiler. The "modern way" loses the popularity contest inside the model.
Pre-LLM: "how do I build a web app?" returned thousands of credible answers. A beginner had to choose, which was painful, but they at least encountered the choice. They learned that Vue existed. They learned that CSS-in-JS was a debate. They formed taste.
Post-LLM: the model picks for you, silently, with no acknowledgment that it picked. A first-time developer who prompts "build me a todo app" gets React + Tailwind + shadcn and never knows another path was available. They did not make a choice. A choice was made on their behalf and presented as the only option.
This is a worse failure mode than "beginners pick the wrong stack." It is "beginners do not know stacks are a category." The barrier to discovering alternatives now requires prior knowledge that alternatives exist, which is precisely the knowledge a beginner does not have.
The "AI slop" aesthetic is the same problem at the design layer
When a model is given no design constraints, it produces a recognizable look: rounded corners, Inter, muted grays, blue primary button, card grid. It is not bad design. It is invisible design. Every AI-built landing page in 2026 looks like every other AI-built landing page[4].
This is the same training-data-bias mechanic, applied to color tokens and font stacks instead of imports. Brand differentiation now requires you to actively fight the model. Specificity is the only defense.
What the long tail loses
The frameworks that survive a monoculture are the ones with strong corporate backing or aggressive LLM-era investment: shadcn publishing a Claude Skill, Chakra publishing an MCP server, Tailwind shipping an llms.txt file. The frameworks that lose are the small, opinionated, weird ones that previously gained traction through word of mouth and individual taste.
A future where the long tail of frontend tooling thins out is not a more efficient web. It is a less interesting one. And the cost of that thinning is paid in advance, by every developer who never learns there were other ways to build a UI[5].
Steering the Model Away From Its Defaults
Telling a model "use Vue" once in a chat works for one prompt. Keeping it on Vue across a 200-message session, across new chats, across teammates, requires structural intervention. Here is the escalation ladder, ordered from cheapest and most fragile to strongest and most durable.
Level 1: In-prompt directives
Effective for a single artifact, fragile across a session. Use when you do not have a project yet.
- Be explicit about what NOT to use. Negative constraints are stronger than positive ones. "Use Mantine, do not use shadcn or Tailwind" beats "Use Mantine."
- Name the version. "React 19.2 with the React Compiler enabled" prevents regression to React 17 patterns the model has more training mass on.
- Specify the import. "Import from @mantine/core, not @mui/material" disambiguates upfront. The model is less likely to drift if it sees the exact import path.
- Repeat the constraint at the end. Long prompts dilute early instructions. A trailing "Remember: Mantine, no Tailwind" reinforces against attention decay.
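Assembled, a Level 1 prompt might read like this (Mantine and the version numbers are placeholders carried over from the examples above, not recommendations):

```text
Build a settings page with profile and notification sections.

Stack: React 19.2 with the React Compiler enabled, Mantine 7, TypeScript.
Import components from @mantine/core, not @mui/material.
Do not use Tailwind, shadcn/ui, or styled-components.

[...task details...]

Remember: Mantine only. No Tailwind, no shadcn.
```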
Level 2: Project rules files
The format depends on the agent, but the pattern is identical. A markdown file at the repo root that the agent reads automatically on every turn.
| Agent | Filename |
|---|---|
| Claude Code | CLAUDE.md |
| Codex / OpenAI | AGENTS.md |
| Gemini CLI | GEMINI.md or AGENTS.md |
| Cursor | .cursorrules or .cursor/rules/*.mdc |
| Antigravity | AGENTS.md plus GEMINI.md overrides |
| Windsurf | .windsurfrules |
What actually works inside these files:
- Lead with the stack as a list. "Stack: Vue 3.5, Pinia, UnoCSS, Vitest." Models scan the top of the file first.
- Forbid by name, not by category. "Do not use Tailwind, shadcn, Material UI, or Chakra" is better than "Do not use a default UI library."
- Pin import paths and snippet patterns. Show one canonical example. The model will pattern-match against it for the rest of the session.
- Add a "common mistakes" section. Document the exact regressions you have seen. "Do not generate useState for server data; use Pinia stores." That is how you train against muscle memory.
- Keep it short. A 2000-line CLAUDE.md gets compressed away. Around 150 lines is the sweet spot.
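Put together, a rules file following those guidelines might look like this. The Vue stack and file paths are illustrative, matching the examples used throughout this post:

```markdown
# Project rules

Stack: Vue 3.5, Pinia, UnoCSS, Vitest.

## Forbidden
- Do not use Tailwind, shadcn/ui, Material UI, or Chakra.
- Do not add these as dependencies, even transitively via templates.

## Canonical pattern
See src/components/UserCard.vue for the house style:
script setup + TypeScript, UnoCSS utilities, co-located Vitest tests.

## Common mistakes
- Do not generate useState for server data; use Pinia stores.
- Do not emit React imports. This is a Vue project.
- Do not scaffold tailwind.config.* files.
```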
Level 3: Examples in the codebase
Models lean harder on what they read in the project than what is in their training. If your repo contains five Vue SFCs, the next file will be a Vue SFC. If it contains five React + Tailwind components and you tell it to write Vue, it will hallucinate something halfway between.
- Seed the project with the canonical pattern before you let the agent loose. Hand-write the first component. Reference it.
- Delete or move legacy code that contradicts the target stack. A model that finds tailwind.config.ts will assume Tailwind is fair game, even if CLAUDE.md says no.
- Pin the package.json. Remove unused dependencies. The dependency list is a stronger signal than prose instructions.
Level 4: Skills and MCP servers
The heaviest hammer. Instead of telling the model what to do, you give it a tool that constrains how it can do it.
- Skills (Claude). A Skill can override the default behavior for a class of task. The shadcn Skill is the prototypical example. The inverse use is to write a Skill that enforces your component library and rejects shadcn imports.
- MCP servers. Material UI ships an official MCP server, and Chakra UI, Mantine, and others have followed. They supply the agent with the live component list, prop types, and usage examples. This bypasses training data entirely. Whatever the model "thinks" Material UI looks like, the MCP server is the ground truth.
- Custom registries (shadcn-style). If you want to escape vanilla shadcn but stay in the React + Tailwind world, publish a shadcn-compatible registry of your components. The CLI installs them with npx shadcn add <your-url>. The model treats them as first-class.
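For the custom-registry route, each component is described by a small JSON manifest. A minimal sketch follows; the field names track shadcn's public registry-item schema as of this writing, and the name, paths, and URL are invented, so verify against the current shadcn registry docs before relying on it:

```json
{
  "$schema": "https://ui.shadcn.com/schema/registry-item.json",
  "name": "acme-button",
  "type": "registry:ui",
  "title": "Acme Button",
  "description": "House button, installed in place of the vanilla shadcn button.",
  "files": [
    { "path": "registry/acme/ui/button.tsx", "type": "registry:ui" }
  ]
}
```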
Level 5: Tooling guardrails
The model will still slip. Catch it at the boundary.
- ESLint with no-restricted-imports. Ban tailwindcss, @radix-ui/*, whatever you do not want. The agent sees the error on the next loop iteration and corrects.
- Pre-commit hooks. Prettier with a project-specific config. A custom check that fails on banned class names.
- CI as the final filter. A build that fails when forbidden patterns appear ensures the bad code never reaches main. For agentic loops, this is the most effective signal because the agent reads the failure and self-corrects.
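The ESLint guardrail is a few lines of flat config. A sketch of an eslint.config.js; the banned packages below are the examples from this post, so swap in whatever your project is escaping:

```javascript
// eslint.config.js -- flat-config sketch using the core
// no-restricted-imports rule to fail on forbidden dependencies.
const config = [
  {
    files: ["src/**/*.{ts,tsx}"],
    rules: {
      "no-restricted-imports": [
        "error",
        {
          // Exact package names to ban outright.
          paths: [
            { name: "tailwindcss", message: "This project does not use Tailwind." },
          ],
          // Glob patterns for whole package scopes.
          patterns: [
            { group: ["@radix-ui/*", "@mui/*"], message: "Use the in-house component library." },
          ],
        },
      ],
    },
  },
];

export default config;
```

An agent running in a lint-on-save loop hits the error on its next iteration and self-corrects, which is exactly the feedback channel these tools respond to.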
What does not work
Politeness ("please don't use Tailwind") gets ignored. The model is not being rude; the prior is just stronger than the request. One-shot system prompts get compressed away in long sessions. Vague aesthetic language ("make it less generic") produces the same generic output with different colors. And never trust the model's claim that it followed your instructions. Verify the diff.
A practical escalation pattern
For most teams switching off the default stack, the order is:
1. Write a tight CLAUDE.md or AGENTS.md (Level 2). 80% of the value, 20% of the effort.
2. Hand-seed two or three reference components in the canonical pattern (Level 3).
3. Add no-restricted-imports for the libraries you are escaping (Level 5).
4. Reach for Skills or MCP (Level 4) only if the agent keeps drifting after steps 1 through 3.
The goal is not to make the model perfect on the first try. It is to make every drift visible and correctable.
The Honest Conclusion
The default stack exists for real reasons. Tailwind is genuinely well-suited to LLM token economics. shadcn is genuinely well-suited to LLM-driven editing. React has genuinely won the market. None of this is wrong.
The problem is not the stack. The problem is that the choice has been frozen by the same systems that made the stack valuable. Every default we accept is a vote for the corpus that future models will train on. Every Vue project we abandon because "the AI is better at React" makes the AI even better at React next year.
If you want pluralism in your tools, you have to spend something to keep it. Hand-write the first Svelte component. Configure the AGENTS.md to forbid Tailwind. Publish your component registry. Push back on the path of least resistance, deliberately, on the projects where it matters.
The alternative is a web that all looks the same, written the same way, by the same model, forever. That is a worse outcome than any individual technology choice.
Sources
- AI Has A Favorite Tech Stack. That's A Problem!
Maximilian Schwarzmüller's argument that the default LLM stack reduces competition and innovation.
- Why LLMs are Addicted to Tailwind
Technical breakdown of why Tailwind's utility-class model fits LLM token prediction.
- Improving frontend design through Skills
Anthropic's official post on the web-artifacts-builder skill, treating React + Tailwind + shadcn as the assumed baseline.
- shadcn/ui Skills
The official Claude Skill from shadcn that runs shadcn info --json on every turn to feed Claude project context.
- v0 Documentation
Vercel's v0: everything generated uses React, Next.js, Tailwind, and shadcn. No alternatives.
- Lovable: Tech Stack and Next.js Support
Lovable's documented default stack: React + Vite + Tailwind + shadcn + Supabase.
- Build apps in Google AI Studio
Google AI Studio's Build Mode docs showing React as the default frontend.
- Codex CLI Configuration Reference
OpenAI Codex CLI documentation on AGENTS.md as the primary stack-preference mechanism.
- Claude Design: How to Build Branded Interfaces Without Generic AI Aesthetics
Why Claude defaults to rounded corners, Inter, muted grays, and blue primary buttons when given no constraints.
- AI-Driven Prototyping: v0, Bolt, and Lovable Compared
Addy Osmani's comparison of the major AI builder products and their default stacks.
- DeepSeek Artifacts
DeepSeek's public artifact tool, built around Next.js + TypeScript + Tailwind.
- Tailwind llms.txt Discussion
Tailwind's discussion on shipping a single-file documentation specifically for LLM consumption.
- Material UI MCP Server
Official Material UI MCP server: feeds agents the live component API and usage patterns to bypass training-data drift.
