Vercel spent a year improving the AI agent experience in Next.js, from building an in-browser agent to shipping MCP integration, and learned the crucial lesson of treating agents as first-class users
Kawin Suangkaew

AI coding agents have become a routine part of web development, and Vercel spent the past year making Next.js work better with them, learning lessons along the way that are worth knowing for any developer who works alongside agents.
Earlier this summer, the Next.js team was working on improving devtools when they noticed a pattern. Developers would see an error in the browser, copy the details, paste them into an AI editor, and ask the agent to fix it.
The problem was that agents can't see the browser. Runtime errors, client-side warnings, and rendered components are all invisible to them. When a user says "fix the error," the agent doesn't know what error they mean.
Their first response was to update the copy button so it captures structured error data. Then they added a feature that forwards browser logs to the terminal. These small fixes pointed toward a bigger realization: they needed to make Next.js itself visible to agents.
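To make the log-forwarding idea concrete, here is a minimal sketch of how browser output can be mirrored to a dev server. The `/__dev/logs` endpoint and the `forwardBrowserLogs` name are illustrative assumptions, not Next.js's actual implementation:

```typescript
// Minimal sketch: mirror browser console output and runtime errors to the
// dev server, assuming a hypothetical /__dev/logs endpoint.
function forwardBrowserLogs(endpoint = "/__dev/logs") {
  const send = (level: string, parts: unknown[]) => {
    // Fire-and-forget so logging never blocks the page.
    fetch(endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        level,
        message: parts.map(String).join(" "),
        url: location.href,
      }),
    }).catch(() => {});
  };

  // Mirror console output to the terminal while keeping it in the browser.
  for (const level of ["log", "warn", "error"] as const) {
    const original = console[level].bind(console);
    console[level] = (...args: unknown[]) => {
      send(level, args);
      original(...args);
    };
  }

  // Runtime errors and unhandled rejections only ever surface in the browser.
  window.addEventListener("error", (e) => send("error", [e.message]));
  window.addEventListener("unhandledrejection", (e) => send("error", [e.reason]));
}
```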
That led to an ambitious idea. What if they built an agent directly inside Next.js that worked like smart devtools?
They built an in-browser chat agent called Vector. Similar to react-grab but integrated with Next.js, Vector let you select elements on the page, see their source code, and prompt for changes. It had Next.js best practices baked in to help agents avoid hallucination.
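Element-to-source lookup of this kind is typically done through React's development-only fiber metadata. The sketch below is a hedged illustration that relies on unstable internals (the `__reactFiber$` expando keys and `_debugSource`, the latter removed in React 19); Vector's actual mechanism hasn't been published:

```typescript
// Hedged sketch: map a selected DOM element back to the JSX that rendered it.
// Relies on unstable React dev-build internals, not a public API.
function findSourceForElement(el: Element): { file: string; line: number } | null {
  // React attaches its fiber to the DOM node under a randomized key.
  const fiberKey = Object.keys(el).find((k) => k.startsWith("__reactFiber$"));
  if (!fiberKey) return null;

  // Walk up the fiber tree until a node carries source info.
  let fiber = (el as any)[fiberKey];
  while (fiber) {
    const source = fiber._debugSource; // { fileName, lineNumber } in React 18 dev builds
    if (source) return { file: source.fileName, line: source.lineNumber };
    fiber = fiber.return;
  }
  return null;
}
```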
Vector was useful, but it overlapped with general coding agents like Cursor and Claude Code that most developers were already using for all their projects, not just Next.js.
They sunset Vector, but took what made it useful (structured visibility and framework-specific knowledge) and decided to build those into Next.js itself.
Around the Next.js 16 release in October 2025, users were struggling to debug with agents. The common prompt was "fix the error," asking agents to resolve issues from the browser overlay. But agents would request the page HTML and find nothing wrong.
Runtime failures, browser JavaScript errors, and async errors all lived in the browser, not in the HTML. The rendered page, layout segments, routes, and other internal state were invisible to agents.
MCP (Model Context Protocol) gave them a way to expose this data. The first version surfaced internal state such as errors, routes, and rendered segments, but exposing data alone wasn't enough: agents also needed to discover running dev servers and communicate with them, which led to next-devtools-mcp.
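To picture the shape of this, here is a minimal sketch of exposing dev-server state over MCP with the TypeScript SDK. The server name, tool name, and in-memory error store are illustrative assumptions, not next-devtools-mcp's actual surface:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

// Illustrative store; a real dev server would collect these at runtime.
const runtimeErrors: { message: string; stack?: string }[] = [];

const server = new McpServer({ name: "devtools-sketch", version: "0.0.1" });

// Expose browser-side errors as a tool the agent can call on demand,
// instead of the agent scraping HTML that never contained them.
server.tool(
  "get_runtime_errors",
  "Return runtime errors captured from the browser session",
  async () => ({
    content: [
      { type: "text" as const, text: JSON.stringify(runtimeErrors, null, 2) },
    ],
  })
);

// stdio transport lets a local agent spawn and talk to the server directly.
await server.connect(new StdioServerTransport());
```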
The MCP server also packages prompts and tools to help with upgrades and Cache Components migrations. There's a detailed talk about the MCP integration if you want to learn more.
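Prompts are packaged in a similar way. Continuing the sketch above on the same `server` instance, a migration prompt might look like this; the name and wording are placeholders, not what next-devtools-mcp actually ships:

```typescript
// Register a reusable prompt the agent can pull in before a migration.
// Placeholder content; the real MCP ships its own curated prompts.
server.prompt(
  "upgrade-guide",
  "Step-by-step guidance for upgrading a project to Next.js 16",
  async () => ({
    messages: [
      {
        role: "user" as const,
        content: {
          type: "text" as const,
          text: "Upgrade this project to Next.js 16: run the official codemods first, then resolve any remaining build and type errors.",
        },
      },
    ],
  })
);
```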
MCP confirmed what Vector had already taught them: agents need visibility into what Next.js is doing, but that's only part of the story. The deeper lesson was treating agents as first-class users of Next.js and thinking from their perspective. What information do they need? When do they need it? How do they consume it?
This mindset led to practical changes, and the team is now working on making them easier to adopt. You can already run npx @next/codemod to generate an up-to-date docs index for your project, and they're expanding their eval suite to cover more Next.js 16 APIs so they can measure what actually helps agents.
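What such an eval measures can be pictured with a small sketch. Everything below is hypothetical, since Vercel's suite is internal: each case pairs a prompt and a fixture project with an automated check of the agent's result.

```typescript
import { readFile } from "node:fs/promises";

// Hypothetical shape for a single agent eval; Vercel's actual suite
// is internal and certainly structured differently.
type EvalCase = {
  prompt: string;                                   // task handed to the agent
  fixture: string;                                  // project it starts from
  check: (projectDir: string) => Promise<boolean>;  // did the agent succeed?
};

const cacheComponentsCase: EvalCase = {
  prompt: "Migrate the home page to Cache Components",
  fixture: "fixtures/app-router-basic",
  check: async (dir) => {
    // Crude success proxy: the migrated page should carry the
    // "use cache" directive somewhere.
    const page = await readFile(`${dir}/app/page.tsx`, "utf8");
    return page.includes('"use cache"');
  },
};
```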
Longer term, they want this built into next dev so agents get the right context automatically without any setup.
When you treat agents as first-class users and meet them where they are, debugging becomes a tight feedback loop between code, runtime, and AI.