Next.js has improved its AI agent experience through new MCP integration and tools that make the framework's state visible to AI agents, treating them as first-class users.
Kawin Suangkaew
Over the past year, the Next.js team has dedicated significant time to improving the AI agent experience. Along the way, they learned that the key to better agent support is thinking from the agent's perspective and making Next.js itself visible to agents.
Earlier this year, the team was working on improving Next.js devtools when they noticed a pattern. Developers would see an error in the browser, copy the details, paste them into an AI editor, and ask the agent to fix it.
The problem was that agents can't see the browser. Runtime errors, client-side warnings, and rendered components are all invisible to them. When a user says "fix the error," the agent doesn't know what error they mean.
Agents need visibility into what Next.js is doing, but more importantly, they need to be treated as first-class users of the framework.
That led to an ambitious idea: What if they built an agent directly inside Next.js that worked like smart devtools?
They built an in-browser chat agent called Vector. Similar to react-grab but integrated with Next.js, Vector let you select elements on the page, see their source code, and prompt for changes. It had Next.js best practices baked in to help agents avoid hallucination.
Vector was useful, but it overlapped with general coding agents like Cursor and Claude Code that most developers were already using for all their projects. The UI selection made it easy to point at exactly what you wanted to change, but it wasn't something people needed every day.
They sunset Vector but kept what made it useful, structured visibility and framework-specific knowledge, and decided to build those capabilities directly into Next.js.
Around the Next.js v16 release in October 2025, users were struggling to debug with agents. The common prompt was "fix the error," asking agents to resolve issues from the browser overlay. But agents would request the page HTML and find nothing wrong.
Runtime failures, browser JavaScript errors, and async errors all lived in the browser, not in the HTML. The rendered page, layout segments, routes, and other internal state were invisible to agents.
MCP (Model Context Protocol) gave them a way to expose this data. The first version surfaced internal state such as errors, routes, and rendered segments. But exposing data alone wasn't enough. Agents also needed to discover running dev servers and communicate with them, which led to next-devtools-mcp.
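As a rough sketch of what adoption looks like, an MCP-capable client (Cursor, Claude Code, and similar tools) can register the server with a configuration along these lines. The exact file name and schema vary by client, and the fields below follow the common mcpServers convention rather than anything specified in this article:

```json
{
  "mcpServers": {
    "next-devtools": {
      "command": "npx",
      "args": ["next-devtools-mcp"]
    }
  }
}
```

With a server registered this way, the client launches next-devtools-mcp on demand, and the agent can use it to find running Next.js dev servers and query the state they expose.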

The MCP server also packages prompts and tools that help with version upgrades and Cache Components migrations.
MCP confirmed what Vector taught them. Agents need visibility into what Next.js is doing, but that's only part of the story. The deeper lesson was treating agents as first-class users of Next.js and thinking from their perspective: What information do they need? When do they need it? How do they consume it?
This mindset led to practical changes throughout the framework.
When you treat agents as first-class users and meet them where they are, debugging becomes a tight feedback loop between code, runtime, and AI.
The team is now working on making this easier to adopt. You can already run npx @next/codemod to generate an up-to-date docs index for your project, and they're expanding their eval suite to cover more Next.js 16 APIs.
Longer term, they want this built into next dev so agents get the right context automatically without any setup.
They're eager to hear feedback and ideas on how to make Next.js work even better with agents.