The Next.js team reveals their journey toward an AI agent-driven future, with MCP technology making the framework visible to AI.
Kawin Suangkaew
The web development industry is undergoing a major transformation as AI agents become essential coding tools. The Next.js team recently revealed that they have spent over a year improving the AI agent experience with the framework, and the key lesson they learned is to treat AI agents as primary users rather than mere supplementary tools.
Early in 2025, while developing DevTools, the Next.js team noticed an interesting pattern: developers would see an error in the browser, copy the details, paste them into an AI editor, and ask the agent to fix it.
The problem is that agents cannot see the browser. Runtime errors, client-side warnings, and rendered components are all invisible to them. When a user says "fix this error," the agent does not know which error they mean.
The team responded by building Vector, an in-browser chat agent that worked like smart DevTools. Similar to react-grab but integrated with Next.js, Vector let you select elements on the page, see their source code, and prompt for changes. It had Next.js best practices baked in to help agents avoid hallucination.
Vector was useful, but it overlapped with general coding agents like Cursor and Claude Code, which most developers were already using for all their projects, not just Next.js. UI selection made it easy to point at exactly what you wanted to change, but it was not something people needed every day.
The team decided to sunset Vector but kept what made it useful (structured visibility and framework-specific knowledge) and built those qualities into Next.js directly.
Around the Next.js v16 release in October 2025, users were struggling to debug with agents. The common prompt was "fix the error," asking the agent to resolve an issue shown in the browser overlay. But the agent would request the page HTML and find nothing wrong.
The reason: runtime failures, browser JavaScript errors, and async errors all live in the browser, not in the HTML. The rendered page, layout segments, routes, and other internal state were invisible to agents.
The Model Context Protocol (MCP) gave the team a way to expose this data. The first version surfaced internal state such as errors, routes, and rendered segments, but exposing data alone was not enough. Agents also needed to discover running dev servers and communicate with them, which led to next-devtools-mcp.
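To make the idea concrete, here is a minimal sketch of the kind of structured state such a tool could hand to an agent. The type names, fields, and report format below are illustrative assumptions, not next-devtools-mcp's actual API:

```typescript
// Hypothetical shape of dev-server state an MCP tool might expose.
// (Illustrative only; not the real next-devtools-mcp schema.)
type DevServerState = {
  routes: string[];             // routes the dev server knows about
  renderedSegments: string[];   // layout/page segments for the current view
  runtimeErrors: { message: string; digest?: string }[];
};

// An MCP tool is ultimately a function the agent can invoke that returns
// structured text; here the state is rendered as a compact report.
function describeDevServer(state: DevServerState): string {
  const lines = [
    `routes: ${state.routes.join(", ") || "(none)"}`,
    `rendered: ${state.renderedSegments.join(" > ") || "(none)"}`,
    ...state.runtimeErrors.map((e) => `error: ${e.message}`),
  ];
  return lines.join("\n");
}

console.log(
  describeDevServer({
    routes: ["/", "/blog"],
    renderedSegments: ["app", "blog"],
    runtimeErrors: [{ message: "Cannot read properties of undefined" }],
  })
);
```

The point of the sketch is the direction of the design: instead of the agent scraping HTML, the framework volunteers the state it already tracks in a form an agent can query.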
MCP confirmed what Vector had taught the team: agents need visibility into what Next.js is doing, but that is only part of the story. The deeper lesson was treating agents as first-class users and thinking from their perspective. What information do they need? When do they need it? How do they consume it?
This mindset led to practical changes. If agents read terminal output during development, logging Server Action invocations and forwarding browser errors gives them the hints they need. If agents struggle with framework concepts absent from their training data, embedding a compressed docs index (agents.md) or providing structured workflows (Next.js Skills) gives them better context than documentation alone.
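The "forward browser errors to the terminal" idea can be sketched roughly as follows. The endpoint name, payload shape, and log prefix are assumptions for illustration, not Next.js internals:

```typescript
// Hypothetical payload a browser could forward to the dev server.
type ForwardedError = {
  message: string;
  source?: string; // file where the error originated
  line?: number;
};

// Dev-server side: turn a forwarded error into a compact stdout line that
// an agent reading terminal output can pick up.
function formatForTerminal(err: ForwardedError): string {
  const where = err.source ? ` (${err.source}:${err.line ?? "?"})` : "";
  return `[browser-error] ${err.message}${where}`;
}

// Browser side (shown as comments, since this sketch runs outside a browser):
//   window.addEventListener("error", (e) =>
//     navigator.sendBeacon("/__dev/client-error", JSON.stringify({
//       message: e.message, source: e.filename, line: e.lineno,
//     })));

console.log(
  formatForTerminal({ message: "Hydration failed", source: "app/page.tsx", line: 12 })
);
// → [browser-error] Hydration failed (app/page.tsx:12)
```

Once an error surfaces in the terminal like this, "fix this error" stops being ambiguous: the agent can read the exact message and location from the same stream it already watches.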
These questions run through everything the team built: the need for visibility led to better logging, the need for knowledge led to agents.md, and the need for discovery led to MCP. When you treat agents as first-class users and meet them where they are, debugging becomes a tight feedback loop between code, runtime, and AI.
The team is now working on making this easier to adopt. You can already run npx @next/codemod to generate an up-to-date docs index for your project, and they are expanding their eval suite to cover more Next.js 16 APIs so they can measure what actually helps agents.
Longer term, they want this built into next dev so that agents get the right context automatically, without any setup.
Next.js's journey toward an agentic future shows that frameworks do not need to compete with AI tools; they can be partners instead, exposing the right information to agents and enabling humans to work with AI more effectively.