Next.js announces a new vision for supporting AI coding agents directly, with MCP integration and new tools that help agents see and fix issues in projects more effectively
Kawin Suangkaew

In today's web development landscape, AI agents have become essential tools for developers. But the critical problem is that agents can't "see" what's happening in the browser. Next.js decided to fix this directly.
The Next.js team observed a common real-world workflow: a developer sees an error in the browser, copies the details, and pastes them into an AI editor to ask the agent for a fix. The agent itself never sees the browser. Runtime errors, client-side warnings, and rendered components are all invisible to it, so when a user says "fix the error," the agent doesn't know which error they mean.
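To make the gap concrete, here is a minimal TypeScript sketch of the kind of structured error report an agent needs instead of a hand-copied message. All names here (`RuntimeErrorReport`, `toErrorReport`) are illustrative assumptions, not part of the actual Next.js API.

```typescript
// Hypothetical shape of the runtime-error data an agent would need.
// (Illustrative only; not the real Next.js error format.)
interface RuntimeErrorReport {
  message: string;
  stack: string;
  route: string;       // which route was rendering when the error occurred
  component?: string;  // the component that threw, if known
}

// Turn a caught error into a structured report an agent can act on,
// instead of a screenshot or a partially copied stack trace.
function toErrorReport(
  err: Error,
  route: string,
  component?: string
): RuntimeErrorReport {
  return {
    message: err.message,
    stack: err.stack ?? "",
    route,
    component,
  };
}

const report = toErrorReport(new Error("Hydration failed"), "/dashboard", "UserTable");
console.log(JSON.stringify(report, null, 2));
```

The point of the sketch: once the error carries its route and component, "fix the error" becomes an unambiguous request.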
"Agents need visibility into what Next.js is doing. That's the key."
The Next.js team built an in-browser agent called Vector, which worked like smart devtools. It let users select elements on the page, see their source code, and prompt for changes directly.
Vector had Next.js best practices baked in to help agents avoid hallucination. But ultimately it was sunset because it overlapped with general coding agents like Cursor and Claude Code.
"We took what made Vector useful (structured visibility and framework-specific knowledge) and built those into Next.js itself."
Model Context Protocol (MCP) is what makes Next.js state visible to agents. The first version surfaced internal states like errors, routes, and rendered segments.
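The idea can be sketched without the real MCP wire protocol: a dev server registers named, queryable "tools" over its internal state, and an agent calls them by name. Everything below is a toy simplification I'm using for illustration, not the actual Next.js MCP surface.

```typescript
// Toy "tool registry" mimicking how an MCP server might expose
// dev-server state (errors, routes, rendered segments) to agents.
// Illustrative only; the real protocol is JSON-RPC based.
type ToolHandler = () => unknown;

class DevStateTools {
  private tools = new Map<string, ToolHandler>();

  register(name: string, handler: ToolHandler): void {
    this.tools.set(name, handler);
  }

  call(name: string): unknown {
    const handler = this.tools.get(name);
    if (!handler) throw new Error(`unknown tool: ${name}`);
    return handler();
  }
}

const devState = new DevStateTools();
devState.register("list_routes", () => ["/", "/dashboard", "/settings"]);
devState.register("get_errors", () => [
  { message: "Hydration failed", route: "/dashboard" },
]);

// An agent no longer guesses: it asks the running server directly.
console.log(devState.call("list_routes"));
```

The design choice mirrors the article's point: the framework, not the agent, is the source of truth about its own runtime state.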
But exposing data alone wasn't enough. Agents also needed to discover running dev servers and communicate with them, which led to next-devtools-mcp.
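In practice, an MCP-capable editor finds a server like this through its MCP configuration file. A typical registration might look like the following; the `npx` invocation and flags are my assumption based on common MCP conventions, so check the official Next.js documentation for the exact setup.

```json
{
  "mcpServers": {
    "next-devtools": {
      "command": "npx",
      "args": ["-y", "next-devtools-mcp@latest"]
    }
  }
}
```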
MCP confirmed what Vector had taught the team, but the deeper lesson was to treat agents as first-class users of Next.js and to design from their perspective. That thinking led to practical changes, such as support for agents.md.
The team is working on making this easier to adopt. You can already run npx @next/codemod to generate an up-to-date docs index, and they're expanding their eval suite to measure what actually helps agents.
This is a significant shift for Next.js, and it reflects a broader industry trend: AI agents are no longer just coding assistants; they are becoming real users of development tools.
What's interesting is the shift from building a dedicated UI for agents (Vector) to making the framework itself observable. That approach is more sustainable because any agent can benefit from it, not just one bespoke tool.
Next.js is laying the foundation for a future where AI agents work deeply with frameworks by making agents first-class citizens of the development experience. Now we can debug in a tight loop between code, runtime, and AI.
Next question: Do you think other frameworks will follow this approach? And how should developers prepare for the "agentic future"?