Next.js announces new approaches to support AI Agents by treating them as first-class users
Kawin Suangkaew

In an era where AI has become a crucial tool for web developers, Next.js has announced new approaches to specifically support AI Agents, which will completely change how we develop websites.
The web development industry is undergoing a major transformation as AI Coding Agents like Cursor, Claude Code, and Lovable become primary tools for writing code. According to TechCrunch, companies like Lovable were able to generate $100 million in revenue in just one month with only 146 employees, demonstrating the potential of AI to enhance development efficiency.
Next.js, as a leading React Framework, has recognized this shift and initiated various projects to make the Framework work better specifically with AI Agents.
Earlier this year, the Next.js team was working on improving DevTools when they noticed a pattern. Developers would see an error in the browser, copy the details, paste them into an AI editor, and ask the agent to fix it.
The core problem was that agents could not see the browser. Runtime errors, client-side warnings, and rendered components were all invisible to them. When a user said "fix the error," the agent did not know which error they meant.
Key Insight: The Next.js team's initial response was to update the copy button to capture structured error data, then to add a feature that forwards browser logs to the terminal.
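To make the idea concrete, a structured error payload of this kind might look like the following sketch. The `AgentError` shape, its field names, and the example values are illustrative assumptions, not the actual Next.js format:

```typescript
// Hypothetical shape for structured error data that a DevTools "copy"
// button could serialize for an AI agent. Field names are assumptions,
// not the real Next.js payload.
interface AgentError {
  message: string;
  stack?: string;
  source?: { file: string; line: number; column: number };
  environment: "server" | "client";
}

// Serialize the error so an agent receives machine-readable context
// instead of a hand-copied snippet from the browser overlay.
function serializeForAgent(err: AgentError): string {
  return JSON.stringify(err, null, 2);
}

const example: AgentError = {
  message: "Hydration failed because the initial UI does not match",
  source: { file: "app/page.tsx", line: 12, column: 5 },
  environment: "client",
};

console.log(serializeForAgent(example));
```

The point of the structure is that an agent can parse the file, line, and environment directly rather than guessing them from free-form text.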
This led to an ambitious idea: what if an agent lived directly inside Next.js and worked like smart DevTools?
They built an in-browser chat agent called Vector. Similar to react-grab but integrated with Next.js, Vector let you select elements on the page, see their source code, and prompt for changes. It had Next.js best practices baked in to help agents avoid hallucination.
Vector was useful, but it overlapped with general coding agents like Cursor and Claude Code. Most developers were already using those tools for all of their projects anyway, not just Next.js.
Lesson Learned: Even though Vector was sunsetted, what made it useful (structured visibility and framework-specific knowledge) was taken and built directly into Next.js itself.
Around the Next.js v16 release in October 2025, users were struggling to debug with agents. The common prompt was "fix the error," asking agents to resolve issues from the browser overlay. But agents would request the page HTML and find nothing wrong.
Runtime failures, browser JavaScript errors, and async errors all lived in the browser, not in the HTML. The rendered page, layout segments, routes, and other internal state were invisible to agents.
MCP (Model Context Protocol) gave the team a way to expose this data. The first version surfaced internal state such as errors, routes, and rendered segments, but exposing data alone was not enough. Agents also needed to discover running dev servers and communicate with them, which led to next-devtools-mcp.
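MCP messages are framed as JSON-RPC 2.0, with tool invocations sent via the `tools/call` method. The sketch below builds such a request; the tool name `get_runtime_errors` is a hypothetical example, not a documented next-devtools-mcp tool:

```typescript
// An MCP tool call is a JSON-RPC 2.0 request using the "tools/call"
// method. The tool name used here is an illustrative assumption.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown> = {}
): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// An agent could send this over the MCP transport to ask a running
// dev server for the runtime errors it currently sees.
const req = buildToolCall(1, "get_runtime_errors");
console.log(JSON.stringify(req));
```

The framing is what matters here: once the dev server speaks this protocol, any MCP-capable agent can query it without Next.js-specific glue code.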
The MCP also packages prompts and tools to help with version upgrades and Cache Components migrations.
MCP confirmed what Vector had taught the team. Agents need visibility into what Next.js is doing, but that's only part of the story. The deeper lesson was to treat agents as first-class users of Next.js and to think from their perspective. What information do they need? When do they need it? How do they consume it?
This mindset is now leading to practical changes.
They are now working on making this easier to adopt. You can already run npx @next/codemod to generate an up-to-date docs index for your project, and they are expanding their eval suite to cover more Next.js 16 APIs so they can measure what actually helps agents.
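A generated docs index of this kind might be a simple lookup structure an agent can filter by the API it is working with. The shape, entries, and paths below are purely illustrative assumptions, not the actual codemod output:

```typescript
// Hypothetical structure for a project-local docs index that an agent
// could load to find version-correct Next.js documentation.
// Entries and paths are illustrative assumptions.
interface DocsIndexEntry {
  topic: string;
  apiNames: string[];
  docPath: string;
}

const docsIndex: DocsIndexEntry[] = [
  { topic: "caching", apiNames: ["use cache"], docPath: "docs/caching.md" },
  { topic: "routing", apiNames: ["generateStaticParams"], docPath: "docs/routing.md" },
];

// An agent working on a specific API can filter the index to pull in
// only the relevant documentation as context.
function findDocs(index: DocsIndexEntry[], api: string): DocsIndexEntry[] {
  return index.filter((entry) => entry.apiNames.includes(api));
}

console.log(findDocs(docsIndex, "use cache")[0]?.docPath);
```

The design choice is the same one behind the eval suite: rather than stuffing all documentation into a prompt, the agent retrieves only what the current task needs.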
Longer term, they want this built into next dev so agents get the right context automatically without any setup.
Summary: Next.js is reshaping how developers work with AI Agents by treating them as first-class users, with better visibility, embedded knowledge, and automatic discovery. It points toward a future of web development where AI and humans work together seamlessly.