Qwen 3.5 Plus comes with a 1M context window and built-in adaptive tool use. It excels at agentic workflows: thinking, searching, and using tools across multimodal contexts. That makes it well-suited for web development, frontend tasks, and turning instructions into working code. Compared to Qwen 3 VL, it delivers stronger performance in scientific problem solving and visual reasoning tasks.
To use this model, set model to alibaba/qwen3.5-plus in the AI SDK:
```typescript
import { streamText } from 'ai';

const result = streamText({
  model: 'alibaba/qwen3.5-plus',
  prompt: `Analyze this UI mockup, extract the design system,
and generate a production-ready React component
with responsive breakpoints and theme support.`,
});
```
AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.
Vercel CDN now supports the stale-if-error directive with Cache-Control headers, enabling more resilient caching behavior during origin failures.
You can now use the stale-if-error directive to specify how long (in seconds) a stale cached response can still be served if a request to the origin fails. When this directive is present and the origin returns an error, the CDN may serve a previously cached response instead of returning the error to the client. Stale responses may be served for errors like 500 Internal Server Errors, network failures, or DNS errors.
This allows applications to remain available and respond gracefully when upstream services are temporarily unavailable.
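As a sketch of how this fits together, a route handler can attach the directive in its Cache-Control response header; the durations below are illustrative, and the handler name is a placeholder:

```typescript
// Build a Cache-Control value combining CDN freshness with stale-if-error.
function cacheControlHeader(sMaxAge: number, staleIfError: number): string {
  return `s-maxage=${sMaxAge}, stale-if-error=${staleIfError}`;
}

// Sketch of a route handler: the response is fresh at the CDN for 60 seconds;
// if the origin later errors, a stale copy may be served for up to 24 hours.
function GET(): Response {
  return new Response('ok', {
    headers: {
      'Cache-Control': cacheControlHeader(60, 86_400),
    },
  });
}
```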
Browserbase is now available on the Vercel Marketplace, allowing teams to run browser automation for AI agents without managing infrastructure.
This integration connects agents to remote browsers over the Chrome DevTools Protocol (CDP), enabling workflows that require interacting with real websites, such as signing in to dashboards, filling out forms, or navigating dynamic pages.
With this one-click integration, teams benefit from unified billing and infrastructure designed for long-lived, stateful sessions. Key capabilities include:
- Install and connect with a single API key
- Connect agents to remote browsers over CDP
- Reduce operational complexity for browser-based agent workflows
Also available today is support for Web Bot Auth for Browserbase, enabling agents to reliably browse Vercel-hosted deployments without interruption from security layers.
Get started with Browserbase on the Vercel Marketplace or try this example to see it in action.
MiniMax M2.5 plans before it builds, breaking down functions, structure, and UI design before writing code. It handles full-stack projects across Web, Android, iOS, Windows, and Mac, covering the entire development lifecycle from initial system design through code review. Compared to M2.1, it adapts better to unfamiliar codebases and needs fewer search rounds to solve problems.
To use this model, set model to minimax/minimax-m2.5 in the AI SDK:
```typescript
import { streamText } from 'ai';

const result = streamText({
  model: 'minimax/minimax-m2.5',
  prompt: `Design and implement a multi-tenant SaaS authentication system
with role-based access control, supporting OAuth providers
and API key management.`,
});
```
Any new deployment containing a version of the third-party package next-mdx-remote that is vulnerable to CVE-2026-0969 will now automatically fail to deploy on Vercel.
We strongly recommend upgrading to a patched version regardless of your hosting provider.
This automatic protection can be disabled by setting the DANGEROUSLY_DEPLOY_VULNERABLE_CVE_2026_0969=1 environment variable on your Vercel project. Learn more
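If you do need to opt out, one way to set the variable is through the Vercel CLI; the variable name comes from this changelog, while the exact invocation below is just one possible approach:

```shell
# Not recommended: opt out of the automatic block for this CVE.
# Adds the environment variable to the production environment;
# enter "1" when prompted for the value.
vercel env add DANGEROUSLY_DEPLOY_VULNERABLE_CVE_2026_0969 production
```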
You can now access GLM-5 via AI Gateway with no other provider accounts required.
GLM-5 from Z.AI is now available on AI Gateway. Compared to GLM-4.7, GLM-5 adds multiple thinking modes, improved long-range planning and memory, and better handling of complex multi-step agent tasks. It's particularly strong at agentic coding, autonomous tool use, and extracting structured data from documents like contracts and financial reports.
To use this model, set model to zai/glm-5 in the AI SDK:
```typescript
import { streamText } from 'ai';

const result = streamText({
  model: 'zai/glm-5',
  prompt: `Generate a complete REST API with authentication,
database models, and test coverage for a task management app.`,
});
```
Vercel Sandbox can now enforce egress network policies through Server Name Indication (SNI) filtering and CIDR blocks, giving you control over which hosts a sandbox can reach. Outbound TLS connections are matched against your policy at the handshake, and unauthorized destinations are rejected before any data is transmitted.
By default, sandboxes have unrestricted internet access. When running untrusted or AI-generated code, you can lock down the network to only the services your workload actually needs. A compromised or hallucinated code snippet cannot exfiltrate data or make unintended API calls; traffic to any domain not on your allowlist is blocked.
The modern internet runs on hostnames, not IP addresses: a handful of addresses can serve thousands of domains, and traditional IP-based firewall rules can't precisely distinguish between them.
Host-based egress control typically requires an HTTP proxy, but that breaks non-HTTP protocols like Redis and Postgres. Instead, we built an SNI-peeking firewall that inspects the initial unencrypted bytes of a TLS handshake to extract the target hostname. Since nearly all internet traffic is TLS-encrypted today, this covers all relevant cases. For legacy or non-TLS systems, we also support IP/CIDR-based rules as a fallback.
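To illustrate the CIDR fallback, here is a minimal sketch (not the Sandbox API) of checking an IPv4 address against a CIDR rule with plain bit arithmetic:

```typescript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipToInt(ip: string): number {
  return ip.split('.').reduce((acc, octet) => ((acc << 8) | parseInt(octet, 10)) >>> 0, 0);
}

// Check whether an address falls inside a CIDR block like "10.0.0.0/16".
function inCidr(ip: string, cidr: string): boolean {
  const [base, bitsStr] = cidr.split('/');
  const bits = parseInt(bitsStr, 10);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

inCidr('10.0.1.7', '10.0.0.0/16'); // true: 10.0.x.x matches
inCidr('8.8.8.8', '10.0.0.0/16');  // false: outside the block
```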
Policies can be updated dynamically on a running sandbox without restarting the process. Start with full internet access to install dependencies, lock it down before executing untrusted code, reopen to stream results after user approval, and then air-gap again with deny-all, all within one session:
```typescript
import { Sandbox } from '@vercel/sandbox';

const sandbox = await Sandbox.create();

// Phase 1: Open network, download everything we need
```
Vercel Flags is a feature flag provider built into the Vercel platform. It lets you create and manage feature flags with targeting rules, user segments, and environment controls directly in the Vercel Dashboard.
The Flags SDK provides a framework-native way to define and use these flags within Next.js and SvelteKit applications, integrating directly with your existing codebase:
flags.ts
```typescript
import { vercelAdapter } from '@flags-sdk/vercel';
import { flag } from 'flags/next';

export const showNewFeature = flag({
  key: 'show-new-feature',
  decide: () => false,
  description: 'Show the new dashboard redesign',
  adapter: vercelAdapter(),
});
```
You can then use the flag within your pages:
app/page.tsx
```tsx
import { showNewFeature } from '~/flags';

export default async function Page() {
  const isEnabled = await showNewFeature();
  return isEnabled ? <NewDashboard /> : <OldDashboard />;
}
```
For teams using other frameworks or custom backends, the Vercel Flags adapter supports the OpenFeature standard, allowing you to combine feature flags across various systems and keep your flag management consistent.
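The value of a standard like OpenFeature is that every backend implements the same resolution interface, so one client can evaluate flags from different systems. The sketch below illustrates that pattern with hypothetical names and a stubbed provider; it is not the real OpenFeature SDK or the Vercel adapter:

```typescript
// Hypothetical sketch of the provider pattern behind OpenFeature:
// every flag backend implements the same interface.
interface FlagProvider {
  resolveBooleanValue(key: string, defaultValue: boolean): Promise<boolean>;
}

// A stub standing in for a Vercel Flags-backed provider.
const vercelProvider: FlagProvider = {
  async resolveBooleanValue(key, defaultValue) {
    const flags: Record<string, boolean> = { 'show-new-feature': false };
    return flags[key] ?? defaultValue;
  },
};

// The client only knows the interface, so providers are interchangeable.
class FlagClient {
  constructor(private provider: FlagProvider) {}
  getBooleanValue(key: string, defaultValue: boolean): Promise<boolean> {
    return this.provider.resolveBooleanValue(key, defaultValue);
  }
}

const client = new FlagClient(vercelProvider);
client.getBooleanValue('show-new-feature', true).then((enabled) => {
  // enabled is false here, resolved from the stubbed provider above
});
```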
Vercel Flags is priced at $30 per 1 million flag requests ($0.00003 per event), where a flag request is any request to your application that reads the underlying flags configuration. A single request that evaluates multiple feature flags from the same source project still counts as one flag request.
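For a back-of-the-envelope estimate at that rate, the traffic volumes below are purely illustrative:

```typescript
// $30 per 1 million flag requests, per the pricing above.
const PRICE_PER_MILLION_USD = 30;

function flagsCostUsd(flagRequests: number): number {
  return (flagRequests / 1_000_000) * PRICE_PER_MILLION_USD;
}

flagsCostUsd(1_000_000);  // 30:  1M flag requests cost $30
flagsCostUsd(10_000_000); // 300: 10M flag requests cost $300
```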
Vercel Flags is now in beta and available to teams on all plans.