Llama 4 Maverick 17B-128E is Llama 4's largest and most capable model. It uses a Mixture-of-Experts (MoE) architecture and early fusion for native multimodality, providing coding, reasoning, and image-understanding capabilities.
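To illustrate the MoE idea mentioned above, here is a minimal, hypothetical sketch of top-k expert routing in TypeScript. It is not Llama 4's actual implementation: a learned router scores each expert for a token, only the top-k experts run, and their outputs are combined using softmax-normalized router scores.

```typescript
// Illustrative Mixture-of-Experts routing sketch (not Llama 4's real code).
type Vec = number[];

const dot = (a: Vec, b: Vec): number => a.reduce((s, x, i) => s + x * b[i], 0);

function softmax(xs: number[]): number[] {
  const m = Math.max(...xs);
  const exps = xs.map((x) => Math.exp(x - m));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Each "expert" is a plain function here; in a real model it is a feed-forward block.
function moeForward(
  token: Vec,
  routerWeights: Vec[],            // one routing vector per expert
  experts: ((v: Vec) => Vec)[],    // one transform per expert
  k: number,
): Vec {
  // Router scores one logit per expert, then keeps only the k best.
  const scores = routerWeights.map((w) => dot(w, token));
  const topK = scores
    .map((s, i) => [s, i] as const)
    .sort((a, b) => b[0] - a[0])
    .slice(0, k);
  const gate = softmax(topK.map(([s]) => s));
  // Weighted sum of the selected experts' outputs; unselected experts do no work.
  const out: Vec = token.map(() => 0);
  topK.forEach(([, i], j) => {
    experts[i](token).forEach((v, d) => (out[d] += gate[j] * v));
  });
  return out;
}
```

With 128 experts and a small k, only a fraction of the parameters are active per token, which is how an MoE model keeps inference cost closer to its active-parameter count (17B here) than to its total size.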
```typescript
import { streamText } from 'ai';

const result = streamText({
  model: 'meta/llama-4-maverick',
  prompt: 'Why is the sky blue?',
});
```

Try out Llama 4 Maverick 17B 128E Instruct by Meta. Usage is billed to your team at API rates. Free users get $5 of credits every 30 days; you are considered a free user if you haven't made a payment.
| Model | Context | Max Output | Latency | Throughput | Input | Output | Cache | Image Gen | Video Gen | Web Search | Capabilities | Providers |
|---|---|---|---|---|---|---|---|---|---|---|---|---|