Llama 4 Maverick (free)

A free model from meta-llama with a 128,000-token context window.

Context Tokens: 128,000
Prompt Price: Free
Output Price: Free
Feature Support: 6/16

Model Overview

Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass (400B total). It accepts multilingual text and image input and produces multilingual text and code output across 12 supported languages. Optimized for vision-language tasks, Maverick is instruction-tuned for assistant-like behavior, image reasoning, and general-purpose multimodal interaction. It features early fusion for native multimodality and a 1 million token context window, although the free variant listed here is served with a 128,000-token context. The model was trained on a curated mixture of public, licensed, and Meta-platform data covering roughly 22 trillion tokens, with a knowledge cutoff of August 2024. Released on April 5, 2025 under the Llama 4 Community License, Maverick is suited for research and commercial applications that require advanced multimodal understanding and high throughput.
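
The overview describes an OpenAI-compatible style of multimodal interaction: text and image parts in a single user message, text out. As a hedged illustration only, the sketch below shows what such a request could look like; the base URL, API key variable, and the model slug "meta-llama/llama-4-maverick:free" are assumptions and depend entirely on which provider hosts the model.

```python
# Minimal sketch of a text + image chat request to an OpenAI-compatible
# chat completions endpoint. API_BASE, the environment variable name, and
# the model slug are placeholders, not confirmed by this listing.
import os
import requests

API_BASE = "https://example-provider.com/api/v1"  # hypothetical endpoint
API_KEY = os.environ["PROVIDER_API_KEY"]          # hypothetical env var

payload = {
    "model": "meta-llama/llama-4-maverick:free",  # assumed slug for the free variant
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    "max_tokens": 1024,  # stays well under the 4,028-token completion cap listed below
}

resp = requests.post(
    f"{API_BASE}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```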

Basic Information

Developer: meta-llama
Model Series: Llama 4
Release Date: 2025-04-05
Context Length: 128,000 tokens
Max Completion Tokens: 4,028 tokens
Variant: free

Pricing Information

This model is free to use

Supported Features

Supported (6)

Image Input
Top K
Repetition Penalty
Response Format
Tool Usage
Structured Outputs

Unsupported (10)

Seed
Frequency Penalty
Presence Penalty
Min P
Logit Bias
Logprobs
Top Logprobs
Reasoning
Web Search Options
Top A
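
To make the feature split above concrete, the payload sketch below sticks to the supported controls (top-k, repetition penalty, response format, tool usage, structured outputs) and deliberately omits the unsupported ones. The parameter names (top_k, repetition_penalty, response_format, tools) follow common OpenAI-compatible conventions and are assumptions; the hosting provider's documentation is the authority on exact spelling.

```python
# Request body restricted to the parameters this listing marks as supported.
# Names follow common OpenAI-compatible conventions and are assumptions.
payload = {
    "model": "meta-llama/llama-4-maverick:free",  # assumed slug, as above
    "messages": [
        {"role": "user", "content": "Return the capital of France as JSON."}
    ],
    # Supported sampling controls
    "top_k": 40,
    "repetition_penalty": 1.1,
    # Response Format + Structured Outputs: request JSON that matches a schema
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "capital_answer",
            "schema": {
                "type": "object",
                "properties": {"capital": {"type": "string"}},
                "required": ["capital"],
            },
        },
    },
    # Tool Usage is also supported; a "tools" list in the same OpenAI-compatible
    # shape could be added here.
    # Omitted because the listing marks them unsupported: seed, frequency_penalty,
    # presence_penalty, min_p, logit_bias, logprobs, top_logprobs, top_a.
}
```

Such a payload would be POSTed in the same way as the earlier request sketch.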

Actual Usage Statistics

No recent usage data available.

Models by Same Author (meta-llama)

Prices are shown as prompt price / output price.

Llama 3.3 8B Instruct (free): 128,000 tokens, Free
Llama 3.3 8B Instruct: 128,000 tokens, $0.00 / $0.00
Llama Guard 4 12B: 163,840 tokens, $0.18 / $0.18
Llama 4 Scout (free): 128,000 tokens, Free
Llama 4 Scout: 1,048,576 tokens, $0.08 / $0.30