Llama 4 Scout (free)

Context Length: 128,000 tokens, provided by meta-llama

Context Tokens: 128,000
Prompt Price: Free
Output Price: Free
Feature Support: 12/16

Model Overview

Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta that activates 17 billion parameters out of a total of 109B. It accepts native multimodal input (text and image) and produces multilingual text and code output across 12 languages. Designed for assistant-style interaction and visual reasoning, Scout routes tokens through a pool of 16 experts and natively supports a context length of up to 10 million tokens (the free variant listed here is capped at 128,000 tokens), with a training corpus of roughly 40 trillion tokens. Built for high efficiency and for local or commercial deployment, it uses early fusion for seamless modality integration and is instruction-tuned for multilingual chat, captioning, and image-understanding tasks. Released under the Llama 4 Community License, it was trained on data up to August 2024 and launched publicly on April 5, 2025.
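Since the overview highlights native multimodal input, the sketch below shows how a text-plus-image chat request to this model might be shaped under the OpenAI-compatible message convention many hosting providers use. The model identifier, image URL, and field layout are assumptions for illustration, not taken from this page; substitute your provider's actual values.

```python
import json

# Hypothetical model ID; check your provider's listing for the real one.
MODEL_ID = "meta-llama/llama-4-scout:free"

# A single user turn mixing a text part and an image part, following the
# common OpenAI-compatible "content parts" convention (an assumption here).
payload = {
    "model": MODEL_ID,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this image in one sentence."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
}

# Serialize the request body as it would be POSTed to a chat endpoint.
body = json.dumps(payload)
```

The text and image parts travel in one message, which matches the early-fusion design described above: both modalities are presented to the model together rather than in separate turns.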

Basic Information

Developer
meta-llama
Model Series
Llama4
Release Date
2025-04-05
Context Length
128,000 tokens
Variant
free

Pricing Information

This model is free to use

Data Policy

Terms of Service

Training Policy


Supported Features

Supported (12)

Image Input
Top K
Seed
Frequency Penalty
Presence Penalty
Repetition Penalty
Response Format
Min P
Logit Bias
Logprobs
Top Logprobs
Structured Outputs

Unsupported (4)

Tool Usage
Reasoning
Web Search Options
Top A
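The supported sampling knobs above map onto request fields in most OpenAI-compatible APIs. The sketch below assembles a request using those fields; the field names follow common convention (`top_k`, `min_p`, and so on) and the model ID is a placeholder, so treat this as a sketch of the schema rather than a definitive one for any particular provider. Note that tool usage is listed as unsupported, so no `tools` field is included.

```python
import json

# Each commented line corresponds to a feature from the "Supported" list.
request = {
    "model": "meta-llama/llama-4-scout:free",  # hypothetical model ID
    "messages": [
        {"role": "user", "content": "Return the three primary colors as JSON."}
    ],
    "seed": 42,                  # Seed: reproducible sampling
    "top_k": 40,                 # Top K
    "min_p": 0.05,               # Min P
    "frequency_penalty": 0.2,    # Frequency Penalty
    "presence_penalty": 0.1,     # Presence Penalty
    "repetition_penalty": 1.05,  # Repetition Penalty
    "logit_bias": {"1734": -100},  # Logit Bias: suppress one token ID (example)
    "logprobs": True,            # Logprobs
    "top_logprobs": 5,           # Top Logprobs
    "response_format": {"type": "json_object"},  # Response Format / Structured Outputs
}

encoded = json.dumps(request)
```

Image input, the remaining supported feature, is carried in the message content rather than as a top-level parameter.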

Other Variants

Actual Usage Statistics

Ranking: #118 out of 346 total models
Total Tokens (last 30 days): 2.5B
Daily Average Usage: 82.14M
Weekly Usage Change: 17%
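The two usage figures above are consistent with each other, as a quick cross-check shows: the daily average extrapolated over 30 days reproduces the rounded 30-day total.

```python
# Cross-check: 82.14M tokens/day over 30 days should round to the
# reported 2.5B-token 30-day total.
daily_average = 82.14e6
total_30_days = daily_average * 30          # 2.4642e9 tokens
print(round(total_30_days / 1e9, 1))        # prints 2.5 (billions)
```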

Usage Trend for the Last 30 Days

Models by Same Author (meta-llama)

Prices shown as prompt / output.

Llama 3.3 8B Instruct (free): 128,000 tokens, Free
Llama Guard 4 12B: 163,840 tokens, $0.05 / $0.05
Llama 4 Maverick (free): 128,000 tokens, Free
Llama 4 Maverick: 1,048,576 tokens, $0.15 / $0.60
Llama Guard 3 8B: 131,072 tokens, $0.02 / $0.06
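For the paid models in this list, per-request cost follows directly from the prompt and output prices. The helper below assumes the prices are quoted per 1M tokens, which is the usual convention on model listings but is not stated on this page.

```python
def estimate_cost(prompt_tokens: int, output_tokens: int,
                  prompt_price: float, output_price: float) -> float:
    """Return the USD cost of one request, assuming prices are per 1M tokens."""
    return (prompt_tokens * prompt_price + output_tokens * output_price) / 1_000_000

# Example: Llama 4 Maverick (paid) at $0.15 prompt / $0.60 output,
# for a request with 10k prompt tokens and 2k output tokens.
cost = estimate_cost(10_000, 2_000, 0.15, 0.60)
print(f"${cost:.4f}")  # prints $0.0027
```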