Mistral Small 3

- Context Tokens: 32,768
- Prompt Price: $0.05 / 1M tokens
- Output Price: $0.09 / 1M tokens
- Feature Support: 9/16
- Roleplay Category Rank: #14

Model Overview

Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware. [Read the blog post about the model here.](https://mistral.ai/news/mistral-small-3/)
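Because the weights are Apache 2.0 licensed and aimed at efficient local deployment, a minimal local-inference sketch with Hugging Face `transformers` might look like the following. The repository id used here is an assumption based on Mistral's published naming; confirm the exact id on the Hugging Face hub before use.

```python
# Sketch: run the instruction-tuned Mistral Small 3 locally with transformers.
# The repo id is an assumption; verify the exact name on the Hugging Face hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Small-24B-Instruct-2501"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~48 GB of weights at bf16; quantize for smaller GPUs
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```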

Basic Information

- Developer: mistralai
- Model Series: Mistral
- Release Date: 2025-01-30
- Context Length: 32,768 tokens
- Max Completion Tokens: 32,768 tokens
- Variant: standard

Pricing Information

- Prompt Tokens: $0.05 / 1M tokens
- Completion Tokens: $0.09 / 1M tokens
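As a quick illustration of these rates, a small sketch that estimates the dollar cost of a single request from its token counts (prices taken from the table above):

```python
# Estimate per-request cost from the listed per-million-token prices.
PROMPT_USD_PER_M = 0.05      # prompt tokens, USD per 1M
COMPLETION_USD_PER_M = 0.09  # completion tokens, USD per 1M

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (prompt_tokens * PROMPT_USD_PER_M
            + completion_tokens * COMPLETION_USD_PER_M) / 1_000_000

# Example: a 10,000-token prompt that produces a 2,000-token completion.
print(f"${estimate_cost(10_000, 2_000):.6f}")  # $0.000680
```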

Supported Features

Supported (9)

- Top K
- Seed
- Frequency Penalty
- Presence Penalty
- Repetition Penalty
- Min P
- Logit Bias
- Logprobs
- Top Logprobs

Unsupported (7)

- Image Input
- Response Format
- Tool Usage
- Structured Outputs
- Reasoning
- Web Search Options
- Top A
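To illustrate how the supported parameters might be passed, here is a sketch of an OpenAI-compatible chat-completions request. The endpoint URL and the model identifier `mistralai/mistral-small-3` are assumptions, and `top_k`, `min_p`, and `repetition_penalty` are provider-specific extensions rather than standard OpenAI fields; adapt the request to whichever provider actually serves the model.

```python
# Sketch of a chat-completions call exercising the supported sampling parameters.
# Endpoint URL and model id are assumptions; adapt them to your provider.
import os
import requests

response = requests.post(
    "https://api.example.com/v1/chat/completions",  # hypothetical OpenAI-compatible endpoint
    headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
    json={
        "model": "mistralai/mistral-small-3",  # assumed model identifier
        "messages": [{"role": "user", "content": "Write a haiku about low latency."}],
        "seed": 42,                  # Seed
        "frequency_penalty": 0.2,    # Frequency Penalty
        "presence_penalty": 0.1,     # Presence Penalty
        "repetition_penalty": 1.05,  # Repetition Penalty (provider extension)
        "top_k": 40,                 # Top K (provider extension)
        "min_p": 0.05,               # Min P (provider extension)
        "logprobs": True,            # Logprobs
        "top_logprobs": 5,           # Top Logprobs
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```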

Actual Usage Statistics

- Usage Rank: #37 out of 345 total models
- Total Tokens (Last 30 Days): 35.5B
- Daily Average Usage: 1.2B tokens
- Weekly Usage Change: 40%
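The daily average above follows directly from the 30-day total:

```python
# Daily average implied by the 30-day total of 35.5B tokens.
print(f"{35.5e9 / 30:.3g}")  # 1.18e+09, i.e. roughly 1.2B tokens per day
```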

[Usage trend chart for the last 30 days]

Models by Same Author (mistralai)

Prices shown as prompt / completion per 1M tokens.

- Magistral Small 2506: 40,000 tokens, $0.50 / $1.50
- Magistral Medium 2506: 40,960 tokens, $2.00 / $5.00
- Magistral Medium 2506 (thinking): 40,960 tokens, $2.00 / $5.00
- Devstral Small (free): 131,072 tokens, free
- Devstral Small: 128,000 tokens, $0.06 / $0.12

Similar Price Range Models

- Deepseek R1 0528 Qwen3 8B (deepseek): 131,072 tokens, $0.05 / $0.10
- Gemma 3 12B (google): 131,072 tokens, $0.05 / $0.10
- Phi 4 Multimodal Instruct (microsoft): 131,072 tokens, $0.05 / $0.10
- Qwen2.5 7B Instruct (qwen): 32,768 tokens, $0.04 / $0.10