Mistral Small 3 (free)

Context length 32,768 tokens, provided by mistralai

Context Tokens: 32,768
Prompt Price: Free
Output Price: Free
Feature Support: 9/16

Model Overview

Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware. [Read the blog post about the model here.](https://mistral.ai/news/mistral-small-3/)
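Because the weights are Apache 2.0 licensed and intended for local deployment, a common way to try the instruction-tuned variant is through Hugging Face transformers. The sketch below is illustrative only: the checkpoint ID is an assumption based on Mistral's usual naming scheme and is not taken from this page.

```python
# Minimal local-inference sketch with Hugging Face transformers.
# Assumption: the instruct checkpoint is published on the Hub under a name
# like the one below; substitute the official Mistral Small 3 model ID.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-Small-24B-Instruct-2501",  # assumed model ID
    device_map="auto",           # requires `accelerate`; spreads the 24B weights across devices
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
)

messages = [
    {"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."}
]

# The chat-style pipeline returns the full conversation; the last message
# is the model's reply.
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])
```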

Basic Information

Developer: mistralai
Model Series: Mistral
Release Date: 2025-01-30
Context Length: 32,768 tokens
Variant: free

Pricing Information

This model is free to use

Data Policy

Terms of Service

Training Policy

Supported Features

Supported (9)

Top K
Seed
Frequency Penalty
Presence Penalty
Repetition Penalty
Min P
Logit Bias
Logprobs
Top Logprobs

Unsupported (7)

Image Input
Response Format
Tool Usage
Structured Outputs
Reasoning
Web Search Options
Top A
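The supported list above covers sampling controls only; since Response Format, Tool Usage, and Structured Outputs are unsupported, requests should omit `response_format` and `tools`. As a minimal sketch of exercising these sampling parameters, the example below sends a chat-completion request to an OpenAI-compatible endpoint such as OpenRouter. The endpoint URL, model slug, and parameter values are assumptions for illustration, not details taken from this page.

```python
# Hedged sketch: calling the free variant through an OpenAI-compatible
# chat-completions endpoint, using several of the sampling parameters
# listed as supported above.
import os
import requests

payload = {
    "model": "mistralai/mistral-small-24b-instruct-2501:free",  # assumed model slug
    "messages": [
        {"role": "user", "content": "Give me three practical uses for a 24B model."}
    ],
    # Sampling controls from the supported-features list:
    "seed": 42,                  # reproducible sampling
    "top_k": 40,
    "min_p": 0.05,
    "frequency_penalty": 0.2,
    "presence_penalty": 0.1,
    "repetition_penalty": 1.05,
    "logprobs": True,
    "top_logprobs": 3,
}

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",  # assumed endpoint
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```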

Other Variants

Actual Usage Statistics

Ranking: #255 out of 345 total models
Total Tokens (Last 30 Days): 89.86M
Daily Average Usage: 3.00M tokens
Weekly Usage Change: 4%

Usage Trend for the Last 30 Days

Models by Same Author (mistralai)

Magistral Small 2506: 40,000 tokens, $0.50 / $1.50 (prompt / output)
Magistral Medium 2506: 40,960 tokens, $2.00 / $5.00
Magistral Medium 2506 (thinking): 40,960 tokens, $2.00 / $5.00
Devstral Small (free): 131,072 tokens, Free
Devstral Small: 128,000 tokens, $0.06 / $0.12