Mixtral 8x7B Instruct

Provided by mistralai, context length 32,768 tokens

Context Tokens: 32,768
Prompt Price: $0.08 / 1M tokens
Output Price: $0.24 / 1M tokens
Feature Support: 8 / 16

Model Overview

Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture-of-Experts model by Mistral AI, fine-tuned for chat and instruction following. Each layer routes tokens across 8 experts (feed-forward networks), for a total of roughly 47 billion parameters, of which only a fraction is active per token. #moe
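
To make the #moe tag concrete, here is a minimal NumPy sketch of Mixtral-style sparse routing: a gating network scores all 8 experts per token, only the top 2 run, and their outputs are mixed by renormalized gate weights. Sizes and the expert internals are simplified illustrative assumptions, not the model's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN, FFN, N_EXPERTS, TOP_K = 64, 256, 8, 2   # toy sizes for illustration

# One toy expert = a two-layer feed-forward network.
W_in = rng.normal(0, 0.02, (N_EXPERTS, HIDDEN, FFN))
W_out = rng.normal(0, 0.02, (N_EXPERTS, FFN, HIDDEN))
W_gate = rng.normal(0, 0.02, (HIDDEN, N_EXPERTS))  # router

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-2 experts and mix their outputs."""
    logits = x @ W_gate                              # (tokens, experts)
    top = np.argsort(logits, axis=-1)[:, -TOP_K:]    # indices of the 2 best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = top[t]
        weights = np.exp(logits[t, chosen])
        weights /= weights.sum()                     # softmax over the chosen experts only
        for w, e in zip(weights, chosen):
            h = np.maximum(x[t] @ W_in[e], 0)        # toy ReLU FFN (the real model differs)
            out[t] += w * (h @ W_out[e])
    return out

tokens = rng.normal(size=(4, HIDDEN))
print(moe_layer(tokens).shape)  # (4, 64): only 2 of the 8 experts ran per token
```

This is why the total parameter count (~47B) is much larger than the compute actually spent on any single token.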

Basic Information

Developer
mistralai
Model Series
Mistral
Release Date
2023-12-10
Context Length
32,768 tokens
Max Completion Tokens
16,384 tokens
Variant
standard

Pricing Information

Prompt Tokens
$0.08 / 1M tokens
Completion Tokens
$0.24 / 1M tokens
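
At these rates, a request costs prompt_tokens x $0.08 per million plus completion_tokens x $0.24 per million. A small sketch of that arithmetic (the token counts in the example are made up):

```python
PROMPT_PRICE = 0.08 / 1_000_000      # USD per prompt token
COMPLETION_PRICE = 0.24 / 1_000_000  # USD per completion token

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost for one request at the listed rates."""
    return prompt_tokens * PROMPT_PRICE + completion_tokens * COMPLETION_PRICE

# Example: a 2,000-token prompt with a 500-token reply
print(f"${estimate_cost(2_000, 500):.6f}")  # = $0.000280
```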

Supported Features

Supported (8)

Top K
Seed
Frequency Penalty
Presence Penalty
Repetition Penalty
Response Format
Min P
Tool Usage

Unsupported (8)

Image Input
Logit Bias
Logprobs
Top Logprobs
Structured Outputs
Reasoning
Web Search Options
Top A
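
The supported parameters above map onto the sampling and tool-use fields of a typical OpenAI-compatible chat-completions request. The sketch below only assembles such a payload; the model slug, field names, and the example tool are assumptions meant to illustrate the feature list and should be checked against your provider's API reference.

```python
import json

# Hypothetical OpenAI-compatible payload; field names mirror the supported
# features listed above but may differ per provider.
payload = {
    "model": "mistralai/mixtral-8x7b-instruct",   # assumed model slug
    "messages": [{"role": "user", "content": "Summarize sparse MoE in one sentence."}],
    "top_k": 40,
    "min_p": 0.05,
    "seed": 42,
    "frequency_penalty": 0.1,
    "presence_penalty": 0.1,
    "repetition_penalty": 1.05,
    "response_format": {"type": "json_object"},
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",                # hypothetical tool for illustration
            "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
        },
    }],
    "max_tokens": 1024,                           # must stay within the 16,384-token completion limit
}
print(json.dumps(payload, indent=2))
```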

Usage Statistics

Ranking: #68 of 346 models
Total Tokens (Last 30 Days): 9.3B
Daily Average Usage: 308.80M tokens
Weekly Usage Change: 39%

Usage Trend for the Last 30 Days

Models by Same Author (mistralai)

Prices shown as prompt / completion, per 1M tokens.

Magistral Small 2506: 40,000 tokens, $0.50 / $1.50
Magistral Medium 2506: 40,960 tokens, $2.00 / $5.00
Magistral Medium 2506 (thinking): 40,960 tokens, $2.00 / $5.00
Devstral Small (free): 131,072 tokens, Free
Devstral Small: 128,000 tokens, $0.06 / $0.12

Similar Price Range Models

Prices shown as prompt / completion, per 1M tokens.

Qwen3 14B (qwen): 40,960 tokens, $0.06 / $0.24
Nova Lite 1.0 (amazon): 300,000 tokens, $0.06 / $0.24
Qwen3 30B A3B (qwen): 40,960 tokens, $0.08 / $0.29
R1 Distill Qwen 7B (deepseek): 131,072 tokens, $0.10 / $0.20
Gemma 3 27B (google): 131,072 tokens, $0.10 / $0.20