MiniMax M1 (extended)

Context Tokens
128,000
Prompt Price
$0.55 / 1M tokens
Output Price
$2.20 / 1M tokens
Feature Support
10 of 16 features supported

Model Overview

MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It pairs a hybrid Mixture-of-Experts (MoE) architecture with a custom "lightning attention" mechanism, allowing it to process sequences of up to 1 million tokens natively while maintaining competitive FLOP efficiency; this extended variant is listed here with a 128,000-token context window. With 456B total parameters and 45.9B active per token, the model is optimized for complex, multi-step reasoning tasks. Trained via a custom reinforcement learning pipeline (CISPO), M1 excels at long-context understanding, software engineering, agentic tool use, and mathematical reasoning. Benchmarks show strong performance across FullStackBench, SWE-bench, MATH, GPQA, and TAU-Bench, often outperforming other open models such as DeepSeek R1 and Qwen3-235B.

Basic Information

Developer
minimax
Model Series
Other
Release Date
2025-06-17
Context Length
128,000 tokens
Max Completion Tokens
40,000 tokens
Variant
extended
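
A practical consequence of these limits: if the 128,000-token context window is shared between prompt and completion (a common arrangement, but provider-specific and assumed here), a request that reserves the full 40,000-token completion budget leaves roughly 88,000 tokens for the prompt. A minimal sketch in Python:

# Illustrative only: assumes the context window is shared between
# prompt and completion tokens, which is common but provider-specific.
CONTEXT_WINDOW = 128_000
MAX_COMPLETION = 40_000

def max_prompt_tokens(reserved_completion: int = MAX_COMPLETION) -> int:
    """Largest prompt that still fits alongside the reserved completion budget."""
    return CONTEXT_WINDOW - reserved_completion

print(max_prompt_tokens())  # 88000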

Pricing Information

Prompt Tokens
$0.55 / 1M tokens
Completion Tokens
$2.20 / 1M tokens
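
At these rates, per-request cost can be estimated directly from the prompt and completion token counts. The helper below is a minimal sketch; the function name and example counts are illustrative, not part of any SDK.

# Cost estimate for MiniMax M1 (extended) at the listed rates.
# Prices are USD per 1M tokens; the token counts below are examples only.
PROMPT_PRICE_PER_M = 0.55      # $ per 1M prompt tokens
COMPLETION_PRICE_PER_M = 2.20  # $ per 1M completion tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return (prompt_tokens * PROMPT_PRICE_PER_M
            + completion_tokens * COMPLETION_PRICE_PER_M) / 1_000_000

# Example: a 100,000-token prompt with a 10,000-token completion.
print(f"${estimate_cost(100_000, 10_000):.4f}")  # -> $0.0770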

Supported Features

Supported (10)

Top K
Seed
Frequency Penalty
Presence Penalty
Repetition Penalty
Min P
Logit Bias
Tool Usage
Structured Outputs
Reasoning

Unsupported (6)

Image Input
Response Format
Logprobs
Top Logprobs
Web Search Options
Top A
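
For illustration, the sketch below shows how the supported sampling controls and tool usage listed above might appear in a single request, assuming an OpenAI-compatible chat-completions schema. The model slug, parameter names, and the search_docs tool are assumptions for the example, not taken from this page; check your provider's documentation for the exact field names.

import json

# Illustrative request body only; an OpenAI-compatible schema is assumed.
payload = {
    "model": "minimax/minimax-m1:extended",  # assumed slug, provider-specific
    "messages": [
        {"role": "user", "content": "Summarize the attached design doc."}
    ],
    "max_tokens": 40_000,        # completion cap listed above
    "top_k": 40,                 # Top K (supported)
    "seed": 7,                   # Seed (supported)
    "frequency_penalty": 0.1,    # Frequency Penalty (supported)
    "presence_penalty": 0.1,     # Presence Penalty (supported)
    "repetition_penalty": 1.05,  # Repetition Penalty (supported)
    "min_p": 0.05,               # Min P (supported)
    "logit_bias": {},            # Logit Bias (supported)
    "tools": [{                  # Tool Usage (supported); hypothetical tool
        "type": "function",
        "function": {
            "name": "search_docs",
            "description": "Search internal documentation.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
            },
        },
    }],
}
print(json.dumps(payload, indent=2))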


Actual Usage Statistics

Usage Rank
#282 out of 345 models
Total Tokens (Last 30 Days)
41.00M
Daily Average Usage
20.50M
Weekly Usage Change
-

Usage Trend for the Last 30 Days

Models by Same Author (minimax)

MiniMax-01: 1,000,192 tokens, $0.20 / $1.10 (prompt / output per 1M tokens)

Similar Price Range Models

DeepSeek Prover V2 (deepseek): 131,072 tokens, $0.50 / $2.18 (prompt / output per 1M tokens)
R1 0528 (deepseek): 128,000 tokens, $0.50 / $2.15
R1 (deepseek): 128,000 tokens, $0.45 / $2.15
Llama 3.1 Nemotron Ultra 253B v1 (nvidia): 131,072 tokens, $0.60 / $1.80