MiniMax M1 (extended)

Context Length: 256,000 tokens · Developer: minimax

Context Tokens: 256,000
Prompt Price: Free
Output Price: Free
Feature Support: 11/16

Model Overview

MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It leverages a hybrid Mixture-of-Experts (MoE) architecture paired with a custom "lightning attention" mechanism, allowing it to process long sequences—up to 1 million tokens—while maintaining competitive FLOP efficiency. With 456 billion total parameters and 45.9B active per token, this variant is optimized for complex, multi-step reasoning tasks. Trained via a custom reinforcement learning pipeline (CISPO), M1 excels in long-context understanding, software engineering, agentic tool use, and mathematical reasoning. Benchmarks show strong performance across FullStackBench, SWE-bench, MATH, GPQA, and TAU-Bench, often outperforming other open models like DeepSeek R1 and Qwen3-235B.
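For the multi-step reasoning use case described above, the request shape might look like the sketch below. The model identifier string and the step-by-step system prompt are illustrative assumptions following the common OpenAI-style chat convention; they are not confirmed by this listing.

```python
# Sketch: building a chat-completion payload for a reasoning task.
# The model id "minimax/minimax-m1:extended" is an assumed identifier
# for illustration; check the provider's docs for the real value.

def build_reasoning_request(question: str, max_tokens: int = 4096) -> dict:
    """Return a request payload for a multi-step reasoning task."""
    return {
        "model": "minimax/minimax-m1:extended",  # assumed identifier
        "messages": [
            {
                "role": "system",
                "content": "Reason step by step before giving a final answer.",
            },
            {"role": "user", "content": question},
        ],
        "max_tokens": max_tokens,
    }

payload = build_reasoning_request(
    "Prove that the sum of two even numbers is even."
)
```

Long-context inputs (up to the 256,000-token window of this variant) would go into the `messages` content the same way; only the token budget changes.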

Basic Information

Developer
minimax
Model Series
Other
Release Date
2025-06-17
Context Length
256,000 tokens
Variant
extended

Pricing Information

This model is free to use

Data Policy

Terms of Service

Training Policy

Supported Features

Supported (11)

Top K
Seed
Frequency Penalty
Presence Penalty
Repetition Penalty
Min P
Logit Bias
Tool Usage
Logprobs
Top Logprobs
Reasoning

Unsupported (5)

Image Input
Response Format
Structured Outputs
Web Search Options
Top A
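Putting the feature table above into practice, a client can restrict its sampling parameters to the supported set and drop the unsupported ones (such as `response_format`). The snake_case parameter names below follow the common OpenAI-style convention and are assumptions, not taken from this listing.

```python
# Sketch: keep only the sampling parameters this listing marks as
# supported. Parameter names follow the common OpenAI-style convention;
# treat the exact spellings as assumptions.

SUPPORTED = {
    "top_k", "seed", "frequency_penalty", "presence_penalty",
    "repetition_penalty", "min_p", "logit_bias",
    "tools", "logprobs", "top_logprobs",
}

def filter_params(params: dict) -> dict:
    """Drop any request parameter this model variant does not support."""
    return {k: v for k, v in params.items() if k in SUPPORTED}

request_extras = filter_params({
    "top_k": 40,
    "seed": 7,
    "frequency_penalty": 0.2,
    "response_format": {"type": "json_object"},  # unsupported here, dropped
})
```

Filtering client-side like this avoids provider errors when the same request template is reused across models with different feature support.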

Other Variants

Actual Usage Statistics

No recent usage data available.

Models by Same Author (minimax)

MiniMax-01
1,000,192 tokens
$0.20 / $1.10