DeepSeek R1 Distill Qwen 7B
Detailed information and pricing for this AI model.

Quick Stats
Context Length: 131,072 tokens
Prompt Price: $0.10 / 1M tokens
Output Price: $0.20 / 1M tokens
Feature Support: 2/16
Efficiency Rank: #1
Model Overview
DeepSeek-R1-Distill-Qwen-7B is a 7-billion-parameter dense language model distilled from DeepSeek-R1, using reasoning data generated by DeepSeek's larger reinforcement-learning-trained models. The distillation transfers advanced reasoning, math, and code capabilities into a smaller, more efficient architecture based on Qwen2.5-Math-7B. The model performs strongly on mathematical benchmarks (92.8% pass@1 on MATH-500), coding tasks (Codeforces rating 1189), and general reasoning (49.1% pass@1 on GPQA Diamond), achieving accuracy competitive with much larger models at lower inference cost.
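The listing does not say how the pass@1 figures above were computed; a commonly used unbiased estimator for pass@k (of which pass@1 is the k=1 case) is sketched below. The function name and defaults are illustrative, not from the listing.

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn without replacement from n generations of which c
    are correct, is correct.

    n: total generations per problem
    c: number of correct generations
    k: samples drawn
    """
    if n - c < k:
        # Fewer incorrect generations than samples: a correct one is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


# For k=1 this reduces to the plain success rate c/n:
rate = pass_at_k(10, 9, 1)  # 1 - comb(1,1)/comb(10,1) = 0.9
```

A benchmark score like 92.8% pass@1 is then the mean of this quantity over all problems in the set.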
Basic Information
Developer
deepseek
Model Series
Qwen
Release Date
2025-05-30
Context Length
131,072 tokens
Variant
standard
Pricing Information
Prompt Tokens
$0.10 / 1M tokens
Completion Tokens
$0.20 / 1M tokens
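Since prompt and completion tokens are priced separately per million, per-request cost is a simple weighted sum. A minimal helper (hypothetical, not part of any provider SDK) using the listed prices:

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  prompt_price_per_m: float = 0.10,
                  completion_price_per_m: float = 0.20) -> float:
    """Estimate request cost in USD from per-1M-token prices.

    Defaults match this listing: $0.10/1M prompt, $0.20/1M completion.
    """
    return (prompt_tokens / 1_000_000) * prompt_price_per_m \
         + (completion_tokens / 1_000_000) * completion_price_per_m


# e.g. a 4,000-token prompt with a 1,000-token completion:
cost = estimate_cost(4_000, 1_000)  # 0.0004 + 0.0002 = $0.0006
```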
Supported Features
Supported (2)
Seed
Reasoning
Unsupported (14)
Image Input
Top K
Frequency Penalty
Presence Penalty
Repetition Penalty
Response Format
Min P
Logit Bias
Tool Usage
Logprobs
Top Logprobs
Structured Outputs
Web Search Options
Top A
Actual Usage Statistics
Rank: #196 out of 346 total models
Total Tokens (Last 30 Days): 330.11M
Daily Average Usage: 17.37M
Weekly Usage Change: 78%