DeepSeek R1 Distill Qwen 32B (free)

16,000-token context length, provided by deepseek

Context Tokens: 16,000
Prompt Price: Free
Output Price: Free
Feature Support: 1/16

Model Overview

DeepSeek R1 Distill Qwen 32B is a distilled large language model based on [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B), fine-tuned on outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Benchmark results include:

- AIME 2024 pass@1: 72.6
- MATH-500 pass@1: 94.3
- CodeForces rating: 1691

Fine-tuning on DeepSeek R1's outputs gives the model performance competitive with larger frontier models.
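As a minimal sketch of how the model might be called through an OpenAI-compatible chat completions endpoint: the model slug below is an assumption based on this page's naming (check your provider's model list for the exact identifier), and the 16,000-token limits come from the Basic Information section.

```python
import json

# Assumed model slug; verify against your provider's model list.
MODEL = "deepseek/deepseek-r1-distill-qwen-32b:free"
CONTEXT_LIMIT = 16_000  # context length in tokens, per this page

# Request body for a POST to the provider's /chat/completions endpoint.
payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
    # Max completion tokens is also capped at 16,000 on this variant,
    # so keep the prompt plus this budget within the context limit.
    "max_tokens": 4_096,
}

body = json.dumps(payload)  # serialized request body
```

Since the variant is free, no pricing fields or billing headers are involved; only the request shape matters.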

Basic Information

Developer: deepseek
Model Series: Qwen
Release Date: 2025-01-29
Context Length: 16,000 tokens
Max Completion Tokens: 16,000 tokens
Variant: free

Pricing Information

This model is free to use

Supported Features

Supported (1)

Reasoning

Unsupported (15)

Image Input
Top K
Seed
Frequency Penalty
Presence Penalty
Repetition Penalty
Response Format
Min P
Logit Bias
Tool Usage
Logprobs
Top Logprobs
Structured Outputs
Web Search Options
Top A
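Because only 1 of the 16 listed features (reasoning) is supported, request parameters corresponding to the unsupported features should be dropped before sending a request. A minimal sketch, assuming OpenAI-style field names for each list entry (that name mapping is an assumption, not taken from this page):

```python
# Fields this variant does not support, per the "Unsupported (15)" list above.
# The OpenAI-style field names are an assumed mapping of the listed features.
UNSUPPORTED = {
    "top_k", "seed", "frequency_penalty", "presence_penalty",
    "repetition_penalty", "response_format", "min_p", "logit_bias",
    "tools", "tool_choice", "logprobs", "top_logprobs",
    "structured_outputs", "web_search_options", "top_a",
}

def strip_unsupported(payload: dict) -> dict:
    """Return a copy of the request payload without unsupported fields."""
    return {k: v for k, v in payload.items() if k not in UNSUPPORTED}

# Example: seed and top_k are removed; model and temperature pass through.
request = {"model": "example-model", "temperature": 0.7, "seed": 42, "top_k": 40}
clean = strip_unsupported(request)
```

Silently dropping fields keeps requests portable across variants; alternatively, raising an error on unsupported fields would surface the mismatch earlier.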

Actual Usage Statistics

Ranking: #140 out of 346 models
Total Tokens, Last 30 Days: 1.4B
Daily Average Usage: 47.85M
Weekly Usage Change: 81%

Models by Same Author (deepseek)

Prices are shown as prompt / output.

- R1 Distill Qwen 7B · 131,072 tokens · $0.10 / $0.20
- DeepSeek R1 0528 Qwen3 8B (free) · 131,072 tokens · Free
- DeepSeek R1 0528 Qwen3 8B · 131,072 tokens · $0.05 / $0.10
- R1 0528 (free) · 163,840 tokens · Free
- R1 0528 · 128,000 tokens · $0.50 / $2.15