DeepSeek V3.1 (free)

Context Length: 64,000 tokens · Developer: deepseek

Context Tokens: 64,000
Prompt Price: Free
Output Price: Free
Feature Support: 2/16

Model Overview

DeepSeek-V3.1 is a large hybrid reasoning model (671B parameters, 37B active) that supports both thinking and non-thinking modes via prompt templates. It extends the DeepSeek-V3 base with a two-phase long-context training process, reaching up to 128K tokens, and uses FP8 microscaling for efficient inference. Users can control the reasoning behaviour with the `reasoning.enabled` boolean; [learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config). The model improves tool use, code generation, and reasoning efficiency, achieving performance comparable to DeepSeek-R1 on difficult benchmarks while responding more quickly. It supports structured tool calling, code agents, and search agents, making it suitable for research, coding, and agentic workflows. It succeeds the [DeepSeek V3-0324](/deepseek/deepseek-chat-v3-0324) model and performs well on a variety of tasks.
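Below is a minimal sketch of toggling thinking mode through the `reasoning.enabled` boolean on the OpenRouter chat completions endpoint. The model slug, environment-variable name, and response shape shown here are assumptions for illustration, not confirmed by this page.

```python
import os
import requests

# Sketch only: slug and env-var name are assumptions.
API_URL = "https://openrouter.ai/api/v1/chat/completions"
API_KEY = os.environ["OPENROUTER_API_KEY"]  # assumed env var name

payload = {
    "model": "deepseek/deepseek-chat-v3.1:free",  # assumed slug for the free variant
    "messages": [
        {"role": "user", "content": "Prove that the sum of two even numbers is even."}
    ],
    # Per the docs linked above, thinking mode is controlled by the
    # `reasoning.enabled` boolean; set it to False for non-thinking mode.
    "reasoning": {"enabled": True},
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```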

Basic Information

Developer: deepseek
Model Series: DeepSeek
Release Date: 2025-08-21
Context Length: 64,000 tokens
Variant: free

Pricing Information

This model is free to use

Supported Features

Supported (2)

Seed
Reasoning

Unsupported (14)

Image Input
Top K
Frequency Penalty
Presence Penalty
Repetition Penalty
Response Format
Min P
Logit Bias
Tool Usage
Logprobs
Top Logprobs
Structured Outputs
Web Search Options
Top A
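Of the parameters listed above, this free variant accepts only `seed` and `reasoning`; the unsupported sampling knobs (e.g. `top_k`, `logit_bias`) are best left out of requests. A short sketch of a request restricted to the supported parameters follows; as before, the model slug and env-var name are assumptions.

```python
import os
import requests

API_URL = "https://openrouter.ai/api/v1/chat/completions"
API_KEY = os.environ["OPENROUTER_API_KEY"]  # assumed env var name

payload = {
    "model": "deepseek/deepseek-chat-v3.1:free",  # assumed slug
    "messages": [
        {"role": "user", "content": "Summarize FP8 microscaling in two sentences."}
    ],
    "seed": 42,                       # supported: best-effort repeatable sampling
    "reasoning": {"enabled": False},  # supported: non-thinking mode for a quicker reply
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```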

Actual Usage Statistics

No recent usage data available.

Models by Same Author (deepseek)

| Model | Context Length | Prompt / Output Price |
|---|---|---|
| DeepSeek V3.1 Base | 163,840 tokens | $0.20 / $0.80 |
| R1 Distill Qwen 7B | 131,072 tokens | $0.00 / $0.00 |
| Deepseek R1 0528 Qwen3 8B (free) | 131,072 tokens | Free |
| Deepseek R1 0528 Qwen3 8B | 131,072 tokens | $0.02 / $0.07 |
| R1 0528 (free) | 163,840 tokens | Free |