Sarvam-M

Provided by sarvamai · Context Length 32,768 tokens

Context Tokens: 32,768
Prompt Price: $0.25 / 1M tokens
Output Price: $0.75 / 1M tokens
Feature Support: 4/16

Model Overview

Sarvam-M is a 24B-parameter, instruction-tuned derivative of Mistral-Small-3.1-24B-Base-2503, post-trained on English plus ten major Indic languages (bn, hi, kn, gu, mr, ml, or, pa, ta, te). The model offers a dual-mode interface: a “non-think” mode for low-latency chat and an optional “think” phase that exposes chain-of-thought tokens for more demanding reasoning, math, and coding tasks. Benchmark reports show solid gains over similarly sized open models on Indic-language QA, GSM-8K math, and SWE-Bench coding, making Sarvam-M a practical general-purpose choice for multilingual conversational agents as well as analytical workloads that mix English, native Indic scripts, and romanized text.
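
For illustration, a minimal call sketch assuming Sarvam-M is reachable through an OpenAI-compatible chat completions endpoint. The base URL and the `sarvamai/sarvam-m` model id are assumptions inferred from the developer name on this page, not confirmed details; enabling the optional “think” phase requires a provider-specific option that this page does not document, so only the plain “non-think” call is shown.

```python
# Minimal sketch: a plain ("non-think") chat completion against Sarvam-M.
# The base_url and model id below are assumptions; check your provider's docs.
# The optional "think" phase needs a provider-specific toggle that is not
# documented on this page, so it is not shown here.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-gateway.com/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="sarvamai/sarvam-m",  # assumed model id (developer/model slug)
    messages=[
        {"role": "user", "content": "मुझे दो वाक्यों में प्रकाश संश्लेषण समझाइए।"},
    ],
    max_tokens=256,  # well inside the 32,768-token completion limit listed below
)
print(response.choices[0].message.content)
```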

Basic Information

Developer: sarvamai
Model Series: Other
Release Date: 2025-05-25
Context Length: 32,768 tokens
Max Completion Tokens: 32,768 tokens
Variant: standard

Pricing Information

Prompt Tokens: $0.25 / 1M tokens
Completion Tokens: $0.75 / 1M tokens
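
As a quick back-of-the-envelope check, the sketch below (plain Python, no external dependencies) applies the two listed rates to a hypothetical request size.

```python
# Minimal sketch: estimating request cost at the listed Sarvam-M rates
# ($0.25 per 1M prompt tokens, $0.75 per 1M completion tokens).
PROMPT_PRICE_PER_M = 0.25
COMPLETION_PRICE_PER_M = 0.75

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-million-token rates."""
    return (
        prompt_tokens / 1_000_000 * PROMPT_PRICE_PER_M
        + completion_tokens / 1_000_000 * COMPLETION_PRICE_PER_M
    )

# Example: a 2,000-token prompt with an 800-token reply costs
# 2000/1e6 * 0.25 + 800/1e6 * 0.75 = $0.0005 + $0.0006 = $0.0011
print(f"${estimate_cost(2_000, 800):.4f}")
```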

Supported Features

Supported (4)

Top K
Frequency Penalty
Presence Penalty
Repetition Penalty

Unsupported (12)

Image Input
Seed
Response Format
Min P
Logit Bias
Tool Usage
Logprobs
Top Logprobs
Structured Outputs
Reasoning
Web Search Options
Top A
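
Because only the four sampling controls above are exposed (and tool usage, structured outputs, logprobs, and image input are not), requests should stick to those knobs. A minimal sketch assuming an OpenAI-compatible gateway: `frequency_penalty` and `presence_penalty` are standard fields, while `top_k` and `repetition_penalty` are passed through `extra_body` on the assumption that the gateway forwards non-standard sampling fields.

```python
# Minimal sketch: a request limited to the four supported sampling controls.
# frequency_penalty / presence_penalty are standard OpenAI-style fields;
# top_k and repetition_penalty are not, so they go through extra_body --
# whether the gateway forwards them this way is an assumption, not confirmed here.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-gateway.com/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="sarvamai/sarvam-m",        # assumed model id
    messages=[{"role": "user", "content": "Summarise the plot of Ponniyin Selvan in Tamil."}],
    frequency_penalty=0.2,            # supported
    presence_penalty=0.1,             # supported
    extra_body={
        "top_k": 40,                  # supported, but not a standard OpenAI field
        "repetition_penalty": 1.1,    # supported, but not a standard OpenAI field
    },
)
print(response.choices[0].message.content)
```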

Actual Usage Statistics

Popularity Rank: #315 out of 346 total models
Total Tokens (Last 30 Days): 13.72M
Daily Average Usage: 980.32K tokens
Weekly Usage Change: 83%

Usage Trend for the Last 30 Days

Similar Price Range Models

Qwen2.5 VL 72B Instruct (qwen): 32,000-token context, $0.25 / $0.75 per 1M tokens
Qwen VL Plus (qwen): 7,500-token context, $0.21 / $0.63 per 1M tokens
DeepSeek V3 0324 (deepseek): 163,840-token context, $0.30 / $0.88 per 1M tokens
Codestral 2501 (mistralai): 262,144-token context, $0.30 / $0.90 per 1M tokens
Mistral Small (mistralai): 32,768-token context, $0.20 / $0.60 per 1M tokens