LLM Model Ranking: Real-time, Usage-based Model Rankings and Statistics
Compare and analyze the actual usage and performance of LLM models from various perspectives.
Daily, by Model: Top 10 High-Reasoning Models (excluding free models)
Thinking Ratio Indicator Guide
The thinking ratio is calculated as reasoning tokens divided by input tokens. The higher this ratio, the more internal reasoning the model performs before producing a response.
This metric indicates how deeply the model thinks before generating a response. Models with a higher thinking ratio are likely to produce more sophisticated results on tasks such as complex problem solving, logical reasoning, and multi-step planning. However, a high thinking ratio does not necessarily mean better performance: on some tasks, excessive internal reasoning incurs unnecessary computational cost, and it is inefficient in situations that call for concise responses. This metric should therefore be interpreted in light of the characteristics and purpose of the task.
| Rank | Model Name | Input Tokens | Reasoning Tokens | Thinking Ratio |
|---|---|---|---|---|
| 1 | perplexity/sonar-deep-research | 1.02M | 56.25M | 55.2578 |
| 2 | thudm/glm-z1-32b-0414 | 34.13K | 58.40K | 1.7113 |
| 3 | thudm/glm-z1-rumination-32b-0414 | 21.52K | 33.83K | 1.5716 |
| 4 | deepseek/deepseek-r1-distill-llama-8b | 30.04M | 31.71M | 1.0557 |
| 5 | deepseek/deepseek-r1-distill-llama-70b | 174.99M | 181.37M | 1.0365 |
| 6 | openai/o1-mini-2024-09-12 | 138.01K | 104.90K | 0.7601 |
| 7 | openai/o1-mini | 3.53M | 2.67M | 0.7559 |
| 8 | deepseek/deepseek-r1-distill-qwen-1.5b | 2.57M | 1.92M | 0.7487 |
| 9 | qwen/qwq-32b | 79.23M | 44.66M | 0.5637 |
| 10 | openai/o1-preview | 1.08M | 530.61K | 0.4906 |
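
To make the definition concrete, here is a minimal Python sketch (not part of the ranking site; the `parse_tokens` helper is hypothetical) that converts the K/M-suffixed counts shown above into raw numbers and computes the thinking ratio as reasoning tokens divided by input tokens.

```python
def parse_tokens(s: str) -> float:
    """Convert a human-readable count such as '79.23M' or '58.40K' into a raw number."""
    suffixes = {"K": 1e3, "M": 1e6, "B": 1e9}
    if s and s[-1] in suffixes:
        return float(s[:-1]) * suffixes[s[-1]]
    return float(s)


def thinking_ratio(reasoning_tokens: str, input_tokens: str) -> float:
    """Thinking ratio = reasoning tokens / input tokens."""
    return parse_tokens(reasoning_tokens) / parse_tokens(input_tokens)


# Example with the qwen/qwq-32b row from the table above:
# 44.66M reasoning tokens / 79.23M input tokens ≈ 0.5637
print(round(thinking_ratio("44.66M", "79.23M"), 4))  # 0.5637
```

Note that ratios recomputed from the rounded K/M figures may differ slightly from the table values, which are derived from the exact token counts.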