Weekly Model Efficiency Ranking (excluding free models) TOP 10

Efficiency Ranking Indicator Guide

Efficiency rankings are calculated from the ratio of output tokens to input tokens. The lower this ratio, the more efficiently the model operates.

This metric is particularly meaningful for tasks such as document editing, code refactoring, and data analysis. Highly efficient models tend to extract only the necessary parts of the user-provided input and respond concisely, reducing unnecessary token consumption and enabling cost-effective AI usage.

However, a low ratio does not necessarily mean better performance. Some complex tasks legitimately require more output tokens, and when detailed explanations or extensive information are needed, a higher ratio may actually be desirable. This metric should therefore be interpreted in light of the nature and purpose of the task.
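As a minimal sketch of how the ratio in the table below is derived, the snippet here parses token counts written with K/M/B suffixes (as they appear in the table) and divides output by input. The helper names are illustrative, not part of any published tooling.

```python
def parse_tokens(s: str) -> float:
    """Convert a count like '498.90M' or '1.8B' to a raw token count."""
    units = {"K": 1e3, "M": 1e6, "B": 1e9}
    if s[-1] in units:
        return float(s[:-1]) * units[s[-1]]
    return float(s)

def efficiency_ratio(input_tokens: str, output_tokens: str) -> float:
    """Output/input token ratio; lower means more concise responses."""
    return parse_tokens(output_tokens) / parse_tokens(input_tokens)

# Rank 1 entry from the table: meta-llama/llama-guard-4-12b
print(round(efficiency_ratio("498.90M", "897.64K"), 4))  # → 0.0018
```

Note that suffixed counts in the table are already rounded, so recomputed ratios can differ from the listed values in the last decimal place.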

| Rank | Model Name | Input Tokens | Output Tokens | Efficiency Ratio |
|------|------------|--------------|---------------|------------------|
| 1 | meta-llama/llama-guard-4-12b | 498.90M | 897.64K | 0.0018 |
| 2 | qwen/qwen3-coder-480b-a35b-07-25 | 148.2B | 1.8B | 0.012 |
| 3 | anthropic/claude-4-sonnet-20250522 | 527.1B | 10.7B | 0.0204 |
| 4 | anthropic/claude-4.1-opus-20250805 | 27.8B | 596.22M | 0.0214 |
| 5 | mistralai/devstral-small-2507 | 793.20M | 17.11M | 0.0216 |
| 6 | qwen/qwen-turbo-2024-11-01 | 1.1B | 24.14M | 0.0217 |
| 7 | mistralai/devstral-medium-2507 | 242.31M | 5.41M | 0.0223 |
| 8 | neversleep/llama-3.1-lumimaid-8b | 823.74M | 18.88M | 0.0229 |
| 9 | openai/gpt-3.5-turbo-16k | 93.43M | 2.27M | 0.0243 |
| 10 | mistralai/mistral-tiny | 2.4B | 61.29M | 0.0251 |