Monthly Model Efficiency Ranking by Model (excluding free models): Top 10

Efficiency Ranking Indicator Guide

Efficiency rankings are calculated from the ratio of output tokens to input tokens (efficiency ratio = output tokens / input tokens). The lower this ratio, the more efficiently the model operates.

This metric is particularly meaningful for tasks such as document editing, code refactoring, and data analysis. Highly efficient models tend to extract only the necessary parts of the user-provided input and respond concisely, which reduces unnecessary token consumption and keeps AI usage cost-effective. However, a low efficiency ratio does not necessarily mean better performance: some complex tasks legitimately require more output tokens, and when detailed explanations or extensive information are needed, a higher ratio may actually be preferable. This metric should therefore be interpreted in light of the nature and purpose of the task.
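As a concrete illustration, the short Python sketch below computes the efficiency ratio for a few entries taken from the table that follows and ranks them from lowest to highest. The `efficiency_ratio` helper is purely illustrative (not part of any official API), and the token counts are the rounded figures reported in the table.

```python
# Minimal sketch: compute and rank efficiency ratios (output tokens / input tokens).
# Token counts are the rounded values from the table below; lower ratio = more efficient.

models = [
    # (model name, input tokens, output tokens)
    ("meta-llama/llama-guard-4-12b", 820_240_000, 2_250_000),
    ("meta-llama/llama-3.1-405b", 422_040_000, 3_100_000),
    ("google/gemini-2.5-pro-exp-03-25", 39_200_000_000, 355_490_000),
]

def efficiency_ratio(input_tokens: int, output_tokens: int) -> float:
    """Efficiency ratio = output tokens / input tokens (lower is more efficient)."""
    return output_tokens / input_tokens

# Sort ascending by ratio, i.e. most efficient first.
ranked = sorted(models, key=lambda m: efficiency_ratio(m[1], m[2]))
for rank, (name, inp, out) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: {efficiency_ratio(inp, out):.4f}")
```

Because the inputs are rounded, the printed ratios may differ slightly in the last decimal place from the figures in the table.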

| Rank | Model Name | Input Tokens | Output Tokens | Efficiency Ratio (Output / Input) |
|------|------------|--------------|---------------|-----------------------------------|
| 1 | meta-llama/llama-guard-4-12b | 820.24M | 2.25M | 0.0027 |
| 2 | meta-llama/llama-3.1-405b | 422.04M | 3.10M | 0.0074 |
| 3 | google/gemini-2.5-pro-exp-03-25 | 39.2B | 355.49M | 0.0090 |
| 4 | qwen/qwen-vl-plus | 927.33M | 13.95M | 0.0150 |
| 5 | qwen/qwen-vl-max-2025-01-25 | 329.78M | 5.07M | 0.0154 |
| 6 | neversleep/llama-3-lumimaid-8b | 2.9B | 49.79M | 0.0174 |
| 7 | openai/codex-mini | 1.8B | 34.41M | 0.0175 |
| 8 | nothingiisreal/mn-celeste-12b | 881.41M | 17.83M | 0.0202 |
| 9 | openai/gpt-4o-mini | 1,677.5B | 34.2B | 0.0204 |
| 10 | mistralai/devstral-small-2505 | 5.6B | 115.06M | 0.0205 |