LongCat Flash Chat

Developer: meituan
Context Length: 131,072 tokens
Prompt Price: $0.15 / 1M tokens
Output Price: $0.75 / 1M tokens
Feature Support: 0/16

Model Overview

LongCat-Flash-Chat is a large-scale Mixture-of-Experts (MoE) model with 560B total parameters, of which 18.6B–31.3B (≈27B on average) are dynamically activated per token. It introduces a shortcut-connected MoE design to reduce communication overhead and achieve high throughput, while maintaining training stability through strategies such as hyperparameter transfer, deterministic computation, and multi-stage optimization. This release, LongCat-Flash-Chat, is a non-thinking foundation model optimized for conversational and agentic tasks. It supports context windows of up to 128K tokens and shows competitive performance across reasoning, coding, instruction following, and domain-specific benchmarks, with particular strengths in tool use and complex multi-step interactions.
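The variable activation count comes from routing: each token is dispatched to only a few experts, and one common way to make the activated parameter count vary per token is to let some routing slots resolve to identity ("zero-computation") experts that do no work. The toy PyTorch sketch below illustrates that routing idea only; the layer sizes, expert counts, and routing scheme are illustrative assumptions, not the actual LongCat-Flash architecture.

```python
# Illustrative sketch only: a toy top-k MoE layer that includes identity
# ("zero-computation") experts, so the number of parameters activated per
# token varies depending on where the router sends it. All sizes and
# routing details here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, n_zero=2, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.n_real = n_experts
        self.n_zero = n_zero  # identity experts: no parameters, no FLOPs
        self.router = nn.Linear(d_model, n_experts + n_zero)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (tokens, d_model)
        scores = F.softmax(self.router(x), dim=-1)
        weights, idx = scores.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(self.n_real + self.n_zero):
                mask = idx[:, slot] == e
                if not mask.any():
                    continue
                # Tokens routed to an identity expert just pass through,
                # so they activate fewer parameters in this layer.
                y = x[mask] if e >= self.n_real else self.experts[e](x[mask])
                out[mask] += weights[mask, slot].unsqueeze(-1) * y
        return out


layer = ToyMoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```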

Basic Information

Developer
meituan
Model Series
Other
Release Date
2025-09-09
Context Length
131,072 tokens
Max Completion Tokens
131,072 tokens
Variant
standard

Pricing Information

Prompt Tokens
$0.15 / 1M tokens
Completion Tokens
$0.75 / 1M tokens
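At these rates, a request's cost is simple arithmetic over its token counts. The short Python sketch below uses the listed prices; the token counts in the example are made up for illustration.

```python
# Cost estimate at the listed rates: $0.15 per 1M prompt tokens and
# $0.75 per 1M completion tokens.
PROMPT_PRICE_PER_M = 0.15
COMPLETION_PRICE_PER_M = 0.75


def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    return (prompt_tokens / 1_000_000) * PROMPT_PRICE_PER_M + \
           (completion_tokens / 1_000_000) * COMPLETION_PRICE_PER_M


# Example: a 50,000-token prompt with a 2,000-token reply
print(f"${estimate_cost(50_000, 2_000):.4f}")  # $0.0090
```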

Supported Features

None (0/16). The following features are unsupported:

Image Input
Top K
Seed
Frequency Penalty
Presence Penalty
Repetition Penalty
Response Format
Min P
Logit Bias
Tool Usage
Logprobs
Top Logprobs
Structured Outputs
Reasoning
Web Search Options
Top A

Actual Usage Statistics

No recent usage data available.

Similar Price Range Models

Model (developer): context length, prompt / output price per 1M tokens

GLM 4.5 Air (z-ai): 131,072 tokens, $0.14 / $0.86
GPT-4o-mini (openai): 128,000 tokens, $0.15 / $0.60
GPT-4o-mini (2024-07-18) (openai): 128,000 tokens, $0.15 / $0.60
Llama 4 Maverick (meta-llama): 1,048,576 tokens, $0.15 / $0.60
GPT-4o-mini Search Preview (openai): 128,000 tokens, $0.15 / $0.60