Hermes 2 Mixtral 8x7B DPO
Detailed information and pricing for this AI model.
Developer: nousresearch
Context Length: 32,768 tokens
Prompt Price: $0.60 / 1M tokens
Output Price: $0.60 / 1M tokens
Feature Support: 7/16
Model Overview
Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model, trained over the [Mixtral 8x7B MoE LLM](/models/mistralai/mixtral-8x7b). The model was trained on over 1,000,000 entries of primarily [GPT-4](/models/openai/gpt-4)-generated data, as well as other high-quality data from open datasets across the AI landscape, achieving state-of-the-art performance on a variety of tasks. #moe
Basic Information
Developer
nousresearch
Model Series
Mistral
Release Date
2024-01-16
Context Length
32,768 tokens
Max Completion Tokens
2,048 tokens
Variant
standard
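The two token limits above interact: a sketch of the prompt budget left once a completion budget is reserved, assuming (as is typical for this kind of model, though provider behavior can vary) that prompt and completion tokens share the 32,768-token context window.

```python
CONTEXT_LENGTH = 32_768  # total context window (prompt + completion), per the table above
MAX_COMPLETION = 2_048   # listed max completion tokens

def max_prompt_tokens(completion_budget: int = MAX_COMPLETION) -> int:
    """Prompt room remaining after reserving the completion budget.

    Assumes the context window is shared between prompt and completion.
    """
    if not 0 <= completion_budget <= MAX_COMPLETION:
        raise ValueError("completion budget exceeds the model's listed cap")
    return CONTEXT_LENGTH - completion_budget

print(max_prompt_tokens())  # 30720
```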
Pricing Information
Prompt Tokens
$0.60 / 1M tokens
Completion Tokens
$0.60 / 1M tokens
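At these rates, the cost of a single request is straightforward to estimate; a minimal sketch using only the prices listed above:

```python
PROMPT_PRICE_PER_M = 0.60      # USD per 1M prompt tokens (from the table above)
COMPLETION_PRICE_PER_M = 0.60  # USD per 1M completion tokens

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    return (prompt_tokens * PROMPT_PRICE_PER_M
            + completion_tokens * COMPLETION_PRICE_PER_M) / 1_000_000

# A full-context request: 30,720 prompt + 2,048 completion tokens
print(round(request_cost(30_720, 2_048), 6))  # 0.019661
```

Since prompt and output are priced identically here, cost depends only on total tokens, not on their split.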
Supported Features
Supported (7)
Top K
Frequency Penalty
Presence Penalty
Repetition Penalty
Response Format
Min P
Logit Bias
Unsupported (9)
Image Input
Seed
Tool Usage
Logprobs
Top Logprobs
Structured Outputs
Reasoning
Web Search Options
Top A
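An illustrative request body restricted to the sampler knobs listed as supported above. The field names follow common OpenAI-compatible conventions and the model slug is an assumption; both may differ depending on the provider.

```python
# Hypothetical request payload: only parameters from the "Supported" list appear.
request = {
    "model": "nousresearch/nous-hermes-2-mixtral-8x7b-dpo",  # assumed slug
    "messages": [{"role": "user", "content": "Hello"}],
    "top_k": 40,                 # Top K
    "frequency_penalty": 0.1,    # Frequency Penalty
    "presence_penalty": 0.1,     # Presence Penalty
    "repetition_penalty": 1.05,  # Repetition Penalty
    "min_p": 0.05,               # Min P
    "logit_bias": {},            # Logit Bias
    "response_format": {"type": "text"},  # Response Format
    # Per the "Unsupported" list, omit: tools, seed, logprobs,
    # structured outputs, image input, web search options, top_a.
}
print(sorted(request.keys()))
```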
Usage Statistics
Ranking
#281 of 345 models
Total Tokens (Last 30 Days)
41.44M
Daily Average Usage
1.38M tokens
Weekly Usage Change
21%