OpenHands LM 32B V0.1

Context Length: 131,072 tokens, as provided by all-hands

Context Tokens
131,072
Prompt Price
$0.00
Output Price
$0.00
Feature Support
0/16

Model Overview

OpenHands LM v0.1 is a 32B open-source coding model fine-tuned from Qwen2.5-Coder-32B-Instruct using reinforcement learning techniques outlined in SWE-Gym. It is optimized for autonomous software development agents and achieves strong performance on SWE-Bench Verified, with a 37.2% resolve rate. The model supports a 128K-token context window, making it well-suited for long-horizon code reasoning and large-codebase tasks. OpenHands LM is designed for local deployment and runs on consumer-grade GPUs such as a single NVIDIA RTX 3090. It enables fully offline agent workflows without dependency on proprietary APIs. This release is intended as a research preview, and future updates aim to improve generalizability, reduce repetition, and offer smaller variants.
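
Because the model is intended for local, offline serving, a common pattern is to host it behind an OpenAI-compatible endpoint (for example with vLLM) and point an agent or client at that endpoint. The sketch below is illustrative only; the Hugging Face model identifier, local port, and serving command are assumptions, not details confirmed on this page.

```python
# Minimal sketch: query a locally served OpenHands LM through an OpenAI-compatible API.
# Assumes the model is already being served locally, e.g. (illustrative command):
#   vllm serve all-hands/openhands-lm-32b-v0.1 --port 8000
# The model id and port are assumptions for the example.
from openai import OpenAI

# Local endpoint; no real API key is needed for a self-hosted server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="all-hands/openhands-lm-32b-v0.1",  # assumed repo id
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a linked list."}
    ],
    temperature=0.2,
    max_tokens=512,
)

print(response.choices[0].message.content)
```

The same endpoint can be configured as the LLM backend for an agent framework such as OpenHands, keeping the entire workflow on local hardware.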

Basic Information

Developer
all-hands
Model Series
Other
Release Date
2025-04-02
Context Length
131,072 tokens
Variant
standard

Pricing Information

Prompt Tokens
$0.00 / 1M tokens
Completion Tokens
$0.00 / 1M tokens

Supported Features

Unsupported (16)

Image Input
Top K
Seed
Frequency Penalty
Presence Penalty
Repetition Penalty
Response Format
Min P
Logit Bias
Tool Usage
Logprobs
Top Logprobs
Structured Outputs
Reasoning
Web Search Options
Top A

Actual Usage Statistics

No recent usage data available.

Similar Price Range Models

Jamba 1.5 Large
ai21
256,000 tokens
$0.00 / $0.00
R1 Distill Qwen 7B
deepseek
131,072 tokens
$0.00 / $0.00
Deepseek R1 0528 Qwen3 8B (free)
deepseek
131,072 tokens
$0.00 / $0.00
Gemma 1 2B
google
8,192 tokens
$0.00 / $0.00
R1 0528 (free)
deepseek
163,840 tokens
$0.00 / $0.00