LFM2.5-1.2B-Thinking is optimized for reasoning tasks, delivering strong performance on math, logic, and multi-step problem solving. It is built on the LFM2.5 architecture with specialized training for chain-of-thought reasoning.

Documentation Index
Fetch the complete documentation index at: https://liquidai-link-snapshot-contract.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
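As a minimal sketch, the index can be fetched with Python's standard library (no third-party dependencies assumed):

```python
import urllib.request

# URL of the documentation index, as given above.
INDEX_URL = "https://liquidai-link-snapshot-contract.mintlify.app/llms.txt"

def fetch_index(url: str = INDEX_URL) -> str:
    """Download the llms.txt index and return its contents as text."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

# Example usage (requires network access):
# print(fetch_index())
```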
Specifications
| Property | Value |
|---|---|
| Parameters | 1.2B |
| Context Length | 32K tokens |
| Architecture | LFM2.5 (Dense) |
- Math & Logic: strong arithmetic and logical reasoning
- Chain-of-Thought: step-by-step problem decomposition
- Fine-tunable: TRL compatible (SFT, DPO, GRPO)
Quick Start
- Transformers
- llama.cpp
- vLLM
- SGLang
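For the Transformers path, a minimal sketch of loading the model and generating a response might look like the following. The Hub ID `LiquidAI/LFM2.5-1.2B-Thinking` and the `generate` helper are assumptions for illustration, not confirmed by this page:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face Hub ID; check the model page for the exact name.
MODEL_ID = "LiquidAI/LFM2.5-1.2B-Thinking"

def build_messages(question: str) -> list[dict]:
    """Chat-format messages for a single user question."""
    return [{"role": "user", "content": question}]

def generate(question: str, max_new_tokens: int = 512) -> str:
    """Load the model, apply its chat template, and decode the reply.

    Downloads the weights on first use; runs on CPU or GPU.
    """
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer.apply_chat_template(
        build_messages(question),
        add_generation_prompt=True,
        return_tensors="pt",
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

# Example usage (requires the model weights):
# print(generate("What is 17 * 24?"))
```

The same chat-message format carries over to the llama.cpp, vLLM, and SGLang servers, which expose OpenAI-compatible chat endpoints.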