# LFM2-8B-A1B

LFM2-8B-A1B is Liquid AI's Mixture-of-Experts model, combining 8B total parameters with only 1.5B active parameters per forward pass. This delivers the quality of larger models with the speed and efficiency of smaller ones, making it ideal for on-device deployment.
## Specifications
| Property | Value |
|---|---|
| Parameters | 8B (1.5B active) |
| Context Length | 32K tokens |
| Architecture | LFM2 (MoE) |
- **MoE Efficiency**: 8B quality at a 1.5B-parameter inference cost
- **On-Device**: Runs on phones and laptops
- **Tool Calling**: Native function calling support (see the sketch below)
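The sketch below shows what native tool calling can look like through the Transformers chat template. The `get_weather` function is a hypothetical example tool, and the Hub repo id `LiquidAI/LFM2-8B-A1B` is an assumption; the `tools=` argument of `apply_chat_template` is standard Transformers API that serializes a typed, documented Python function into the prompt.

```python
# Hedged sketch: passing a tool definition through the chat template.
# `get_weather` is a hypothetical example tool; the repo id is assumed.
from transformers import AutoTokenizer

def get_weather(city: str) -> str:
    """Return the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny, 22 °C"  # stub for illustration only

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2-8B-A1B")
messages = [{"role": "user", "content": "What's the weather in Tokyo?"}]

# The template serializes the tool's signature and docstring so the
# model can emit a structured function call in its response.
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)
```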
## Quick Start

- Transformers (minimal sketch after this list)
- llama.cpp
- vLLM
- SGLang
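A minimal Transformers generation sketch follows. The Hub repo id `LiquidAI/LFM2-8B-A1B`, the bfloat16 dtype, and `device_map="auto"` (which requires the `accelerate` package) are assumptions; adjust them to your checkpoint and hardware.

```python
# Minimal generation sketch with Transformers; repo id and dtype are assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-8B-A1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",  # half-precision weights
    device_map="auto",       # requires `accelerate`
)

# Build a chat prompt and generate a short completion.
messages = [{"role": "user", "content": "Explain Mixture-of-Experts in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```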