

LFM2-8B-A1B is Liquid AI's Mixture-of-Experts model, combining 8B total parameters with only 1.5B active parameters per forward pass. This delivers the quality of larger models with the speed and efficiency of smaller ones, making it ideal for on-device deployment.
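The "8B total, 1.5B active" split comes from how Mixture-of-Experts layers work: a router selects a small top-k subset of experts for each token, so only those experts' parameters are exercised on a given forward pass. A minimal routing sketch (illustrative only; the expert count and top-k below are hypothetical, not the published LFM2-8B-A1B configuration):

```python
# Illustrative MoE routing sketch, NOT the actual LFM2-8B-A1B implementation.
# A router scores all experts per token, but only the top-k run.
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(router_logits, k):
    """Return indices and renormalized weights of the top-k experts."""
    probs = softmax(router_logits)
    topk = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in topk)
    return topk, [probs[i] / total for i in topk]

# Hypothetical sizes chosen for illustration.
NUM_EXPERTS, TOP_K = 32, 4
logits = [random.gauss(0.0, 1.0) for _ in range(NUM_EXPERTS)]
experts, weights = route(logits, TOP_K)
# Only TOP_K of NUM_EXPERTS experts execute for this token, which is why
# the per-token compute tracks the active parameter count, not the total.
```

Because compute scales with the active experts rather than the full parameter set, inference cost stays close to that of a dense ~1.5B model while quality benefits from the larger 8B parameter pool.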

Specifications

| Property | Value |
| --- | --- |
| Parameters | 8B total (1.5B active) |
| Context Length | 32K tokens |
| Architecture | LFM2 (MoE) |

- MoE Efficiency: 8B quality at 1.5B inference cost
- On-Device: runs on phones and laptops
- Tool Calling: native function calling support

Quick Start