LFM2-VL-450M is Liquid AI's smallest vision-language model, designed for edge deployment under strict memory and compute constraints. It delivers fast multimodal inference on resource-limited devices.

Documentation Index
Fetch the complete documentation index at: https://liquidai-link-snapshot-contract.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
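The index can be pulled programmatically before exploring individual pages. Below is a minimal sketch using only the standard library; the `- [Title](url)` line format assumed by the parser follows the common llms.txt convention and may differ from the actual file, and `fetch_index` performs a live network call.

```python
import re
import urllib.request

INDEX_URL = "https://liquidai-link-snapshot-contract.mintlify.app/llms.txt"

def parse_links(text: str) -> list[tuple[str, str]]:
    """Extract (title, url) pairs from markdown-style link lines."""
    return re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", text)

def fetch_index(url: str = INDEX_URL) -> list[tuple[str, str]]:
    # Network call: downloads llms.txt and returns the discovered pages.
    with urllib.request.urlopen(url) as resp:
        return parse_links(resp.read().decode("utf-8"))
```

Calling `fetch_index()` then gives a list of `(title, url)` pairs to drive further page discovery.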
Specifications
| Property | Value |
|---|---|
| Parameters | 450M |
| Context Length | 32K tokens |
| Architecture | LFM2-VL (Dense) |
- Ultra-Light: minimal memory footprint
- Low Latency: fastest inference among Liquid AI's vision models
- Edge-Ready: runs on mobile and embedded devices
Quick Start
- Transformers
- vLLM
- SGLang
- llama.cpp
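For the Transformers path, usage would look roughly like the sketch below. The repo id `LiquidAI/LFM2-VL-450M` and the `AutoModelForImageTextToText` class are assumptions; check the model card for the exact identifiers. The heavy imports are kept inside `run_inference` so the message-building helper stays dependency-free.

```python
def build_conversation(prompt: str, image_url: str) -> list:
    """Build a multimodal chat message in the format expected by apply_chat_template."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": prompt},
            ],
        }
    ]

def run_inference(prompt: str, image_url: str, max_new_tokens: int = 64) -> str:
    # Lazy imports: requires `pip install transformers torch pillow`.
    from transformers import AutoModelForImageTextToText, AutoProcessor

    # Assumed repo id; verify against the model card.
    model_id = "LiquidAI/LFM2-VL-450M"
    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForImageTextToText.from_pretrained(model_id)

    inputs = processor.apply_chat_template(
        build_conversation(prompt, image_url),
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    )
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return processor.batch_decode(out, skip_special_tokens=True)[0]
```

A call such as `run_inference("Describe this image.", "https://example.com/cat.jpg")` would download the weights on first use; with only 450M parameters the model fits comfortably in CPU memory.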