# Liquid Docs

> Documentation for Liquid AI Foundation Models (LFM)

## Docs

- [Baseten](https://liquidai-link-snapshot-contract.mintlify.app/deployment/gpu-inference/baseten.md): Baseten is an AI infrastructure platform for deploying and serving ML models with optimized inference, autoscaling, and multi-cloud support.
- [Fal](https://liquidai-link-snapshot-contract.mintlify.app/deployment/gpu-inference/fal.md): Fal is a serverless generative media platform offering lightning-fast inference for image, video, and audio generation models.
- [Modal](https://liquidai-link-snapshot-contract.mintlify.app/deployment/gpu-inference/modal.md): Modal is a serverless cloud platform for running AI/ML workloads with instant autoscaling on GPUs and CPUs.
- [SGLang](https://liquidai-link-snapshot-contract.mintlify.app/deployment/gpu-inference/sglang.md): SGLang is a fast serving framework for large language models. It features RadixAttention for efficient prefix caching, optimized CUDA kernels, and continuous batching for high-throughput, low-latency inference.
- [Transformers](https://liquidai-link-snapshot-contract.mintlify.app/deployment/gpu-inference/transformers.md): Transformers is a library for inference and training of pretrained models.
- [vLLM](https://liquidai-link-snapshot-contract.mintlify.app/deployment/gpu-inference/vllm.md): vLLM is a high-throughput and memory-efficient inference engine for LLMs. It supports efficient serving with PagedAttention, continuous batching, and optimized CUDA kernels.
- [Changelog](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/leap-sdk-changelog.md): Release notes for the LEAP SDK, including the 0.9.x → 0.10.x Kotlin Multiplatform transition.
- [llama.cpp](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/llama-cpp.md): llama.cpp is a C++ library for efficient LLM inference with minimal dependencies. It's designed for CPU-first inference with cross-platform support.
- [LM Studio](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/lm-studio.md): LM Studio is a desktop application for running LLMs locally with a graphical interface.
- [MLX](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/mlx.md): MLX is Apple's machine learning framework optimized for Apple Silicon. It provides efficient inference on Mac devices with M-series chips (M1, M2, M3, M4) using Metal acceleration for GPU computing.
- [Ollama](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/ollama.md): Ollama is a command-line tool for running LLMs locally with a simple interface. It provides easy model management and serving with an OpenAI-compatible API.
- [ONNX](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/onnx.md): ONNX provides a platform-agnostic inference specification that allows running the model on device-specific runtimes that include CPU, GPU, NPU, and WebGPU.
- [Advanced Features](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/sdk/advanced-features.md): GenerationOptions, JSONSchemaGenerator, function-calling type references — same surface everywhere.
- [AI Agent Usage Guide](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/sdk/ai-agent-usage-guide.md): End-to-end recipes for building AI agents with the LEAP SDK — same patterns across iOS, macOS, Android, JVM, and native.
- [Cloud AI Comparison](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/sdk/cloud-ai-comparison.md): Mapping LEAP SDK concepts to cloud chat-completion APIs like OpenAI.
- [Constrained Generation](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/sdk/constrained-generation.md): Generate structured JSON output with compile-time validation — same approach on every platform.
- [Conversation & Generation](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/sdk/conversation-generation.md): Reference for ModelRunner, Conversation, MessageResponse, and GenerationOptions — same API on every platform.
- [Desktop & Native Platforms](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/sdk/desktop-platforms.md): Run the LEAP SDK on JVM desktop, native Linux, native Windows, and macOS — same API as Android and iOS.
- [Function Calling](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/sdk/function-calling.md): Tool use with LeapFunction — same API on every platform, with Hermes and Pythonic parsers.
- [Messages & Content](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/sdk/messages-content.md): ChatMessage, ChatMessageContent, audio format requirements — same shape on every platform.
- [Model Loading](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/sdk/model-loading.md): Reference for ModelDownloader, LeapDownloader, loadModel, loadSimpleModel, and KV cache reuse.
- [OpenAI-Compatible Client](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/sdk/openai-client.md): Lightweight client for OpenAI-compatible chat completions APIs — ideal for hybrid on-device + cloud routing.
- [Quick Start](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/sdk/quick-start.md): Install the LEAP SDK on iOS, macOS, Android, JVM, Linux, or Windows — same API everywhere.
- [Utilities](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/sdk/utilities.md): Error handling, serialization, Android-specific downloader internals, and a putting-it-together example.
- [Voice Assistant Widget](https://liquidai-link-snapshot-contract.mintlify.app/deployment/on-device/sdk/voice-assistant.md): Drop-in Compose Multiplatform voice UI — runs on iOS, macOS, Android, and JVM Desktop.
- [Authentication](https://liquidai-link-snapshot-contract.mintlify.app/deployment/tools/model-bundling/authentication.md): Authentication commands for the LEAP Bundle CLI
- [Bundle Creation](https://liquidai-link-snapshot-contract.mintlify.app/deployment/tools/model-bundling/bundle-creation.md): Commands for creating and validating bundle requests in the LEAP Bundle CLI
- [Bundle Management](https://liquidai-link-snapshot-contract.mintlify.app/deployment/tools/model-bundling/bundle-management.md): Commands for listing and canceling bundle requests in the LEAP Bundle CLI
- [Changelog](https://liquidai-link-snapshot-contract.mintlify.app/deployment/tools/model-bundling/changelog.md)
- [Configuration](https://liquidai-link-snapshot-contract.mintlify.app/deployment/tools/model-bundling/configuration.md): Configuration commands and file format for the LEAP Bundle CLI
- [Data Privacy](https://liquidai-link-snapshot-contract.mintlify.app/deployment/tools/model-bundling/data-privacy.md): This page outlines how the LEAP Model Bundling Service handles your data, including what information we collect and delete.
- [Download](https://liquidai-link-snapshot-contract.mintlify.app/deployment/tools/model-bundling/download.md): Download commands for bundle requests and LEAP models in the LEAP Bundle CLI
- [Quick Start](https://liquidai-link-snapshot-contract.mintlify.app/deployment/tools/model-bundling/quick-start.md): The Bundling Service helps users create and manage model bundles for the Liquid Edge AI Platform (LEAP). Currently, users interact with it through a command-line interface (CLI).
- [Reference](https://liquidai-link-snapshot-contract.mintlify.app/deployment/tools/model-bundling/reference.md): Reference information for the LEAP Bundle CLI including limitations, error handling, and exit codes
- [Build AI Agents with Koog Framework on Android](https://liquidai-link-snapshot-contract.mintlify.app/examples/android/leap-koog-agent.md)
- [Generate Structured Recipes with Constrained Output](https://liquidai-link-snapshot-contract.mintlify.app/examples/android/recipe-generator-constrained-output.md)
- [Product Slogan Generator with LeapSDK](https://liquidai-link-snapshot-contract.mintlify.app/examples/android/slogan-generator.md)
- [Image Understanding with Vision Language Models](https://liquidai-link-snapshot-contract.mintlify.app/examples/android/vision-language-model-example.md)
- [Web Content Summarizer for Android](https://liquidai-link-snapshot-contract.mintlify.app/examples/android/web-content-summarizer.md)
- [Connect AI Tools](https://liquidai-link-snapshot-contract.mintlify.app/examples/connect-ai-tools.md): Connect your AI coding tools to Liquid Docs via MCP for live, queryable access to documentation
- [Fine tuning LFM2-VL to identify car makers from images](https://liquidai-link-snapshot-contract.mintlify.app/examples/customize-models/car-maker-identification.md)
- [Fine-tuning LFM for a local Home Assistant](https://liquidai-link-snapshot-contract.mintlify.app/examples/customize-models/home-assistant.md)
- [Fine-Tune a Vision-Language Model on Satellite Imagery](https://liquidai-link-snapshot-contract.mintlify.app/examples/customize-models/satellite-vlm.md)
- [Build a Wildfire Prevention System with a Compact VLM](https://liquidai-link-snapshot-contract.mintlify.app/examples/customize-models/wildfire-prevention.md)
- [Examples Library](https://liquidai-link-snapshot-contract.mintlify.app/examples/index.md)
- [Audio car cockpit demo](https://liquidai-link-snapshot-contract.mintlify.app/examples/laptop-examples/audio-car-cockpit.md)
- [Audio transcription in real-time](https://liquidai-link-snapshot-contract.mintlify.app/examples/laptop-examples/audio-to-text-in-real-time.md)
- [Browser control with GRPO reinforcement learning](https://liquidai-link-snapshot-contract.mintlify.app/examples/laptop-examples/browser-control.md)
- [Flight search assistant with tool calling](https://liquidai-link-snapshot-contract.mintlify.app/examples/laptop-examples/flight-search-assistant.md)
- [Invoice extractor tool](https://liquidai-link-snapshot-contract.mintlify.app/examples/laptop-examples/invoice-extractor-tool-with-liquid-nanos.md)
- [Bidirectional English to Korean translation CLI](https://liquidai-link-snapshot-contract.mintlify.app/examples/laptop-examples/lfm2-english-to-korean.md)
- [Meeting summarization CLI](https://liquidai-link-snapshot-contract.mintlify.app/examples/laptop-examples/meeting-summarization.md)
- [LFM2.5-Audio browser demo with WebGPU](https://liquidai-link-snapshot-contract.mintlify.app/examples/web/audio-webgpu-demo.md)
- [Hand & Voice Racer](https://liquidai-link-snapshot-contract.mintlify.app/examples/web/hand-voice-racer.md)
- [Real-time video captioning with LFM2.5-VL-1.6B and WebGPU](https://liquidai-link-snapshot-contract.mintlify.app/examples/web/vl-webgpu-demo.md)
- [Datasets](https://liquidai-link-snapshot-contract.mintlify.app/lfm/fine-tuning/datasets.md): Dataset formats for SFT, DPO, and VLM fine-tuning
- [TRL](https://liquidai-link-snapshot-contract.mintlify.app/lfm/fine-tuning/trl.md): TRL (Transformer Reinforcement Learning) is a library for fine-tuning and aligning language models using methods like Supervised Fine-Tuning (SFT), Reward Modeling, and Direct Preference Optimization (DPO).
- [Unsloth](https://liquidai-link-snapshot-contract.mintlify.app/lfm/fine-tuning/unsloth.md): Unsloth makes fine-tuning LLMs 2-5x faster with 70% less memory through optimized kernels and efficient memory management.
- [Connect AI Tools](https://liquidai-link-snapshot-contract.mintlify.app/lfm/help/connect-ai-tools.md): Connect your AI coding tools to Liquid Docs via MCP for live, queryable access to documentation
- [Contributing to Docs](https://liquidai-link-snapshot-contract.mintlify.app/lfm/help/contributing.md): Guidelines for contributing to Liquid AI documentation.
- [FAQs](https://liquidai-link-snapshot-contract.mintlify.app/lfm/help/faqs.md): Frequently asked questions about LFM models and deployment.
- [Model License](https://liquidai-link-snapshot-contract.mintlify.app/lfm/help/model-license.md): Understand how you can use, modify, and distribute Liquid Foundation Models under the LFM Open License v1.0.
- [Troubleshooting](https://liquidai-link-snapshot-contract.mintlify.app/lfm/help/troubleshooting.md): Common issues and solutions when working with LFM models.
- [Chat Template](https://liquidai-link-snapshot-contract.mintlify.app/lfm/key-concepts/chat-template.md): The chat template defines how conversations are structured using special tokens and roles. LFM2 uses a ChatML-like chat template to structure conversations.
- [Prompting Guide](https://liquidai-link-snapshot-contract.mintlify.app/lfm/key-concepts/text-generation-and-prompting.md): This guide covers how to effectively use system prompts, user prompts, and assistant prompts with LFM2 models, along with an overview of sampling parameters and special prompting recipes for specific models.
- [Tool Use](https://liquidai-link-snapshot-contract.mintlify.app/lfm/key-concepts/tool-use.md): LFM2.5 and LFM2 models support tool use (function calling), enabling models to interact with APIs, databases, and external services to provide accurate, up-to-date information.
- [Audio Models](https://liquidai-link-snapshot-contract.mintlify.app/lfm/models/audio-models.md): Liquid's LFM audio models are among the smallest fully interleaved audio/text-in, audio/text-out models with a complete reasoning backbone — eliminating the need to combine separate TTS/ASR encoders with a standalone language model.
- [Liquid Foundation Models](https://liquidai-link-snapshot-contract.mintlify.app/lfm/models/complete-library.md): Liquid Foundation Models (LFMs) are a new class of multimodal architectures built for fast inference and on-device deployment. Browse all available models and formats here.
- [Liquid Nanos](https://liquidai-link-snapshot-contract.mintlify.app/lfm/models/liquid-nanos.md): A library of low-latency, task-specific models fine-tuned on Liquid's multimodal LFM base models. Nanos deliver high accuracy on narrow tasks while remaining small enough to deploy on-device or serve economically at high volume.
- [Text Models](https://liquidai-link-snapshot-contract.mintlify.app/lfm/models/text-models.md): Liquid's LFM text models range from 350M to 8B parameters, delivering ultra-low-latency generation while matching the performance of much larger models. They come in both dense and MoE variants to deploy flexibly across different devices.
- [Vision Models](https://liquidai-link-snapshot-contract.mintlify.app/lfm/models/vision-models.md): Liquid's LFM vision models pair our lightweight LFM text backbones with dynamic SigLIP2 image encoders, delivering fast multimodal inference on-device while matching larger VLMs in quality.