LFM2.5 VL 450M

LFM2.5 VL 450M by Liquid AI is an image-text-to-text model with object detection and multimodal capabilities. Understand and compare its object detection, multimodal features, benchmarks, and capabilities.

Comparison

| Feature | LFM2.5 VL 450M | Interfaze |
| --- | --- | --- |
| Input Modalities | text, image | image, text, audio, video, document |
| Native OCR | No | Yes |
| Long Document Processing | No | Yes |
| Language Support | 9 (partial) | 162+ |
| Native Speech-to-Text | No | Yes |
| Native Object Detection | Yes | Yes |
| Guardrail Controls | No | Yes |
| Context Input Size | 32.8K | 1M |
| Tool Calling | Yes | Tool calling supported, plus built-in browser, code execution, and web search |

Object Detection Capabilities

| Feature | LFM2.5 VL 450M | Interfaze |
| --- | --- | --- |
| Object Bounding Boxes | Partial | Yes |
| Object Segmentation Masks | No | Yes |
| Confidence Scores | No | Yes |
| Dense Image Processing | No | Yes |
| Low Quality Images | No | Yes |
| Industry-Specific | No | Yes |
| GUI Element Detection | No | Yes |

Scaling

| Feature | LFM2.5 VL 450M | Interfaze |
| --- | --- | --- |
| Scaling | Self-hosted / provider-hosted with quantization | Unlimited |

View model card on Hugging Face

LFM2.5-VL-450M is Liquid AI's refreshed version of its first vision-language model, LFM2-VL-450M, built on the updated LFM2.5-350M backbone and tuned for stronger real-world performance. Find out more about the LFM2.5 family of models in our blog post.

  • Enhanced instruction following on vision and language tasks.
  • Improved multilingual vision understanding in Arabic, Chinese, French, German, Japanese, Korean, Portuguese and Spanish.
  • Bounding box prediction and object detection for grounded visual understanding.
  • Function calling support for text-only input.

🎥⚡️ You can try LFM2.5-VL-450M running locally in your browser with our real-time video stream captioning WebGPU demo 🎥⚡️

Alternatively, try the API model on the Playground.

📄 Model details

LFM2.5-VL-450M is a general-purpose vision-language model with the following features:

  • LM Backbone: LFM2.5-350M
  • Vision encoder: SigLIP2 NaFlex shape‑optimized 86M
  • Context length: 32,768 tokens
  • Vocabulary size: 65,536
  • Languages: English, Arabic, Chinese, French, German, Japanese, Korean, Portuguese, and Spanish
  • Native resolution processing: handles images up to 512×512 pixels without upscaling and preserves non-standard aspect ratios without distortion
  • Tiling strategy: splits large images into non-overlapping 512×512 patches and includes thumbnail encoding for global context
  • Inference-time flexibility: user-tunable maximum image tokens and tile count for speed/quality tradeoff without retraining
  • Generation parameters:
    • text: temperature=0.1, min_p=0.15, repetition_penalty=1.05
    • vision: min_image_tokens=32, max_image_tokens=256, do_image_splitting=True
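As a quick illustration of the tiling strategy above, the number of 512×512 patches follows from ceiling division over each image dimension. This is a sketch only; the processor's actual splitting heuristics, thumbnail handling, and token budgeting are implementation details, and `num_tiles` is a hypothetical helper, not a library function:

```python
import math

def num_tiles(width: int, height: int, tile: int = 512) -> int:
    """Count the non-overlapping tile x tile patches covering an image.

    The thumbnail encoding for global context comes on top of this count.
    """
    return math.ceil(width / tile) * math.ceil(height / tile)

# A 1024x768 image is covered by 2 x 2 = 4 patches (plus the thumbnail).
print(num_tiles(1024, 768))  # 4
```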
| Model | Description |
| --- | --- |
| LFM2.5-VL-450M | Original model checkpoint in native format. Best for fine-tuning or inference with Transformers and vLLM. |
| LFM2.5-VL-450M-GGUF | Quantized format for llama.cpp and compatible tools. Optimized for CPU inference and local deployment with reduced memory usage. |
| LFM2.5-VL-450M-ONNX | ONNX Runtime format for cross-platform deployment. Enables hardware-accelerated inference across diverse environments (cloud, edge, mobile). |
| LFM2.5-VL-450M-MLX-8bit | MLX format for Apple Silicon. Optimized for fast on-device inference on Mac with mlx-vlm. Also available in 4bit, 5bit, 6bit, and bf16. |

We recommend using it for general vision-language workloads, captioning and object detection. It’s not well-suited for knowledge-intensive tasks or fine-grained OCR.

Chat Template

LFM2.5-VL uses a ChatML-like format. See the Chat Template documentation for details.

```
<|startoftext|><|im_start|>system
You are a helpful multimodal assistant by Liquid AI.<|im_end|>
<|im_start|>user
<image>Describe this image.<|im_end|>
<|im_start|>assistant
This image shows a Caenorhabditis elegans (C. elegans) nematode.<|im_end|>
```

You can use processor.apply_chat_template() to format your messages automatically.
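For illustration, the ChatML-like layout above can be approximated in plain Python. This is a sketch only; the exact whitespace and special-token placement are defined by the model's chat template, so always prefer `processor.apply_chat_template()` in practice, and treat `format_chatml` as a hypothetical helper:

```python
def format_chatml(messages, add_generation_prompt=True):
    """Approximate the ChatML-style prompt layout shown above."""
    out = "<|startoftext|>"
    for m in messages:
        # Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open the assistant turn so the model continues from here
        out += "<|im_start|>assistant\n"
    return out

print(format_chatml([
    {"role": "system", "content": "You are a helpful multimodal assistant by Liquid AI."},
    {"role": "user", "content": "<image>Describe this image."},
]))
```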

🏃 Inference

You can run LFM2.5-VL-450M with Hugging Face transformers v5.1 or newer:

```shell
pip install transformers pillow
```

```python
from transformers import AutoProcessor, AutoModelForImageTextToText
from transformers.image_utils import load_image

# Load the model and processor
model_id = "LiquidAI/LFM2.5-VL-450M"
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    device_map="auto",
    dtype="bfloat16",
)
processor = AutoProcessor.from_pretrained(model_id)

# Load the image and build the conversation
url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
image = load_image(url)
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "What is in this image?"},
        ],
    },
]

# Tokenize, generate, and decode the answer
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
    tokenize=True,
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```


Visual grounding

LFM2.5-VL-450M supports bounding box prediction:

```python
url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
image = load_image(url)
query = "statue"
# Literal braces in the JSON schema must be doubled inside the f-string
prompt = f'Detect all instances of: {query}. Response must be a JSON array: [{{"label": ..., "bbox": [x1, y1, x2, y2]}}, ...]. Coordinates are normalized to [0,1].'

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": prompt},
        ],
    },
]

inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
    tokenize=True,
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```
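Since the prompt asks for coordinates normalized to [0,1], the decoded response can be mapped back to pixel space before drawing or cropping. A minimal sketch, assuming the model returned well-formed JSON; `to_pixel_boxes` is a hypothetical helper, not part of the library:

```python
import json

def to_pixel_boxes(response: str, width: int, height: int):
    """Convert normalized [0,1] bboxes from the model's JSON output to pixels."""
    boxes = []
    for det in json.loads(response):
        x1, y1, x2, y2 = det["bbox"]
        boxes.append({
            "label": det["label"],
            # Scale each corner by the image dimensions and round to ints
            "bbox": [round(x1 * width), round(y1 * height),
                     round(x2 * width), round(y2 * height)],
        })
    return boxes

sample = '[{"label": "statue", "bbox": [0.30, 0.10, 0.55, 0.90]}]'
print(to_pixel_boxes(sample, 1600, 1200))
# [{'label': 'statue', 'bbox': [480, 120, 880, 1080]}]
```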


Tool Use

LFM2.5 supports function calling for text-only input by applying the chat template with the tokenizer. See the Tool Use documentation for the full guide.

```python
# Define the available tools as JSON-schema function descriptions
tools = [{
    "name": "get_weather",
    "description": "Get current weather for a location",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"]
    }
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# Apply the chat template with the tool definitions (text-only input)
inputs = processor.tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
)
input_ids = inputs["input_ids"].to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
# Keep special tokens so the tool-call markers stay visible in the response
response = processor.tokenizer.decode(outputs[0, input_ids.shape[1]:], skip_special_tokens=False)
```
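Once the model emits a tool call, it still has to be parsed and dispatched to a real function. A minimal sketch under the assumption that the call has already been extracted from the generated text into a name/arguments dict (the exact serialization of tool calls is defined by the chat template, and `dispatch`/`get_weather` here are hypothetical stand-ins):

```python
def get_weather(location: str) -> str:
    """Stub standing in for a real weather API call."""
    return f"Sunny in {location}"

# Registry mapping tool names from the schema to local callables
TOOLS = {"get_weather": get_weather}

def dispatch(call: dict) -> str:
    """Execute a parsed tool call of the form {'name': ..., 'arguments': {...}}."""
    return TOOLS[call["name"]](**call["arguments"])

print(dispatch({"name": "get_weather", "arguments": {"location": "Paris"}}))
# Sunny in Paris
```

The result would then be appended to the conversation as a tool message and fed back to the model for the final answer.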


| Name | Description | Docs | Notebook |
| --- | --- | --- | --- |
| Transformers | Simple inference with direct access to model internals. | Link | |
| vLLM | High-throughput production deployments with GPU. | Link | |
| SGLang | High-throughput production deployments with GPU. | Link | |
| llama.cpp | Cross-platform inference with CPU offloading. | Link | |

🔧 Fine-tuning

We recommend fine-tuning LFM2.5-VL-450M on your use cases to maximize performance.

| Notebook | Description | Link |
| --- | --- | --- |
| SFT (Unsloth) | Supervised Fine-Tuning with LoRA using Unsloth. | |
| SFT (TRL) | Supervised Fine-Tuning with LoRA using TRL. | |

📊 Performance

LFM2.5-VL-450M improves over LFM2-VL-450M across both vision and language benchmarks, while also adding two new capabilities: bounding box prediction on RefCOCO-M and function calling support measured by BFCLv4.

Vision benchmarks

| Model | MMStar | RealWorldQA | MMBench (dev en) | MMMU (val) | POPE | MMVet | BLINK | InfoVQA (val) | OCRBench | MM-IFEval | MMMB | CountBench | RefCOCO-M |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LFM2.5-VL-450M | 43.00 | 58.43 | 60.91 | 32.67 | 86.93 | 41.10 | 43.92 | 43.02 | 684 | 45.00 | 68.09 | 73.31 | 81.28 |
| LFM2-VL-450M | 40.87 | 52.03 | 56.27 | 34.44 | 83.79 | 33.85 | 42.61 | 44.56 | 657 | 33.09 | 54.29 | 47.64 | - |
| SmolVLM2-500M | 38.20 | 49.90 | 52.32 | 34.10 | 82.67 | 29.90 | 40.70 | 24.64 | 609 | 11.27 | 46.79 | 61.81 | - |

All vision benchmark scores are obtained using VLMEvalKit. Multilingual scores are based on the average of benchmarks translated by GPT-4.1-mini from English to Arabic, Chinese, French, German, Japanese, Korean, Portuguese, and Spanish.

Language benchmarks

| Model | GPQA | MMLU Pro | IFEval | Multi-IF | BFCLv4 |
| --- | --- | --- | --- | --- | --- |
| LFM2.5-VL-450M | 25.66 | 19.32 | 61.16 | 34.63 | 21.08 |
| LFM2-VL-450M | 23.13 | 17.22 | 51.75 | 26.21 | - |
| SmolVLM2-500M | 23.84 | 13.57 | 30.14 | 6.82 | - |

📬 Contact

Citation

```bibtex
@article{liquidai2025lfm2,
  title={LFM2 Technical Report},
  author={Liquid AI},
  journal={arXiv preprint arXiv:2511.23404},
  year={2025}
}
```
