MiMo V2.5 Pro

MiMo V2.5 Pro, by XiaomiMiMo, is a text-generation model. Understand and compare its features, benchmarks, and capabilities.

Comparison

| Feature | MiMo V2.5 Pro | Interfaze |
| --- | --- | --- |
| Input Modalities | text | image, text, audio, video, document |
| Native OCR | No | Yes |
| Long Document Processing | Yes | Yes |
| Language Support | 3 partial | 162+ |
| Native Speech-to-Text | No | Yes |
| Native Object Detection | No | Yes |
| Guardrail Controls | No | Yes |
| Context Input Size | 1M | 1M |
| Tool Calling | Yes | Tool calling supported + built-in browser, code execution and web search |

Scaling

| Feature | MiMo V2.5 Pro | Interfaze |
| --- | --- | --- |
| Scaling | Self-hosted / provider-hosted with quantization | Unlimited |

View model card on Hugging Face

MiMo-V2.5-Pro is an open-source Mixture-of-Experts (MoE) language model with 1.02T total parameters and 42B active parameters. It uses the hybrid attention architecture and 3-layer Multi-Token Prediction (MTP) introduced in MiMo-V2-Flash, and supports a context length of up to 1M tokens.

1. Introduction

MiMo-V2.5-Pro is our most capable model to date, designed for the most demanding agentic, complex software engineering, and long-horizon tasks. It sustains complex trajectories spanning thousands of tool calls with strong instruction following and coherence over a 1M-token context window. Key features include:

  • Hybrid Attention Architecture: Interleaves Sliding Window Attention (SWA) and Global Attention (GA) at a 6:1 ratio with a 128-token sliding window (see the sketch after this list). This reduces KV-cache storage by nearly 7x while maintaining long-context performance via a learnable attention sink bias.
  • Multi-Token Prediction (MTP): Equipped with three lightweight MTP modules using dense FFNs. This triples output speed during inference and accelerates rollouts in RL training.
  • Efficient Pre-Training: Trained on 27T tokens using FP8 mixed precision and a native 32k sequence length. The context window supports up to 1M tokens.
  • Agentic Capabilities: Post-training combines SFT, large-scale agentic RL, and Multi-Teacher On-Policy Distillation (MOPD), achieving superior performance on demanding agentic, software-engineering, and long-horizon tasks.
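
To make the hybrid layer budget concrete, here is a small illustrative sketch (not the official implementation): it builds the 6:1 SWA:GA interleave over 70 layers, recovering the 60 SWA / 10 GA split listed in the Model Summary below, and estimates the KV-cache reduction at a 1M-token context.

def layer_pattern(num_layers=70, swa_per_ga=6):
    """Label each layer 'SWA' or 'GA': six sliding-window layers per global layer."""
    return ["GA" if i % (swa_per_ga + 1) == swa_per_ga else "SWA" for i in range(num_layers)]

layers = layer_pattern()
num_ga, num_swa = layers.count("GA"), layers.count("SWA")   # 10 and 60

# KV-cache tokens held per layer at a 1M-token context (head/dim factors omitted):
context, window = 1_000_000, 128
full_kv = len(layers) * context                          # every layer global
hybrid_kv = num_ga * context + num_swa * window          # hybrid SWA/GA layout
print(num_ga, num_swa, round(full_kv / hybrid_kv, 1))    # -> 10 60 7.0, the ~7x saving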

2. Model Downloads

| Model | Total Params | Active Params | Context Length | Precision | Download |
| --- | --- | --- | --- | --- | --- |
| MiMo-V2.5-Pro | 1.02T | 42B | 1M | FP8 (E4M3) Mixed | 🤗 HuggingFace |
| MiMo-V2.5-Pro-Base | 1.02T | 42B | 256K | FP8 (E4M3) Mixed | 🤗 HuggingFace |
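
As a quick way to fetch the weights, here is a minimal download sketch using the standard huggingface_hub API; the repo id matches the deployment command below, and the local path is just an example.

from huggingface_hub import snapshot_download

# Download the full MiMo-V2.5-Pro checkpoint to a local directory.
local_dir = snapshot_download(
    repo_id="XiaomiMiMo/MiMo-V2.5-Pro",
    local_dir="./MiMo-V2.5-Pro",
)
print(local_dir)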

3. Evaluation Results

Base Model Evaluation

| Category | Benchmark | Setting | MiMo-V2.5-Pro Base | MiMo-V2.5 Base | DeepSeek-V4-Pro Base | DeepSeek-V4-Flash Base | Kimi-K2 Base |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Params | #Activated / #Total | - | 42B / 1.02T | 15B / 310B | 49B / 1.6T | 13B / 284B | 32B / 1.04T |
| General | BBH | 3-shot | 88.4 | 87.2 | 87.5 | 86.9 | 88.7 |
| | MMLU | 5-shot | 89.4 | 86.3 | 90.1 | 88.7 | 87.8 |
| | MMLU-Redux | 5-shot | 92.8 | 89.8 | 90.8 | 89.4 | 90.2 |
| | MMLU-Pro | 5-shot | 68.5 | 65.8 | 73.5 | 68.3 | 69.2 |
| | DROP | 3-shot | 86.3 | 83.7 | 88.7 | 88.6 | 83.6 |
| | ARC-Challenge | 25-shot | 97.2 | 96.5 | - | - | 96.2 |
| | HellaSwag | 10-shot | 89.8 | 88.6 | 88.0 | 85.7 | 94.6 |
| | WinoGrande | 5-shot | 85.6 | 84.7 | 81.5 | 79.5 | 85.3 |
| | TriviaQA | 5-shot | 81.3 | 80.7 | 85.6 | 82.8 | 85.1 |
| | GPQA-Diamond | 5-shot | 66.7 | 58.1 | - | - | 48.1 |
| Math | GSM8K | 8-shot | 99.6 | 83.3 | 92.6 | 90.8 | 92.1 |
| | MATH | 4-shot | 86.2 | 67.7 | 64.5 | 57.4 | 70.2 |
| | AIME 24&25 | 2-shot | 37.3 | 36.9 | - | - | 31.6 |
| Code | HumanEval+ | 1-shot | 75.6 | 71.3 | - | - | 84.8 |
| | MBPP+ | 3-shot | 74.1 | 70.9 | - | - | 73.8 |
| | LiveCodeBench v6 | 1-shot | 39.6 | 35.5 | - | - | 26.3 |
| | SWE-Bench (AgentLess) | 3-shot | 35.7 | 30.8 | - | - | 28.2 |
| Chinese | C-Eval | 5-shot | 91.5 | 88.6 | 93.1 | 92.1 | 92.5 |
| | CMMLU | 5-shot | 90.2 | 88.2 | 90.8 | 90.4 | 90.9 |
| Multilingual | GlobalMMLU | 5-shot | 83.6 | 77.4 | - | - | 80.7 |

Long-context Evaluation

GraphWalks is a long-context benchmark from OpenAI that fills the prompt with a directed graph of hex-hash nodes and asks the model to run a breadth-first search (nodes exactly at depth N) or list a node's parents. We evaluate across the full 32k–1M input-token span and apply the same evaluation fixes described by Anthropic.
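
As a rough illustration of the two subtasks (not the benchmark's actual prompt format or harness), the following shows a breadth-first search that returns the nodes exactly N hops from a root, plus a parent lookup, on a tiny directed graph of hex-hash-style node ids.

from collections import defaultdict

# Tiny directed graph of hex-hash-style node ids (edges point parent -> child).
edges = [("a1f3", "b2c4"), ("a1f3", "c9d0"), ("b2c4", "d7e8"), ("c9d0", "d7e8")]

children, parents = defaultdict(list), defaultdict(list)
for src, dst in edges:
    children[src].append(dst)
    parents[dst].append(src)

def nodes_at_depth(root, n):
    """Breadth-first search: return the set of nodes exactly n hops from root."""
    frontier, seen = {root}, {root}
    for _ in range(n):
        frontier = {c for node in frontier for c in children[node] if c not in seen}
        seen |= frontier
    return frontier

print(nodes_at_depth("a1f3", 2))   # {'d7e8'}
print(parents["d7e8"])             # ['b2c4', 'c9d0']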

MiMo V2.5 Pro delivers a major leap in long-context reasoning. Past 128k, V2 Pro degrades rapidly and collapses to 0.00 at 1M on both subtasks, while V2.5 Pro still scores 0.56 BFS / 0.92 Parents at 512k and 0.37 / 0.62 at 1M.

4. Model Architecture & Training Process

MiMo-V2.5-Pro addresses the quadratic cost of full attention over long contexts by interleaving local Sliding Window Attention (SWA) and Global Attention (GA). Unlike traditional speculative decoding, the MTP modules are natively integrated for both training and inference.
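
As a rough sketch of how MTP-style drafting speeds up decoding, here is a simplified greedy draft-and-verify loop under assumed model interfaces; the propose and greedy_next methods are hypothetical, and the production EAGLE-style kernels used in the deployment section are considerably more involved.

def decode_step(main_model, mtp_heads, tokens):
    # The three MTP heads draft a few candidate next tokens from the current prefix.
    draft = mtp_heads.propose(tokens, num_tokens=3)        # hypothetical API
    # The main model scores prefix + draft in one forward pass and returns its own
    # greedy choice at each of the len(draft) + 1 positions following the prefix.
    predicted = main_model.greedy_next(tokens, draft)      # hypothetical API
    accepted = []
    for d, p in zip(draft, predicted):
        if d != p:                 # first mismatch: keep the main model's token, stop
            accepted.append(p)
            break
        accepted.append(d)         # draft token confirmed by the main model
    else:
        accepted.append(predicted[len(draft)])   # bonus token when every draft is accepted
    return tokens + accepted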

Model Summary

| Component | MiMo-V2.5-Pro | MiMo-V2.5 |
| --- | --- | --- |
| Total Parameters | 1.02T | 310B |
| Activated Parameters | 42B | 15B |
| Hidden Size | 6144 | 4096 |
| Num Layers | 70 (1 dense + 69 MoE) | 48 (1 dense + 47 MoE) |
| Full Attention Layers | 10 | 9 |
| SWA Layers | 60 | 39 |
| Num Attention Heads | 128 | 64 |
| Num KV Heads | 8 (GQA) | 8 (GA) / 4 (SWA) |
| Head Dim (QK / V) | 192 / 128 | 192 / 128 |
| Routed Experts | 384 | 256 |
| Experts per Token | 8 | 8 |
| MoE Intermediate Size | 2048 | 2048 |
| Dense Intermediate Size | 16384 (layer 0 only) | 16384 (layer 0 only) |
| SWA Window Size | 128 | 128 |
| Max Context Length | 1M | 1M |
| MTP Layers | 3 | 3 |

Training Process

For post-training, MiMo-V2.5-Pro adopts the three-stage paradigm introduced in MiMo-V2-Flash. The paradigm begins with Supervised Fine-Tuning (SFT) on curated data pairs to build strong, foundational instruction-following skills. Next, in the Domain-Specialized Training stage, diverse teacher models covering domains from math and safety to complex agentic tool use are individually optimized with domain-specific RL rewards. Finally, the process culminates in Multi-Teacher On-Policy Distillation (MOPD): through dynamic on-policy RL, a single student model learns from its own outputs while continuously receiving precise token-level guidance from the expert teachers, integrating their capabilities into one model.
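
A schematic of what that token-level guidance could look like, under the assumption that MOPD minimizes a per-token KL divergence between the relevant domain teacher and the student on the student's own rollouts; this is an illustrative reading of the description above, not the released training code.

import torch
import torch.nn.functional as F

def mopd_token_loss(student_logits, teacher_logits):
    """Per-token KL(teacher || student) averaged over a student-generated rollout.

    Both tensors have shape [batch, seq_len, vocab]; the teacher is whichever
    domain-specialized model owns the prompt.
    """
    student_logp = F.log_softmax(student_logits, dim=-1)
    teacher_logp = F.log_softmax(teacher_logits, dim=-1)
    kl = (teacher_logp.exp() * (teacher_logp - student_logp)).sum(dim=-1)  # [batch, seq_len]
    return kl.mean()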

5. Deployment

Since inference engines are continuously being updated and optimized, this guide only provides deployment examples for reference. For the latest best practices and optimal performance, we strongly recommend following the referenced cookbooks.

SGLang Deployment

This deployment path is officially supported by the SGLang community. Please refer to the SGLang MiMo-V2.5-Pro Cookbook for the latest deployment guide.

The following is an example of running the model with SGLang, referenced from sgl-project/sglang#23808:

export SGLANG_ENABLE_SPEC_V2=1
export SGLANG_DEEPEP_NUM_MAX_DISPATCH_TOKENS_PER_RANK=256
python3 -m sglang.launch_server \
              --model-path XiaomiMiMo/MiMo-V2.5-Pro \
              --trust-remote-code \
              --pp-size 1 \
              --dp-size 2 \
              --ep-size 16 \
              --tp-size 16 \
              --moe-dense-tp-size 1 \
              --enable-dp-attention \
              --moe-a2a-backend deepep \
              --dist-init-addr ${LWS_LEADER_IP}:20000 \
              --node-rank ${LWS_WORKER_INDEX} \
              --nnodes ${LWS_GROUP_SIZE} \
              --page-size 64 \
              --attention-backend fa3 \
              --quantization fp8 \
              --mem-fraction-static 0.7 \
              --max-running-requests 128 \
              --cuda-graph-max-bs 64 \
              --chunked-prefill-size 32768 \
              --context-length 1048576 \
              --tokenizer-worker-num 64 \
              --speculative-algorithm EAGLE \
              --speculative-num-steps 3 \
              --speculative-eagle-topk 1 \
              --speculative-num-draft-tokens 4 \
              --enable-multi-layer-eagle \
              --host 0.0.0.0 \
              --port 9001 \
              --reasoning-parser mimo \
              --tool-call-parser mimo \
              --watchdog-timeout 3600 \
              --model-loader-extra-config '{"enable_multithread_load": "true","num_threads": 64}'

vLLM Deployment

This deployment path is officially supported by the vLLM community. Please refer to the vLLM MiMo-V2.5-Pro Cookbook for the latest deployment guide.

For local deployment, we recommend setting the sampling parameters to temperature=1.0, top_p=0.95.
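
For reference, a minimal client call against a locally launched OpenAI-compatible endpoint (both SGLang and vLLM expose one); the port matches the SGLang command above, and the sampling parameters follow the recommendation for local deployment.

from openai import OpenAI

client = OpenAI(base_url="http://localhost:9001/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="XiaomiMiMo/MiMo-V2.5-Pro",
    messages=[{"role": "user", "content": "Summarize the MiMo-V2.5-Pro architecture in three bullets."}],
    temperature=1.0,
    top_p=0.95,
)
print(response.choices[0].message.content)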

Citation

@misc{mimo2026v25pro,
  title={MiMo-V2.5-Pro},
  author={{Xiaomi MiMo Team}},
  year={2026},
  howpublished={\url{https://huggingface.co/collections/XiaomiMiMo/mimo-v25}},
}

Contact

For questions or feedback, reach us at mimo@xiaomi.com or join our community.
