
Nandi Mini 600M Early Checkpoint

Nandi Mini 600M Early Checkpoint by FrontiersMind, a text-generation model. Understand and compare features, benchmarks, and capabilities.

Comparison

| Feature | Nandi Mini 600M Early Checkpoint | Interfaze |
| --- | --- | --- |
| Input Modalities | text | image, text, audio, video, document |
| Native OCR | No | Yes |
| Long Document Processing | No | Yes |
| Language Support | 11 (partial) | 162+ |
| Native Speech-to-Text | No | Yes |
| Native Object Detection | No | Yes |
| Guardrail Controls | No | Yes |
| Context Input Size | unknown | 1M |
| Tool Calling | No | Yes, with built-in browser, code execution, and web search |

Scaling

| Feature | Nandi Mini 600M Early Checkpoint | Interfaze |
| --- | --- | --- |
| Scaling | Self-hosted / provider-hosted with quantization | Unlimited |

View model card on Hugging Face

Introduction

Nandi-Mini-600M-Early-Checkpoint is an early-stage checkpoint (after 250 billion tokens) from the upcoming Nandi-Mini-600M model family: a compact multilingual language model focused on strong efficiency, deployment flexibility, and Indic language support. This is not the final model.

The model is being trained completely from scratch and is designed to deliver strong performance at low compute and memory budgets. This checkpoint is shared to provide an early look into the model’s scaling behavior and training progress.

This release is an early checkpoint and not the final converged model. Performance is expected to improve further with continued training and scaling.

📢 We will soon share a technical blog. Stay tuned!


Architectural Highlights

Nandi-Mini-600M introduces several efficiency-focused architectural optimizations designed for compact yet capable language models.

Shared KV (Shared Key-Value Vectors)

Shared KV is one of the core architectural ideas explored in Nandi-Mini. Instead of computing separate Key and Value projections, both reuse a shared latent representation, while a lightweight Key normalization step is applied specifically for attention computation.

This design reduces KV-cache memory usage by ~50% during inference with only a small increase in compute overhead, since RoPE and Key normalization are applied dynamically during attention computation.

Nandi supports two KV cache modes:

"kv_cache_mode": "shared"

Uses Shared KV, reducing KV-cache memory by ~50% with slightly higher compute overhead.

"kv_cache_mode": "vanilla"

Uses standard separate Key-Value caching for maximum inference compatibility and lower compute overhead.

KV-Cache Memory Comparison

  • Vanilla KV → Standard KV-cache memory usage
  • Shared KV → ~50% lower KV-cache footprint
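To make the ~50% figure concrete, here is a back-of-envelope sketch of KV-cache size. The layer count, head counts, and head dimension below are illustrative assumptions, not the published Nandi-Mini configuration:

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_el=2, shared_kv=False):
    """Approximate KV-cache size for one sequence.

    Vanilla caching stores a Key and a Value tensor per layer (2 tensors);
    Shared KV stores a single shared latent per layer (1 tensor), halving memory.
    """
    tensors_per_layer = 1 if shared_kv else 2
    return layers * tensors_per_layer * kv_heads * head_dim * seq_len * bytes_per_el

# Illustrative config (NOT the official Nandi-Mini numbers): 24 layers,
# 4 KV heads (GQA), head_dim 64, bf16 (2 bytes), 2,048-token context.
vanilla = kv_cache_bytes(24, 4, 64, 2048, shared_kv=False)
shared = kv_cache_bytes(24, 4, 64, 2048, shared_kv=True)
print(f"vanilla: {vanilla / 2**20:.0f} MiB, shared: {shared / 2**20:.0f} MiB")
# → vanilla: 48 MiB, shared: 24 MiB (exactly the ~50% reduction)
```

The saving is per sequence, so it compounds with batch size — which is why it matters most for the memory-constrained and long-context deployments listed below.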

Shared KV is part of our broader focus on deployable foundation models optimized for:

  • On-premise AI systems
  • Memory-constrained deployments
  • Edge devices
  • Long-context inference workloads

This remains an active research area within the Nandi model family, and we plan to share deeper technical details in upcoming engineering blogs.


Model Details

  • Type: Causal Language Model
  • Training Stage: Early Pretraining Checkpoint (250 billion tokens)
  • Parameters: ~600M
  • Architecture: Transformer decoder
  • Positional Encoding: RoPE
  • Normalization: RMSNorm + QK Norm
  • Activation: SwiGLU
  • Attention: GQA + Shared KV
  • Embeddings: Tied embeddings with factorized design
  • Context length: 2,048 tokens (planned to be extended to 32,000 tokens)
  • Vocabulary Size: 131,072

This is an early checkpoint, not the final model; only about 20% of training is complete, so these results are not final.

General Benchmarks

| Model | Trained Tokens (T) | HellaSwag | WinoGrande | OBQA | PIQA | GPQA | ARC-e | ARC-c | MMLU | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MobiLlama-0.5B-Base | 1.3 | 39.65 | 53.67 | 30.60 | 70.35 | 24.33 | 52.82 | 23.63 | 24.18 | 39.90 |
| Qwen-2-0.5B-Base | 12 | 49.01 | 57.69 | 33.20 | 68.98 | 27.23 | 54.79 | 25.42 | 44.06 | 45.05 |
| Qwen2.5-0.5B-Base | 18 | 52.16 | 56.82 | 35.40 | 70.29 | 24.10 | 64.64 | 29.86 | 47.41 | 47.59 |
| Qwen3-0.6B-Base | 36 | 53.77 | 59.19 | 34.40 | 70.29 | 30.80 | 65.44 | 33.78 | 50.34 | 49.75 |
| Qwen3.5-0.8B-Base | 36 | 54.87 | 60.54 | 35.80 | 70.02 | 31.25 | 70.50 | 38.23 | 52.73 | 51.74 |
| SmolLM-360M-Base | 0.6 | 53.33 | 57.22 | 37.60 | 70.56 | 21.20 | 70.24 | 33.27 | 24.92 | 46.04 |
| SmolLM2-360M-Base | 4 | 56.30 | 59.19 | 37.60 | 71.81 | 25.22 | 67.88 | 36.68 | 25.55 | 47.53 |
| Nandi-Mini-600M-Early-Checkpoint-Base | 0.2 | 44.86 | 54.77 | 34.80 | 68.60 | 26.33 | 64.73 | 29.70 | 29.01 | 44.10 |

Tokenization Fertility Score Across Languages

| Language | SmolLM3-3B | Qwen3-0.6B-Base | Sarvam-1 | Nandi-Mini-600M |
| --- | --- | --- | --- | --- |
| English | 1.17 | 1.16 | 1.32 | 1.18 |
| Bengali | 8.66 | 7.51 | 1.55 | 1.44 |
| Gujarati | 10.47 | 9.37 | 1.55 | 1.53 |
| Hindi | 2.71 | 5.14 | 1.25 | 1.32 |
| Kannada | 16.43 | 12.96 | 2.10 | 1.90 |
| Malayalam | 17.77 | 14.56 | 2.49 | 2.05 |
| Marathi | 3.73 | 6.70 | 1.55 | 1.55 |
| Oriya | 19.07 | 15.75 | 2.18 | 2.68 |
| Punjabi | 9.23 | 8.66 | 1.47 | 1.42 |
| Tamil | 13.56 | 10.93 | 2.06 | 2.05 |
| Telugu | 15.40 | 13.38 | 2.09 | 1.77 |
| Assamese | 9.26 | 8.13 | 4.31 | 1.51 |
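Fertility here is the average number of tokens produced per word; lower is better, especially for Indic scripts where weak tokenizers explode words into many pieces. A minimal sketch of how such a score can be computed (whitespace word-splitting is a simplification for morphologically rich languages, and the chunking tokenizer below is a stand-in for a real one):

```python
def fertility(tokenize, texts):
    """Average tokens per whitespace-separated word across a corpus."""
    total_tokens = sum(len(tokenize(t)) for t in texts)
    total_words = sum(len(t.split()) for t in texts)
    return total_tokens / total_words

# Toy tokenizer splitting words into 4-character chunks, for illustration only;
# in practice pass e.g. tokenizer.tokenize from a Hugging Face tokenizer.
toy_tokenize = lambda s: [w[i:i + 4] for w in s.split() for i in range(0, len(w), 4)]
print(fertility(toy_tokenize, ["The night was quiet"]))  # → 1.5 (6 tokens / 4 words)
```

A fertility near 1.0 means roughly one token per word, which directly translates into shorter sequences and cheaper inference for those languages.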

🌍 Supported Languages

The model is trained on English and a diverse set of Indic languages, including:

Hindi, Bengali, Tamil, Telugu, Marathi, Gujarati, Kannada, Malayalam, Punjabi, Odia

Quickstart

!pip install transformers==5.4.0

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "FrontiersMind/Nandi-Mini-600M-Early-Checkpoint"

tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True
)

device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    dtype=torch.bfloat16
).to(device).eval()


# Optional: use "shared" to save ~50% KV-cache memory at slightly higher compute
# model.config.kv_cache_mode = "shared"
model.config.kv_cache_mode = "vanilla"

prompt = """The night was quiet and the streets were empty"""

model_inputs = tokenizer(
    [prompt],
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    **model_inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.3,
    top_k=20,
    top_p=0.95,
    repetition_penalty=1.1,
    pad_token_id=tokenizer.eos_token_id,
    use_cache=True,  # enable the KV cache
)

response = tokenizer.decode(
    outputs[0],
    skip_special_tokens=True
)

print(response)

Want more deterministic results?
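Pass `do_sample=False` to `model.generate` to switch to greedy decoding, which makes `temperature`, `top_k`, and `top_p` irrelevant; alternatively, keep sampling but fix a seed with `transformers.set_seed`. The toy sketch below (plain NumPy, no model download needed) illustrates why greedy decoding is reproducible while sampling is not:

```python
import numpy as np

def next_token(logits, temperature=None, rng=None):
    """Pick the next token id: greedy argmax when temperature is None, else sample."""
    if temperature is None:
        return int(np.argmax(logits))                 # deterministic
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()                              # softmax over the vocabulary
    return int(rng.choice(len(logits), p=probs))      # stochastic

logits = np.array([1.0, 3.5, 0.2, 2.9])  # toy logits over a 4-token vocabulary
greedy = [next_token(logits) for _ in range(3)]
print(greedy)  # → [1, 1, 1]: greedy always picks the same, highest-logit token
```

With `do_sample=False`, `generate` performs this argmax at every step, so repeated runs on the same prompt produce identical text.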