
Gemma 4 31B It DFlash

Gemma 4 31B It DFlash by z-lab is a text-generation model with multimodal capabilities. The tables below compare its modalities, features, and limits against Interfaze.

Comparison

| Feature | Gemma 4 31B It DFlash | Interfaze |
| --- | --- | --- |
| Input Modalities | text, image, video | image, text, audio, video, document |
| Native OCR | No | Yes |
| Long Document Processing | No | Yes |
| Language Support | 140 (partial) | 162+ |
| Native Speech-to-Text | No | Yes |
| Native Object Detection | No | Yes |
| Guardrail Controls | Yes | Yes |
| Context Input Size | 256K | 1M |
| Tool Calling | Yes | Tool calling supported, plus built-in browser, code execution, and web search |

Scaling

| Feature | Gemma 4 31B It DFlash | Interfaze |
| --- | --- | --- |
| Scaling | Self-hosted / provider-hosted with quantization | Unlimited |

View model card on Hugging Face

Paper | GitHub | Blog

DFlash is a speculative decoding method that uses a lightweight block diffusion model to draft multiple tokens in parallel. This is the drafter model, which must be paired with google/gemma-4-31B-it.
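To make the speedup mechanism concrete, here is a minimal sketch of the draft-and-verify loop that speculative decoding performs, using toy stand-in models. The functions `draft_block` and `target_next_token` are illustrative assumptions only; the real DFlash drafter is a block diffusion model that runs inside vLLM/SGLang, not this Python loop.

```python
import random

VOCAB = 100  # toy vocabulary

def draft_block(ctx, k):
    # Drafter proposes k tokens for the block in parallel (toy: random).
    return [random.randrange(VOCAB) for _ in range(k)]

def target_next_token(ctx):
    # Target model's greedy next token (toy: random).
    return random.randrange(VOCAB)

def speculative_decode(prompt, max_new=64, block=16):
    out = list(prompt)
    while len(out) - len(prompt) < max_new:
        draft = draft_block(out, block)
        # In the real system a single target forward pass scores the whole
        # drafted block; this toy applies the same accept/reject rule token
        # by token, committing drafts left to right until the first mismatch.
        for tok in draft:
            t = target_next_token(out)
            out.append(t)
            if t != tok:  # mismatch: discard the rest of the draft block
                break
    return out

print(len(speculative_decode([1, 2, 3])) - 3, "tokens generated")
```

Because the target model verifies a whole drafted block at once, every accepted draft token saves one sequential target step; the acceptance lengths reported below show how many tokens survive verification on average.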

Quick Start

Installation

vLLM: until Gemma 4 DFlash support is merged, install vLLM from PR #41703:

uv pip install -U --torch-backend=auto \
  "vllm @ git+https://github.com/vllm-project/vllm.git@refs/pull/41703/head"

SGLang:

uv pip install "git+https://github.com/sgl-project/sglang.git@refs/pull/23000/head#subdirectory=python"

Launch Server

vLLM:

vllm serve google/gemma-4-31B-it \
  --speculative-config '{"method": "dflash", "model": "z-lab/gemma-4-31B-it-DFlash", "num_speculative_tokens": 15, "attention_backend": "flash_attn"}' \
  --attention-backend triton_attn \
  --max-num-batched-tokens 32768 \
  --trust-remote-code

SGLang:

python -m sglang.launch_server \
  --model-path google/gemma-4-31B-it \
  --speculative-algorithm DFLASH \
  --speculative-draft-model-path z-lab/gemma-4-31B-it-DFlash \
  --speculative-num-draft-tokens 16 \
  --tp-size 1 \
  --attention-backend triton \
  --speculative-draft-attention-backend fa4 \
  --trust-remote-code
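Both servers take a while to load weights and compile kernels. Below is a small readiness check, assuming the default /health route that vLLM and SGLang expose (adjust the base URL: port 8000 for vLLM, 30000 for SGLang):

```python
import time
import requests

def wait_until_ready(base="http://localhost:8000", timeout=900):
    # Poll the server's /health endpoint until it answers 200 OK.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if requests.get(f"{base}/health", timeout=5).status_code == 200:
                return
        except requests.exceptions.ConnectionError:
            pass  # server still starting up
        time.sleep(5)
    raise TimeoutError(f"{base} not healthy after {timeout}s")

wait_until_ready()                              # vLLM
# wait_until_ready("http://localhost:30000")    # SGLang
```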

Usage

For vLLM, use port 8000. For SGLang, use port 30000.

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="google/gemma-4-31B-it",
    messages=[{"role": "user", "content": "Write a quicksort in Python."}],
    max_tokens=4096,
    temperature=0.0,
    extra_body={"chat_template_kwargs": {"enable_thinking": True}},
)
print(response.choices[0].message.content)
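The same OpenAI-compatible endpoint also supports streaming; the sketch below repeats the request above with `stream=True` and prints tokens as they arrive:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="google/gemma-4-31B-it",
    messages=[{"role": "user", "content": "Write a quicksort in Python."}],
    max_tokens=4096,
    temperature=0.0,
    extra_body={"chat_template_kwargs": {"enable_thinking": True}},
    stream=True,  # receive chunks as tokens are accepted
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```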

Benchmark Results

Setup: Single NVIDIA B300 GPU per server/run, vLLM, thinking enabled, max output length 4096, greedy decoding.

Throughput and Speedup

DFlash achieves up to 5.8x speedup at concurrency 1.

Generated tokens/sec (speedup vs. autoregressive baseline)

Block Size = 16

| Task | Concurrency | AR | DFlash |
| --- | --- | --- | --- |
| Math500 | 1 | 77 | 447 (5.8x) |
| Math500 | 8 | 511 | 2650 (5.2x) |
| Math500 | 32 | 1308 | 4962 (3.8x) |
| GSM8K | 1 | 78 | 408 (5.3x) |
| GSM8K | 8 | 520 | 2321 (4.5x) |
| GSM8K | 32 | 1382 | 4447 (3.2x) |
| HumanEval | 1 | 76 | 420 (5.6x) |
| HumanEval | 8 | 494 | 2389 (4.8x) |
| HumanEval | 32 | 1145 | 4139 (3.6x) |
| MBPP | 1 | 79 | 343 (4.4x) |
| MBPP | 8 | 535 | 2036 (3.8x) |
| MBPP | 32 | 1389 | 3636 (2.6x) |
| MT-Bench | 1 | 79 | 236 (3.0x) |
| MT-Bench | 8 | 503 | 1334 (2.7x) |
| MT-Bench | 32 | 1177 | 2257 (1.9x) |
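To read the table: speedup is DFlash tokens/sec divided by the AR baseline. This snippet recomputes it for the concurrency-1 rows (the last digit can differ from the table because the published tokens/sec are rounded):

```python
# speedup = DFlash tok/s / AR tok/s, concurrency-1 rows copied from above
rows = {
    "Math500":   (77, 447),
    "GSM8K":     (78, 408),
    "HumanEval": (76, 420),
    "MBPP":      (79, 343),
    "MT-Bench":  (79, 236),
}
for task, (ar, dflash) in rows.items():
    print(f"{task}: {dflash / ar:.1f}x")
```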

Acceptance Length

| Task | Concurrency 1 | Concurrency 8 | Concurrency 32 |
| --- | --- | --- | --- |
| Math500 | 8.59 | 8.59 | 8.62 |
| GSM8K | 7.53 | 7.50 | 7.52 |
| HumanEval | 8.00 | 7.89 | 7.96 |
| MBPP | 6.13 | 6.13 | 6.14 |
| MT-Bench | 4.23 | 4.19 | 4.19 |
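Acceptance length is the average number of tokens committed per target-model verification step, so it caps the achievable speedup; the gap between the two reflects drafter and verification overhead. A quick check against the concurrency-1 numbers (standard speculative-decoding accounting, not a DFlash-specific formula):

```python
# speedup <= acceptance length; the ratio shows how much of the ideal is realized
data = {  # task: (acceptance length @ c=1, observed speedup @ c=1)
    "Math500":   (8.59, 5.8),
    "GSM8K":     (7.53, 5.3),
    "HumanEval": (8.00, 5.6),
    "MBPP":      (6.13, 4.4),
    "MT-Bench":  (4.23, 3.0),
}
for task, (tau, speedup) in data.items():
    print(f"{task}: {speedup}x of {tau} ideal -> {speedup / tau:.0%} realized")
```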

Acknowledgements

Special thanks to David Wang for his outstanding engineering support on this project. We are also grateful to Modal, InnoMatrix, and Yotta Labs for providing the compute resources used to train this draft model.

Citation

If you find DFlash useful, please cite our work. To share feedback on DFlash or request new model support, please fill out this form: DFlash Feedback.

@article{chen2026dflash,
  title   = {{DFlash: Block Diffusion for Flash Speculative Decoding}},
  author  = {Chen, Jian and Liang, Yesheng and Liu, Zhijian},
  journal = {arXiv preprint arXiv:2602.06036},
  year    = {2026}
}
