EXAONE 4.5 33B
EXAONE 4.5 33B by LGAI-EXAONE, an image-text-to-text model with multimodal capabilities. Understand and compare its multimodal features, benchmarks, and capabilities.
Comparison
| Feature | EXAONE 4.5 33B | Interfaze |
|---|---|---|
| Input Modalities | image, text | image, text, audio, video, document |
| Native OCR | No | Yes |
| Long Document Processing | No | Yes |
| Language Support | 6 partial | 162+ |
| Native Speech-to-Text | No | Yes |
| Native Object Detection | No | Yes |
| Guardrail Controls | No | Yes |
| Context Input Size | 262.1K | 1M |
| Tool Calling | Yes | Tool calling supported + built in browser, code execution and web search |
Scaling
| Feature | EXAONE 4.5 33B | Interfaze |
|---|---|---|
| Scaling | Self-hosted/Provider-hosted with quantization | Unlimited |
View model card on Hugging Face
We introduce EXAONE 4.5, the first open-weight vision-language model developed by LG AI Research. By integrating a dedicated visual encoder into the existing EXAONE 4.0 framework, we extend the model's capabilities to multimodality. EXAONE 4.5 has 33 billion parameters in total, including 1.2 billion in the vision encoder. It achieves competitive performance on general benchmarks while outperforming SOTA models of similar size in document understanding and Korean contextual reasoning, inheriting the powerful language capabilities of our previous language models.
For more details, please refer to the technical report, blog and GitHub.
Model Configuration
- Model Type: Causal Language Model + Vision Encoder
- Number of Parameters (Language Model): 31.7B
- Number of Parameters (Vision Encoder): 1.29B
- Hidden Dimension: 5,120
- Intermediate size: 27,392
- Number of Layers: 64 Main layers + 1 MTP layer
- Hybrid Attention Pattern: 16 x (3 Sliding window attention + 1 Global attention)
- Reordered Norm: Apply normalization after Attention/MLP, and before residual connection
- Sliding Window Attention
- Number of Attention Heads: 40 Q-heads and 8 KV-heads
- Head Dimension: 128 for both Q/KV
- Sliding Window Size: 4096
- Global Attention
- Number of Attention Heads: 40 Q-heads and 8 KV-heads
- Head Dimension: 128 for both Q/KV
- No Rotary Positional Embedding Used (NoPE)
- Vision Encoder
- Grouped Query Attention (GQA)
- 2D RoPE for vision embeddings
- Vocab Size: 153,600
- Context Length: 262,144 tokens
- Knowledge Cutoff: Dec 2024 (2024/12)
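The hybrid attention pattern can be sketched as a simple layer-type schedule (an illustration of the layout only; the type names are ours):

```python
# Hybrid attention schedule for the 64 main layers: sixteen repetitions of
# (3 sliding-window layers + 1 global layer), per the pattern above.
layer_types = (["sliding_window"] * 3 + ["global"]) * 16

assert len(layer_types) == 64
assert layer_types.count("sliding_window") == 48   # each with a 4096-token window
assert layer_types.count("global") == 16           # global layers use NoPE
# Every 4th layer (0-indexed 3, 7, 11, ...) is a global-attention layer.
assert all(layer_types[i] == "global" for i in range(3, 64, 4))
```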
Evaluation Results
Vision-Language Tasks
Language-only Tasks
Quickstart
Serving EXAONE 4.5
For better inference speed and memory usage, we recommend serving the model with an optimized inference engine. EXAONE 4.5 is supported by several frameworks, including TensorRT-LLM, vLLM, SGLang, and llama.cpp, with support expanding in the future.
Practically, you can serve the EXAONE 4.5 model with a 256K context length on a single H200 GPU, or on 4x A100-40GB GPUs using tensor parallelism.
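As a rough sanity check on this claim, a back-of-envelope estimate from the configuration above (BF16 weights and KV cache at 2 bytes each; activations, the vision encoder's buffers, and framework overhead are ignored, so treat it as a lower bound):

```python
# Back-of-envelope serving-memory estimate at 256K context, BF16 precision.
GIB = 2**30

weights_gib = 33e9 * 2 / GIB  # 33B params * 2 bytes

# KV cache per layer per token: 2 (K and V) * 8 KV heads * 128 head dim * 2 bytes
kv_per_layer_per_token = 2 * 8 * 128 * 2  # 4 KiB

ctx, window = 262_144, 4_096
global_layers, swa_layers = 16, 48  # 16 x (3 sliding-window + 1 global)

# Global layers cache the full context; sliding-window layers only the window.
kv_gib = (global_layers * ctx + swa_layers * min(window, ctx)) * kv_per_layer_per_token / GIB

total = weights_gib + kv_gib
print(f"weights ~{weights_gib:.0f} GiB + KV ~{kv_gib:.0f} GiB = ~{total:.0f} GiB")
```

Roughly 78 GiB in total, consistent with fitting a single H200 (141 GB) or 4x A100-40GB (160 GB aggregate) under tensor parallelism.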
TensorRT-LLM
TensorRT-LLM provides zero-day support for EXAONE 4.5. Our fork of the Transformers library is required to use the EXAONE 4.5 model. You can install it by running the following command:
pip install git+https://github.com/nuxlear/transformers.git@add-exaone4_5
Please refer to the official installation guide, the EXAONE documentation, and the EXAONE 4.5 PR for details.
After installing TensorRT-LLM, you can launch the server with the following command. You can remove unnecessary arguments from the snippet.
trtllm-serve LGAI-EXAONE/EXAONE-4.5-33B \
    --tp_size 2 \
    --port 8000 \
    --reasoning_parser qwen3
An OpenAI-compatible API server will be available at http://localhost:8000/v1.
vLLM
Our forks of both Transformers and vLLM are required to use the EXAONE 4.5 model. You can install the requirements by running the following commands:
uv pip install git+https://github.com/lkm2835/vllm.git@add-exaone4_5
uv pip install git+https://github.com/nuxlear/transformers.git@add-exaone4_5
After installing vLLM, you can launch the server with the following command. You can remove unnecessary arguments from the snippet.
vllm serve LGAI-EXAONE/EXAONE-4.5-33B \
    --served-model-name EXAONE-4.5-33B \
    --port 8000 \
    --tensor-parallel-size 2 \
    --max-model-len 262144 \
    --reasoning-parser qwen3 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes \
    --limit-mm-per-prompt '{"image": 64}' \
    --speculative_config '{"method": "mtp", "num_speculative_tokens": 3}'
An OpenAI-compatible API server will be available at http://localhost:8000/v1.
SGLang
Our forks of both Transformers and SGLang are required to use the EXAONE 4.5 model. You can install the requirements by running the following commands:
uv pip install git+https://github.com/lkm2835/sglang.git@add-exaone4_5
uv pip install git+https://github.com/nuxlear/transformers.git@add-exaone4_5
After installing SGLang, you can launch the server with the following command. You can remove unnecessary arguments from the snippet.
python -m sglang.launch_server \
    --model-path LGAI-EXAONE/EXAONE-4.5-33B \
    --served-model-name EXAONE-4.5-33B \
    --port 8000 \
    --tp-size 2 \
    --mem-frac 0.81 \
    --reasoning-parser qwen3 \
    --tool-call-parser hermes \
    --speculative-algorithm EAGLE \
    --speculative-num-steps 3 \
    --speculative-eagle-topk 1 \
    --speculative-num-draft-tokens 4
An OpenAI-compatible API server will be available at http://localhost:8000/v1.
Using EXAONE 4.5
After launching an OpenAI-compatible server with EXAONE 4.5, you can use the model through the same API code even if the underlying serving framework changes. To run the following examples with the OpenAI Python SDK, install the openai library in your environment.
[!IMPORTANT] To achieve the expected performance, we recommend using the following configurations:
- Use `temperature=1.0, top_p=0.95, presence_penalty=1.5` for general purposes.
- Use `temperature=0.6, top_p=0.95, presence_penalty=1.5, top_k=20` for OCR/document-related tasks and Korean inputs.
- Use `temperature=1.0, top_p=0.95` for text-only inputs.
- Unlike EXAONE 4.0, EXAONE 4.5 uses `enable_thinking=True` by default, so set `enable_thinking=False` when you want non-reasoning mode.
- EXAONE 4.5 prefers the `\boxed{}` format for answering questions. We recommend using this format with the corresponding format instruction for better parsing accuracy.
You can easily try the model's chat completions using the OpenAI Python SDK. For a server running on your local machine, set base_url and api_key on the OpenAI client accordingly.
Image-Text QA
Reasoning mode
For tasks that require accurate results, you can run the EXAONE 4.5 model in reasoning mode as follows.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",
)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://github.com/Aim-Highest/EXAONE-4.5/blob/main/assets/exaone45_input2.png?raw=true",
                },
            },
            {
                "type": "text",
                "text": "How much larger is the model released in winter 2025 compared with the one released in summer 2024?",
            },
        ],
    }
]

response = client.chat.completions.create(
    model="EXAONE-4.5-33B",
    messages=messages,
    max_tokens=32768,
    temperature=1.0,
    top_p=0.95,
    presence_penalty=1.5,
    extra_body={
        "chat_template_kwargs": {
            "enable_thinking": True,  # default: True
        }
    },
)
print(response)
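When the server is launched with a reasoning parser (as in the serving commands above), the returned message may carry the chain of thought in a separate `reasoning_content` field. The field name is assumed from vLLM's reasoning-parser output, so verify it against your server version; a small helper keeps downstream code robust either way:

```python
def split_reasoning(message):
    """Return (reasoning, answer) from a chat completion message.

    Works with both SDK objects and plain dicts; falls back to treating the
    whole content as the answer if no `reasoning_content` field is present.
    """
    if isinstance(message, dict):
        reasoning = message.get("reasoning_content")
        content = message.get("content") or ""
    else:
        reasoning = getattr(message, "reasoning_content", None)
        content = message.content or ""
    return reasoning, content

# Example with a plain dict shaped like a server response:
reasoning, answer = split_reasoning(
    {"role": "assistant", "reasoning_content": "Compare sizes...", "content": "The model is 33B."}
)
```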
Non-reasoning mode
For tasks where latency matters more than accuracy, you can run the EXAONE 4.5 model in non-reasoning mode as follows.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",
)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://github.com/Aim-Highest/EXAONE-4.5/blob/main/assets/exaone45_input1.jpg?raw=true",
                },
            },
            {
                "type": "text",
                "text": "What dish is the person preparing, and how is it made?",
            },
        ],
    }
]

response = client.chat.completions.create(
    model="EXAONE-4.5-33B",
    messages=messages,
    max_tokens=32768,
    temperature=1.0,
    top_p=0.95,
    presence_penalty=1.5,
    extra_body={
        "chat_template_kwargs": {
            "enable_thinking": False,  # default: True
        }
    },
)
print(response)
Text-only QA
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",
)

messages = [
    {
        "role": "user",
        "content": "Explain how useful you are.",
    }
]

response = client.chat.completions.create(
    model="EXAONE-4.5-33B",
    messages=messages,
    max_tokens=32768,
    temperature=1.0,
    top_p=0.95,
    extra_body={
        "chat_template_kwargs": {
            "enable_thinking": True,  # default: True
        }
    },
)
print(response)
Agentic Use
The following example demonstrates the agentic capability of EXAONE 4.5 for image-text inputs. You can use your own agents, skills, or other harnesses with the EXAONE 4.5 model.
import asyncio

from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langchain_mcp_adapters.client import MultiServerMCPClient


def print_message(msg):
    parts = msg.content if isinstance(msg.content, list) else [{"type": "text", "text": msg.content or ""}]
    text_out, reasoning_out = [], []
    for p in parts:
        if isinstance(p, dict):
            if p.get("type") in ("text", "output_text") and p.get("text"):
                text_out.append(p["text"])
            elif p.get("type") in ("reasoning", "reasoning_text") and p.get("text"):
                reasoning_out.append(p["text"])
    if reasoning_out:
        print("\n[assistant_reasoning_content]")
        print("\n".join(reasoning_out))
    if text_out:
        print("\n[assistant_content]")
        print("\n".join(text_out))


async def main():
    model = ChatOpenAI(
        model="EXAONE-4.5-33B",
        base_url="http://localhost:8000/v1",
        api_key="EMPTY",
        temperature=1.0,
        model_kwargs={"top_p": 0.95},
    )
    client = MultiServerMCPClient({
        "filesystem": {
            "transport": "stdio",
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
        },
        "fetch": {
            "transport": "stdio",
            "command": "uvx",
            "args": ["mcp-server-fetch"],
        },
        "duckduckgo": {
            "transport": "stdio",
            "command": "uvx",
            "args": ["duckduckgo-mcp-server"],
        },
    })
    agent = create_agent(model, await client.get_tools())

    inputs = {
        "messages": [{
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "Look at the image and identify the landmark. "
                        "Use the DuckDuckGo MCP tool to verify its name, height, and location. "
                        "Then use the fetch tool to read a fuller article page about it. "
                        "Create /tmp/mcp-demo and write a short markdown file to "
                        "/tmp/mcp-demo/landmark.md with: name, location, height, and a one-sentence summary of the article. "
                        "Finally, return only the exact file content."
                    ),
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/a/a8/Tour_Eiffel_Wikimedia_Commons.jpg"
                    },
                },
            ],
        }]
    }

    async for step in agent.astream(inputs, stream_mode="values"):
        msg = step["messages"][-1]
        if getattr(msg, "type", "") == "ai":
            print_message(msg)
            for tc in getattr(msg, "tool_calls", []) or []:
                print(f"\n[tool call] {tc['name']}({tc['args']})")


if __name__ == "__main__":
    asyncio.run(main())
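The agent example depends on a few external launchers and packages (names taken from the snippet above; versions unpinned, so verify against current releases):

```shell
# Python-side dependencies for the agent example
pip install langchain langchain-openai langchain-mcp-adapters

# `uvx` (ships with uv) launches mcp-server-fetch and duckduckgo-mcp-server;
# `npx` (ships with Node.js) launches @modelcontextprotocol/server-filesystem
pip install uv
```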
Limitation
EXAONE 4.5 models, like all existing multimodal models, have certain limitations and may occasionally generate inappropriate responses. The models generate responses based on output probabilities over tokens, which are determined during training from the training data. While we make every effort to exclude personal, harmful, and biased information from the training data, some problematic content may still be included, potentially leading to undesirable responses. Please note that text generated by EXAONE 4.5 models does not reflect the views of LG AI Research.
- Inappropriate answers may be generated that contain personal, harmful, or other inappropriate information.
- Biased responses may be generated that are associated with age, gender, race, and so on.
- The generated responses rely heavily on statistics from the training data, which can result in the generation of semantically or syntactically incorrect sentences.
- Since the models do not reflect the latest information, the responses may be false or contradictory.
LG AI Research strives to reduce potential risks that may arise from EXAONE 4.5 models. When using EXAONE 4.5 models, users may not engage in any malicious activities (e.g., entering illegal information) that may induce the creation of inappropriate outputs violating LG AI's ethical principles.
License
The model is licensed under the EXAONE AI Model License Agreement 1.2 - NC.
Citation
@article{exaone-4.5,
title={EXAONE 4.5 Technical Report},
author={{LG AI Research}},
journal={arXiv preprint arXiv:XXXX.XXXXX},
year={2026}
}
Contact
LG AI Research Technical Support: contact_us@lgresearch.ai