Description
vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.8.3 to before 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL throws an error, and vLLM returns that error message to the client, leaking a heap address. With this leak, the ASLR search space is reduced from roughly 4 billion guesses to about 8. This vulnerability can be chained with a heap overflow in the JPEG2000 decoder in OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in 0.14.1.
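A minimal sketch of the leak class described above (this is not vLLM's actual code): when a server echoes a raw Python exception string back to the client, and that string interpolates an object's default `repr`, the message embeds the object's CPython `id()`, which is its heap address. The `Decoder` class and `handle_upload` function below are hypothetical names for illustration.

```python
class Decoder:
    """Stand-in for an internal decoder object (hypothetical)."""
    pass


def handle_upload(data: bytes) -> str:
    """Simulate an endpoint that echoes internal errors to the client."""
    decoder = Decoder()
    try:
        # Default repr looks like '<__main__.Decoder object at 0x7f9c...>',
        # so the exception message now contains a heap pointer.
        raise ValueError(f"cannot identify image data in {decoder!r}")
    except ValueError as exc:
        # Anti-pattern: returning the raw error string to the client
        # instead of a sanitized, generic message.
        return str(exc)


msg = handle_upload(b"\x00not-an-image")
# msg contains a '0x...' hex address, weakening ASLR for a remote attacker.
```

The fix pattern is the inverse: log the detailed exception server-side and return only a generic error (e.g. "invalid image") to the client.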
Scores
CVSS v3: 9.8 (CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H)
EPSS: 0.0009 (percentile 24.8%)
Attack Vector: NETWORK
CISA SSVC (Vulnrichment)
Exploitation: none
Automatable: yes
Technical Impact: total
Details
CWE: CWE-532
Status: published
Products (2)
pypi/vllm: 0.8.3 - 0.14.1 (PyPI)
vllm/vllm: 0.8.3 - 0.14.1
Published: Feb 02, 2026
Tracked Since: Feb 18, 2026