Description
vLLM is a high-throughput, memory-efficient inference and serving engine for LLMs. Versions from 0.6.5 up to (but not including) 0.8.5 that use vLLM's Mooncake integration are vulnerable to remote code execution because pickle-based serialization is used over unsecured ZeroMQ sockets. The vulnerable sockets were set to listen on all network interfaces, increasing the likelihood that an attacker could reach them and carry out an attack. vLLM instances that do not use the Mooncake integration are not affected. This issue has been patched in version 0.8.5.
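The root cause is the classic pickle deserialization flaw (CWE-502): unpickling bytes from an untrusted peer lets that peer execute arbitrary code, because any pickled object can specify a callable to invoke during reconstruction via `__reduce__`. The minimal sketch below is illustrative only (it is not vLLM's or Mooncake's actual code) and uses a benign `eval` call as a stand-in for a real payload such as `os.system(...)`:

```python
import pickle

class Malicious:
    # __reduce__ tells pickle "reconstruct this object by calling this
    # callable with these arguments" -- so merely unpickling the bytes
    # runs attacker-chosen code on the receiving side.
    def __reduce__(self):
        # Benign stand-in; a real exploit would return e.g.
        # (os.system, ("malicious command",))
        return (eval, ("21 * 2",))

# Attacker serializes the object and sends the bytes over the network
# (in the vulnerable setup, to a ZeroMQ socket bound on all interfaces).
payload = pickle.dumps(Malicious())

# Victim deserializes: eval("21 * 2") executes during pickle.loads().
result = pickle.loads(payload)
print(result)  # 42
```

This is why the fix is not to authenticate the pickle data but to avoid `pickle` for untrusted input entirely, in favor of a data-only format such as JSON or msgpack.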
References
Exploit, Vendor Advisory
https://github.com/vllm-project/vllm/security/advisories/GHSA-hj4w-hm2g-p6w5
Not Applicable
https://github.com/vllm-project/vllm/security/advisories/GHSA-x3m8-f7g5-qhm7
Patch
https://github.com/vllm-project/vllm/commit/a5450f11c95847cf51a17207af9a3ca5ab569b2c
Scores
CVSS v3
10.0
EPSS
0.0248
EPSS Percentile
85.3%
Attack Vector
NETWORK
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H
CISA SSVC
Vulnrichment
Exploitation
none
Automatable
yes
Technical Impact
total
Details
CWE
CWE-502
Status
published
Products (2)
pypi/vllm
0.6.5 - 0.8.5 (PyPI)
vllm/vllm
0.6.5 - 0.8.5
Published
Apr 30, 2025
Tracked Since
Feb 18, 2026