Description
vLLM is a high-throughput, memory-efficient inference and serving engine for LLMs. When vLLM is configured to use Mooncake for distributed KV cache transfer, its receive path performs unsafe deserialization of data arriving over ZMQ/TCP sockets bound to all network interfaces, allowing an attacker to execute arbitrary code on the distributed hosts. This remote code execution vulnerability affects any deployment that uses Mooncake to distribute the KV cache across hosts. The vulnerability is fixed in vLLM 0.8.0.
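The underlying weakness class (CWE-502, deserialization of untrusted data) can be illustrated with a minimal, self-contained Python sketch. This is not vLLM/Mooncake code: the `Payload` class and the string it smuggles are hypothetical stand-ins for whatever bytes an attacker would send over the exposed ZMQ/TCP socket. The key point is that `pickle.loads()` lets the sender name any importable callable to be invoked on the receiver:

```python
import pickle

class Payload:
    """Hypothetical attacker-crafted object (illustration only)."""
    def __reduce__(self):
        # On deserialization, pickle invokes the callable returned here
        # with the given arguments. A real attacker would substitute
        # something like os.system; eval on a harmless string literal
        # is used to keep this demonstration benign.
        return (eval, ("'attacker code ran'",))

# What an attacker would put on the wire toward the vulnerable endpoint:
wire_bytes = pickle.dumps(Payload())

# The vulnerable pattern: deserializing untrusted network bytes.
result = pickle.loads(wire_bytes)
print(result)  # attacker code ran
```

The safe design, reflected in the general guidance for CWE-502, is to never unpickle data from an untrusted source and to use a data-only serialization format (e.g. JSON or msgpack) for network transport.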
Exploits (1)
Working PoC (GitHub, by manus-use): https://github.com/manus-use/cve-pocs/tree/main/vllm-CVE-2025-29783
Scores
CVSS v3: 9.0 (CVSS:3.1/AV:A/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H)
EPSS: 0.0170 (82.3rd percentile)
Attack Vector: ADJACENT_NETWORK
Lab Environment: Community Lab
Details
CWE: CWE-502 (Deserialization of Untrusted Data)
Status: published
Products (2)
pypi/vllm: 0.6.5 - 0.8.0 (PyPI)
vllm/vllm: 0.6.5 - 0.8.0
Published: Mar 19, 2025
Tracked Since: Feb 18, 2026