CVE-2025-62164

HIGH

vLLM < 0.11.1 - Out-of-Bounds Write


Description

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.10.2 up to but not including 0.11.1, a memory corruption vulnerability exists in the Completions API endpoint that could lead to a crash (denial of service) and potentially remote code execution (RCE). When processing user-supplied prompt embeddings, the endpoint loads serialized tensors using torch.load() without sufficient validation. Due to a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled by default. As a result, maliciously crafted tensors can bypass internal bounds checks and trigger an out-of-bounds memory write during the call to to_dense(). This memory corruption can crash vLLM and potentially lead to code execution on the server hosting vLLM. This issue has been patched in version 0.11.1.
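To illustrate the class of check that was skipped, here is a minimal pure-Python sketch (not vLLM's or PyTorch's actual code) of validating a COO-style sparse tensor's indices against its declared shape before densifying. The function name and data layout are hypothetical; the point is that without the bounds check, an attacker-chosen index drives the write in the densify loop out of bounds.

```python
def to_dense_checked(indices, values, shape):
    """Densify a 2-D COO-style sparse tensor, validating indices first.

    Sketch of the invariant that PyTorch 2.8.0 no longer verifies by
    default: every coordinate must lie inside the declared shape,
    otherwise the write below would land outside the buffer.
    """
    if len(indices) != len(values):
        raise ValueError("indices/values length mismatch")
    rows, cols = shape
    dense = [[0.0] * cols for _ in range(rows)]
    for (r, c), v in zip(indices, values):
        # The missing check: reject out-of-range coordinates before writing.
        if not (0 <= r < rows and 0 <= c < cols):
            raise ValueError(f"index ({r}, {c}) out of bounds for shape {shape}")
        dense[r][c] = v
    return dense

# Benign input densifies normally; a crafted index is rejected
# instead of writing past the end of the buffer.
ok = to_dense_checked([(0, 1), (1, 0)], [1.0, 2.0], (2, 2))
```

In real deployments the analogous hardening is to re-enable PyTorch's checks (e.g. via the `torch.sparse.check_sparse_tensor_invariants` context manager) and avoid calling `torch.load()` on untrusted input, which is the approach the 0.11.1 patch takes.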


Scores

CVSS v3 8.8
EPSS 0.0019
EPSS Percentile 40.7%
Attack Vector NETWORK
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H

CISA SSVC

Vulnrichment
Exploitation none
Automatable no
Technical Impact total

Details

CWE
CWE-502 CWE-787 CWE-20 CWE-123
Status published
Products (3)
pypi/vllm 0.10.2 - 0.11.1 (PyPI)
vllm/vllm 0.11.1rc0 (2 CPE variants)
vllm/vllm 0.10.2 - 0.11.1
Published Nov 21, 2025
Tracked Since Feb 18, 2026