CVE-2025-62164

HIGH

vLLM < 0.11.1 - Out-of-Bounds Write

Title source: rule

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.10.2 up to but not including 0.11.1, the Completions API endpoint contains a memory corruption vulnerability that can cause a crash (denial of service) and potentially remote code execution (RCE). When processing user-supplied prompt embeddings, the endpoint loads serialized tensors with torch.load() without sufficient validation. Because of a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled by default, so maliciously crafted tensors can bypass internal bounds checks and trigger an out-of-bounds memory write during the subsequent call to to_dense(). This memory corruption can crash vLLM and potentially allow code execution on the server hosting it. The issue has been patched in version 0.11.1.
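The root cause is that a sparse tensor's index tensor can claim coordinates outside the dense shape it advertises; when invariant checks are off, to_dense() scatters values through those indices and writes out of bounds. The validation that PyTorch 2.8.0 leaves disabled by default (it can be re-enabled via torch.sparse.check_sparse_tensor_invariants) amounts to a bounds check over every coordinate before any write happens. A minimal pure-Python sketch of that invariant, for illustration only (this is not vLLM's or PyTorch's actual code, and the function name is hypothetical):

```python
def validate_coo_indices(indices, shape):
    """Reject sparse COO coordinates that fall outside the dense shape.

    indices: list of coordinate tuples, one per stored nonzero value.
    shape:   the dense shape the tensor claims to have.
    Mirrors the invariant a sparse-tensor integrity check enforces:
    every coordinate must satisfy 0 <= idx < shape[dim] in each dim.
    """
    for coord in indices:
        if len(coord) != len(shape):
            raise ValueError(
                f"coordinate rank {len(coord)} != tensor rank {len(shape)}"
            )
        for dim, idx in enumerate(coord):
            if not 0 <= idx < shape[dim]:
                raise ValueError(
                    f"index {idx} out of bounds for dim {dim} "
                    f"with size {shape[dim]}"
                )
    return True

# A benign tensor passes; a crafted one is rejected before the
# scatter-style write where to_dense() would corrupt memory.
assert validate_coo_indices([(0, 1), (2, 3)], (3, 4))
try:
    validate_coo_indices([(0, 1), (99, 3)], (3, 4))  # malicious index 99
except ValueError:
    pass
```

Without a check of this kind, index 99 above would address memory far past the end of a 3x4 buffer, which is exactly the out-of-bounds write (CWE-787) the advisory describes.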

Scores

CVSS v3 8.8
EPSS 0.0011
EPSS Percentile 29.2%
Attack Vector NETWORK
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H

Classification

CWE
CWE-502 CWE-787 CWE-20 CWE-123
Status published

Affected Products (4)

vllm/vllm < 0.11.1
vllm/vllm
vllm/vllm
pypi/vllm < 0.11.1 (PyPI)

Timeline

Published Nov 21, 2025
Tracked Since Feb 18, 2026