Description
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_|>, <|image_|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs. This issue has been patched in version 0.8.5.
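A minimal sketch of the quadratic pattern the advisory describes, with a linear rewrite for contrast. The function names and per-modality lengths here are illustrative assumptions, not vLLM's actual identifiers:

```python
# Illustrative sketch only; names and placeholder lengths are hypothetical,
# not vLLM's actual code.
PLACEHOLDER_LENGTHS = {"<|image_|>": 512, "<|audio_|>": 128}

def expand_quadratic(tokens: list[str]) -> list[str]:
    out: list[str] = []
    for tok in tokens:
        # `out + [...]` allocates a new list and copies everything
        # accumulated so far on every iteration, so a prompt with many
        # placeholders does O(n^2) work in the expanded length n.
        out = out + [tok] * PLACEHOLDER_LENGTHS.get(tok, 1)
    return out

def expand_linear(tokens: list[str]) -> list[str]:
    out: list[str] = []
    for tok in tokens:
        # list.extend appends in place at amortized O(1) per token,
        # keeping the whole pass linear in the expanded length.
        out.extend([tok] * PLACEHOLDER_LENGTHS.get(tok, 1))
    return out

# A prompt stuffed with placeholders makes the copying version do work
# quadratic in the expanded output, which is the exhaustion vector.
prompt = ["hello"] + ["<|image_|>"] * 200
assert expand_quadratic(prompt) == expand_linear(prompt)
```

The fix is the standard one for this pattern: append in place rather than rebuilding the accumulated list on each placeholder, so preprocessing cost stays proportional to the expanded prompt length.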
Scores
CVSS v3: 6.5 (medium severity)
EPSS: 0.0057 (68.8th percentile)
Attack Vector: NETWORK
Vector string: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H (low attack complexity, low privileges required, no user interaction, availability impact only)
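As a sanity check, the listed 6.5 follows from applying the CVSS v3.1 base-score formula to this vector; a short reproduction, with metric weights taken from the v3.1 specification:

```python
# Reproduces the 6.5 base score from the vector above using the CVSS v3.1
# base-score formula (scope unchanged). This roundup is a close stand-in
# for the spec's Roundup routine.
import math

def roundup(x: float) -> float:
    return math.ceil(round(x * 10, 6)) / 10

AV_N, AC_L, PR_L, UI_N = 0.85, 0.77, 0.62, 0.85  # PR:L weight for scope unchanged
C_N, I_N, A_H = 0.0, 0.0, 0.56

iss = 1 - (1 - C_N) * (1 - I_N) * (1 - A_H)        # 0.56
impact = 6.42 * iss                                # 3.5952 (> 0, so score is nonzero)
exploitability = 8.22 * AV_N * AC_L * PR_L * UI_N  # ~2.8353
print(roundup(min(impact + exploitability, 10)))   # 6.5
```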
CISA SSVC (Vulnrichment)
Exploitation: poc
Automatable: no
Technical Impact: partial
Details
CWE: CWE-1333 (Inefficient Regular Expression Complexity)
Status: published
Products (2)
pypi/vllm (PyPI): >= 0.8.0, < 0.8.5
vllm/vllm: >= 0.8.0, < 0.8.5
Published: Apr 30, 2025
Tracked Since: Feb 18, 2026