Description
vLLM is an inference and serving engine for large language models (LLMs). Before version 0.11.0rc2, vLLM's built-in API key support validated keys using a method vulnerable to a timing attack: the provided key was checked with a non-constant-time string comparison, which takes slightly longer the more leading characters of the key are correct. By statistically analyzing response times across many attempts, an attacker could detect when each successive character had been guessed correctly and recover the key one character at a time. Deployments relying on vLLM's built-in API key validation were therefore vulnerable to authentication bypass. Version 0.11.0rc2 fixes the issue.
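The class of bug can be illustrated with a minimal sketch (this is not vLLM's actual code; see the linked commit for the real fix). A plain `==` on strings short-circuits at the first mismatch, leaking timing information, while `hmac.compare_digest` compares in time independent of where the inputs differ:

```python
import hmac

# Hypothetical secret for illustration only.
EXPECTED_API_KEY = "example-secret-key"

def check_api_key_vulnerable(provided: str) -> bool:
    # Plain == stops at the first mismatching character, so the
    # comparison takes longer the more leading characters are correct.
    # An attacker measuring response times can recover the key
    # character by character (CWE-385, covert timing channel).
    return provided == EXPECTED_API_KEY

def check_api_key_constant_time(provided: str) -> bool:
    # hmac.compare_digest's runtime does not depend on where the
    # inputs differ, defeating the character-by-character attack.
    return hmac.compare_digest(provided.encode(), EXPECTED_API_KEY.encode())
```

Both functions return the same boolean results; only their timing behavior differs, which is exactly what the attack exploits.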
References (4)
Core References
Exploit, Vendor Advisory
https://github.com/vllm-project/vllm/security/advisories/GHSA-wr9h-g72x-mwhm
Patch
https://github.com/vllm-project/vllm/commit/ee10d7e6ff5875386c7f136ce8b5f525c8fcef48
Product
https://github.com/vllm-project/vllm/blob/4b946d693e0af15740e9ca9c0e059d5f333b1083/vllm/entrypoints/openai/api_server.py#L1270-L1274
Release Notes
https://github.com/vllm-project/vllm/releases/tag/v0.11.0
Scores
CVSS v3
7.5
EPSS
0.0028
EPSS Percentile
51.0%
Attack Vector
NETWORK
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
CISA SSVC
Vulnrichment
Exploitation
none
Automatable
yes
Technical Impact
partial
Details
CWE
CWE-385
Status
published
Products (3)
pypi/vllm
0 - 0.11.0 (PyPI)
vllm/vllm
0.11.0rc1
vllm/vllm
< 0.11.0
Published
Oct 07, 2025
Tracked Since
Feb 18, 2026