CVE-2026-27893

HIGH

vLLM's hardcoded trust_remote_code=True in NemotronVL and KimiK25 bypasses user security opt-out

Title source: cna

Description

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
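The flaw described above can be sketched as follows. This is a hypothetical illustration, not vLLM's actual code: the `ModelConfig` dataclass and loader functions below are invented names standing in for a model implementation that builds keyword arguments for a Hugging Face-style sub-component loader. The vulnerable variant hardcodes `trust_remote_code=True`; the patched variant propagates the user's explicit setting.

```python
# Hypothetical sketch of the vulnerability class (not vLLM's real code).
from dataclasses import dataclass


@dataclass
class ModelConfig:
    # Stands in for the user-facing --trust-remote-code flag (assumed name).
    trust_remote_code: bool = False


def subcomponent_kwargs_vulnerable(config: ModelConfig) -> dict:
    # BUG: the user's opt-out is ignored -- remote code is always trusted,
    # so a malicious model repository can achieve code execution.
    return {"trust_remote_code": True}


def subcomponent_kwargs_patched(config: ModelConfig) -> dict:
    # FIX: honor the user's explicit security setting when loading
    # sub-components, matching the top-level loader's behavior.
    return {"trust_remote_code": config.trust_remote_code}
```

With `--trust-remote-code=False`, the vulnerable variant still passes `trust_remote_code=True` downstream, while the patched variant passes `False`, which is the behavior restored in 0.18.0.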

Scores

CVSS v3.1 8.8
EPSS 0.0003
EPSS Percentile 10.4%
Attack Vector NETWORK
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

CISA SSVC

Vulnrichment
Exploitation none
Automatable no
Technical Impact total

Details

CWE
CWE-693
Status published
Products (3)
pypi/vllm >= 0.10.1, < 0.18.0 (PyPI)
vllm/vllm >= 0.10.1, < 0.18.0
vllm-project/vllm >= 0.10.1, < 0.18.0
Published Mar 27, 2026
Tracked Since Mar 27, 2026