CVE-2026-27893 PUBLISHED

vLLM's hardcoded trust_remote_code=True in NemotronVL and KimiK25 bypasses user security opt-out

Assigner: GitHub_M
Reserved: 2026-02-24  Published: 2026-03-26  Updated: 2026-03-27

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, the NemotronVL and KimiK25 model implementation files hardcode trust_remote_code=True when loading sub-components, bypassing the user's explicit --trust-remote-code=False security opt-out. A malicious model repository can therefore achieve remote code execution even when the user has disabled remote code trust. Version 0.18.0 patches the issue.
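The failure mode can be sketched in a few lines. This is an illustrative reconstruction, not the actual vLLM source: the function names and the dict-based loader are hypothetical, and stand in for the real sub-component loading calls. The vulnerable pattern passes a literal True instead of propagating the user's setting.

```python
# Hypothetical sketch of the bug class (CWE-693: Protection Mechanism
# Failure). Names are illustrative, not vLLM's actual API.

def load_subcomponent_vulnerable(model_path: str, user_trust_remote_code: bool) -> dict:
    # BUG pattern: the literal True silently discards the user's opt-out,
    # so code shipped in the model repository is always executed.
    return {"path": model_path, "trust_remote_code": True}

def load_subcomponent_patched(model_path: str, user_trust_remote_code: bool) -> dict:
    # Fix pattern: thread the user's explicit choice through to every
    # sub-component load instead of hardcoding it.
    return {"path": model_path, "trust_remote_code": user_trust_remote_code}
```

With the vulnerable variant, a user who launched the server with --trust-remote-code=False still ends up loading remote code; the patched variant honors the opt-out.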

Metrics

CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
CVSS Score: 8.8

Product Status

Vendor vllm-project
Product vllm
Versions
  • Version >= 0.10.1, < 0.18.0 is affected

Problem Types

  • CWE-693: Protection Mechanism Failure