CVE-2026-44222 PUBLISHED

vLLM: Remote DoS via Special-Token Placeholders

Assigner: GitHub_M
Reserved: 05.05.2026 Published: 12.05.2026 Updated: 12.05.2026

vLLM is an inference and serving engine for large language models (LLMs). From version 0.6.1 up to but not including 0.20.0, there is a token-injection vulnerability in vLLM's multimodal processing. Unauthenticated, text-only prompts that spell out special tokens are interpreted as control tokens: image and video placeholder sequences supplied without matching media data cause vLLM to index into empty grids during input-position computation, raising an unhandled IndexError and terminating the worker or degrading availability. Multimodal paths that rely on image_grid_thw/video_grid_thw are affected. This vulnerability is fixed in version 0.20.0.
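The failure mode described above can be sketched as follows. This is an illustrative reconstruction, not vLLM's actual code: the placeholder string, function names, and grid layout are assumptions chosen to show the pattern of indexing a per-image grid without first checking that each placeholder has a matching entry.

```python
# Hypothetical sketch of the vulnerable pattern (names are illustrative,
# not vLLM's real identifiers). image_grid_thw holds one (t, h, w) tuple
# per image attached to the request.

IMAGE_PLACEHOLDER = "<|image_pad|>"  # assumed special-token string

def compute_positions(prompt_tokens, image_grid_thw):
    """Vulnerable pattern: indexes the grid for every placeholder token
    without checking that a matching grid entry exists."""
    positions = []
    image_idx = 0
    for tok in prompt_tokens:
        if tok == IMAGE_PLACEHOLDER:
            # A text-only prompt that spells the placeholder reaches this
            # branch with an empty grid -> unhandled IndexError.
            t, h, w = image_grid_thw[image_idx]
            positions.append(t * h * w)
            image_idx += 1
        else:
            positions.append(1)
    return positions

def compute_positions_safe(prompt_tokens, image_grid_thw):
    """Hardened variant: reject requests where the placeholder count does
    not match the number of supplied grids, instead of crashing."""
    n_placeholders = sum(1 for t in prompt_tokens if t == IMAGE_PLACEHOLDER)
    if n_placeholders != len(image_grid_thw):
        raise ValueError(
            f"prompt has {n_placeholders} image placeholder(s) but "
            f"{len(image_grid_thw)} image grid(s) were supplied"
        )
    return compute_positions(prompt_tokens, image_grid_thw)
```

The fix pattern is to validate the placeholder/grid correspondence up front and return a request-level error, so a malicious text-only prompt cannot crash the serving worker.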

Metrics

CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
CVSS Score: 6.5

Product Status

Vendor vllm-project
Product vllm
Versions
  • Version >= 0.6.1, < 0.20.0 is affected

References

Problem Types

  • CWE-129: Improper Validation of Array Index