CVE-2026-44223 PUBLISHED

vLLM: extract_hidden_states speculative decoding crashes server on any request with penalty parameters

Assigner: GitHub_M
Reserved: 05.05.2026 Published: 12.05.2026 Updated: 12.05.2026

vLLM is an inference and serving engine for large language models (LLMs). From 0.18.0 before 0.20.0, the extract_hidden_states speculative decoding proposer in vLLM returns a tensor with an incorrect shape after the first decode step, causing a RuntimeError that crashes the EngineCore process. The crash is triggered when any request in the batch uses sampling penalty parameters (repetition_penalty, frequency_penalty, or presence_penalty). A single request with a penalty parameter (e.g., "repetition_penalty": 1.1) is sufficient to crash the server. This vulnerability is fixed in 0.20.0.
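As a sketch of the trigger condition described above, the following builds a minimal completion-style request body carrying one penalty parameter. The model name and endpoint path are placeholders, not taken from the advisory; per the description, any single request like this reaching an affected server running the extract_hidden_states proposer is enough to crash the EngineCore process.

```python
import json

# Hypothetical minimal request body illustrating the trigger condition.
# Any one of repetition_penalty, frequency_penalty, or presence_penalty
# set on a single request in the batch is sufficient, per the advisory.
payload = {
    "model": "my-model",        # placeholder; substitute the served model name
    "prompt": "Hello",
    "max_tokens": 16,
    "repetition_penalty": 1.1,  # the penalty parameter from the advisory's example
}

# Serialized body as it would be POSTed to an OpenAI-compatible
# completions endpoint (e.g., /v1/completions) on an affected server.
body = json.dumps(payload)
```

On an affected deployment (0.18.0 inclusive to 0.20.0 exclusive), the mis-shaped tensor surfaces as a RuntimeError after the first decode step rather than an error response to the client.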

Metrics

CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
CVSS Score: 6.5

Product Status

Vendor vllm-project
Product vllm
Versions
  • Version >= 0.18.0, < 0.20.0 is affected
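The affected range above can be expressed as a simple version check. This is an illustrative helper, not part of vLLM; it assumes plain "X.Y.Z" version strings and compares them as integer tuples.

```python
def is_affected(version: str) -> bool:
    # Parse a plain "X.Y.Z" version string into an integer tuple.
    # Assumes no pre-release or local-version suffixes.
    parts = tuple(int(p) for p in version.split("."))
    # Affected range per the advisory: >= 0.18.0 and < 0.20.0.
    return (0, 18, 0) <= parts < (0, 20, 0)
```

For example, is_affected("0.19.1") is True, while 0.20.0 and later (the fixed releases) return False.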

References

Problem Types

  • CWE-131: Incorrect Calculation of Buffer Size
  • CWE-704: Incorrect Type Conversion or Cast