CVE-2026-42203 PUBLISHED

LiteLLM: Server-Side Template Injection in /prompts/test endpoint

Assigner: GitHub_M
Reserved: 2026-04-25
Published: 2026-05-08
Updated: 2026-05-08

LiteLLM is a proxy server (AI Gateway) for calling LLM APIs in the OpenAI format or a provider's native format. Starting in version 1.80.5 and prior to version 1.83.7, the POST /prompts/test endpoint accepted user-supplied prompt templates and rendered them without sandboxing, so a crafted template could execute arbitrary code inside the LiteLLM Proxy process. The endpoint checks only that the caller presents a valid proxy API key, so any authenticated user could reach it. Depending on how the proxy is deployed, exploitation could expose secrets in the process environment (such as provider API keys or database credentials) and allow commands to be run on the host. This issue has been patched in version 1.83.7.
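
The advisory does not name the template engine. Assuming Jinja2, the engine most often behind CWE-1336 findings in Python services, the sketch below illustrates the vulnerability class: a plain Environment renders a user-supplied template with full access to Python internals, while a SandboxedEnvironment rejects the same payload. This is illustrative only and does not reproduce LiteLLM's actual code paths.

    # Minimal sketch of CWE-1336 (template injection), assuming Jinja2.
    from jinja2 import Environment
    from jinja2.exceptions import SecurityError
    from jinja2.sandbox import SandboxedEnvironment

    # Well-known Jinja2 payload: walks from the default global "cycler" to
    # its module globals, which include the os module, then runs a command.
    payload = "{{ cycler.__init__.__globals__.os.popen('id').read() }}"

    # Vulnerable pattern: an unsandboxed Environment lets the payload reach
    # os.popen, so the command runs inside the rendering process.
    print(Environment().from_string(payload).render())

    # Hardened pattern: SandboxedEnvironment blocks dunder attribute access
    # and raises SecurityError instead of executing the payload.
    try:
        SandboxedEnvironment().from_string(payload).render()
    except SecurityError as exc:
        print(f"blocked by sandbox: {exc}")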

Metrics

CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:H/VA:N/SC:N/SI:N/SA:N
CVSS Score: 8.6 (High)

Product Status

Vendor BerriAI
Product litellm
Versions
  • Versions >= 1.80.5 and < 1.83.7 are affected; see the version check below
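
Operators unsure whether a deployment falls in this range can compare the installed package version; a minimal check, assuming Python with the packaging library available on the proxy host:

    # Check whether the installed litellm version is in the affected range.
    from importlib.metadata import version
    from packaging.version import Version

    installed = Version(version("litellm"))
    if Version("1.80.5") <= installed < Version("1.83.7"):
        print(f"litellm {installed} is affected; upgrade to 1.83.7 or later")
    else:
        print(f"litellm {installed} is outside the affected range")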

Problem Types

  • CWE-1336: Improper Neutralization of Special Elements Used in a Template Engine