CVE-2026-30304 PUBLISHED

Assigner: mitre
Reserved: 04.03.2026 Published: 27.03.2026 Updated: 27.03.2026

In its design for automatic terminal command execution, AI Code offers two options: Execute safe commands and execute all commands. The description for the former states that commands determined by the model to be safe will be automatically executed, whereas if the model judges a command to be potentially destructive, it still requires user approval. However, this design is highly susceptible to prompt injection attacks. An attacker can employ a generic template to wrap any malicious command and mislead the model into misclassifying it as a 'safe' command, thereby bypassing the user approval requirement and resulting in arbitrary command execution.

Product Status

Vendor n/a
Product n/a
Versions
  • Version n/a is affected

References

Problem Types

  • n/a text