CVE-2026-6859 PUBLISHED

Instructlab: instructlab: arbitrary code execution due to hardcoded `trust_remote_code=True`

Assigner: redhat
Reserved: 22.04.2026
Published: 22.04.2026
Updated: 22.04.2026

A flaw was found in InstructLab. The `linux_train.py` script hardcodes `trust_remote_code=True` when loading models from the Hugging Face Hub. A remote attacker can achieve arbitrary Python code execution by convincing a user to run `ilab train`, `ilab download`, or `ilab generate` against a specially crafted malicious model hosted on the Hub. This vulnerability can lead to complete system compromise.
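To illustrate the mechanism (this is not InstructLab's actual code): with `trust_remote_code=True`, the `transformers` library downloads a `modeling_*.py` file from the model repository and imports it, and importing a module runs its top-level statements. The self-contained sketch below simulates that step with a locally written file instead of a downloaded one:

```python
import importlib.util
import pathlib
import tempfile

# Stand-in for a malicious model repo: the attacker controls the
# contents of the modeling file shipped alongside the weights.
repo = pathlib.Path(tempfile.mkdtemp())
(repo / "modeling_custom.py").write_text(
    "# Top-level statements run at import time -- in a real attack this\n"
    "# could be os.system(...), a reverse shell, or credential theft.\n"
    "EXECUTED = 'attacker-controlled code ran'\n"
)

# This import is what trust_remote_code=True authorizes transformers
# to perform on the user's behalf.
spec = importlib.util.spec_from_file_location(
    "modeling_custom", repo / "modeling_custom.py"
)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

print(module.EXECUTED)  # the "model" has already executed arbitrary code
```

No model class ever needs to be instantiated: the code runs the moment the module is imported, which is why merely loading a malicious model is sufficient for compromise.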

Metrics

CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
CVSS Score: 8.8
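The 8.8 score can be reproduced from the vector above using the CVSS v3.1 base-score formula published by FIRST (the metric weights and the spec-defined "roundup" are taken from that specification):

```python
# CVSS v3.1 base score for AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.62   # Network / Low / None / Required
C = I = A = 0.56                          # Confidentiality/Integrity/Availability: High

def roundup(x):
    """Spec-defined rounding: smallest value with one decimal place >= x."""
    i = int(round(x * 100000))
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

iss = 1 - (1 - C) * (1 - I) * (1 - A)     # Impact Sub-Score
impact = 6.42 * iss                        # Scope: Unchanged
exploitability = 8.22 * AV * AC * PR * UI
score = roundup(min(impact + exploitability, 10))
print(score)  # 8.8
```

User Interaction: Required (UI:R, weight 0.62) is what keeps this below the 9.x range; every other base metric takes its worst value for an unchanged scope.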

Product Status

Vendor Red Hat
Product Red Hat Enterprise Linux AI (RHEL AI) 3
Versions Default: affected

Workarounds

To mitigate this issue, only use models from trusted sources when performing InstructLab operations. Review the origin and integrity of any Hugging Face model before using it with `ilab train`, `ilab download`, or `ilab generate`. Consider running InstructLab commands within a sandboxed or isolated environment to limit the potential impact of executing untrusted code.
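One way to enforce the "trusted sources only" guidance in code is to gate any `trust_remote_code=True` load behind an explicit allowlist. InstructLab does not ship such a helper; the names below (`safe_from_pretrained`, `TRUSTED_REPOS`, the repo IDs) are hypothetical, and the stub loader stands in for a real call such as `AutoModelForCausalLM.from_pretrained`:

```python
# Hypothetical allowlist wrapper: refuse remote code from unknown repos.
TRUSTED_REPOS = {"example-org/reviewed-model"}

def safe_from_pretrained(repo_id, loader, **kwargs):
    """Call `loader` only if remote code is disabled or the repo is trusted."""
    if kwargs.get("trust_remote_code") and repo_id not in TRUSTED_REPOS:
        raise PermissionError(
            f"{repo_id} is not on the allowlist; refusing trust_remote_code=True"
        )
    return loader(repo_id, **kwargs)

# Stub standing in for a real Hugging Face loader:
fake_loader = lambda repo_id, **kw: f"loaded {repo_id}"

print(safe_from_pretrained("example-org/reviewed-model", fake_loader,
                           trust_remote_code=True))
try:
    safe_from_pretrained("evil/backdoored-model", fake_loader,
                         trust_remote_code=True)
except PermissionError as exc:
    print("blocked:", exc)
```

Pinning a reviewed commit with the `revision` parameter of `from_pretrained` adds a further guarantee that the audited code is the code that actually runs.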

Credits

  • Red Hat would like to thank Martin Brodeur (independent security researcher) for reporting this issue.

Problem Types

  • CWE-829: Inclusion of Functionality from Untrusted Control Sphere