A flaw was found in InstructLab. The linux_train.py script hardcodes trust_remote_code=True when loading models from HuggingFace. Because this flag instructs the transformers library to import and execute Python code shipped inside the model repository, a remote attacker can achieve arbitrary Python code execution by convincing a user to run ilab train/download/generate with a specially crafted malicious model from the HuggingFace Hub. This vulnerability can lead to complete system compromise.
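The danger of trust_remote_code=True can be illustrated without network access or the transformers library. This is a minimal simulation, not InstructLab code: the "model repo" here is a local temporary directory standing in for an attacker-controlled HuggingFace repository, and runpy stands in for the dynamic-import step that trust_remote_code enables.

```python
import pathlib
import runpy
import tempfile

# Stand-in for a model repository an attacker controls: alongside the
# weights, it ships a custom modeling file containing arbitrary Python.
repo = pathlib.Path(tempfile.mkdtemp())
(repo / "modeling_custom.py").write_text(
    "print('attacker code running with the user\\'s privileges')\n"
    "PAYLOAD_RAN = True\n"
)

# trust_remote_code=True effectively does this: execute the repo-shipped
# Python locally, with whatever privileges the ilab process holds.
namespace = runpy.run_path(str(repo / "modeling_custom.py"))
print(namespace["PAYLOAD_RAN"])
```

Anything the payload does here (spawning processes, reading credentials, writing files) runs as the invoking user, which is why the flaw is rated as leading to complete system compromise.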
To mitigate this issue, only use models from trusted sources when performing InstructLab operations. Review the origin and integrity of any HuggingFace model before using it with ilab train/download/generate. Additionally, consider running InstructLab commands within a sandboxed or isolated environment (for example, a container or virtual machine with minimal privileges) to limit the impact of any untrusted code that does execute.
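One way to operationalize the "trusted sources only" guidance is to gate the flag behind an explicit allowlist instead of hardcoding it. The sketch below is hypothetical and not part of InstructLab: the helper name, the allowlist contents, and the example model IDs are all assumptions, shown only to demonstrate a default-deny pattern for the from_pretrained kwargs.

```python
# Hypothetical allowlist (assumption, not an InstructLab feature):
# populate it from your own review process.
TRUSTED_MODELS = {
    "instructlab/merlinite-7b-lab",  # example entry; verify per your policy
}

def safe_load_kwargs(model_id: str) -> dict:
    """Return from_pretrained kwargs, enabling trust_remote_code only
    for models that were explicitly reviewed and allowlisted."""
    if model_id in TRUSTED_MODELS:
        return {"trust_remote_code": True}
    # Default-deny: custom modeling code in the repo will NOT execute.
    return {"trust_remote_code": False}

print(safe_load_kwargs("attacker/evil-model"))
# The returned dict would be splatted into the loader, e.g.:
#   AutoModelForCausalLM.from_pretrained(model_id, **safe_load_kwargs(model_id))
```

With this pattern, an unreviewed repository can still fail to load (if it genuinely requires custom code), but it can never silently execute that code.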