No advisories yet.
Solution
No solution given by the vendor.
Workaround
To mitigate this issue, only use models from trusted sources when performing `instructlab` operations. Review the origin and integrity of any HuggingFace model before using it with `ilab train/download/generate`. Consider running `instructlab` commands within a sandboxed or isolated environment to limit the potential impact of executing untrusted code.
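As an illustration of the mitigation, the sketch below wraps model-loading keyword arguments so that `trust_remote_code` is always forced to `False` (the library default), which makes `transformers` refuse to execute repo-supplied Python. This is a minimal, hypothetical sketch, not InstructLab's code: the helper name and the placeholder model ID are assumptions, and the actual `from_pretrained` call is shown commented out because it requires the `transformers` package and network access.

```python
def harden_load_kwargs(**kwargs):
    """Return load kwargs with trust_remote_code forced to False,
    so Python code shipped inside a model repo is never executed."""
    kwargs["trust_remote_code"] = False
    return kwargs

# Usage sketch (requires `transformers` and network access; model ID is a placeholder):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "example-org/example-model",
#     **harden_load_kwargs(),
# )
```

Forcing the flag in one wrapper, rather than at each call site, avoids reintroducing the hardcoded `trust_remote_code=True` pattern described in this advisory.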
Thu, 23 Apr 2026 00:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| References | | |
| Metrics | threat_severity | threat_severity |
Wed, 22 Apr 2026 13:45:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Description | | A flaw was found in InstructLab. The `linux_train.py` script hardcodes `trust_remote_code=True` when loading models from HuggingFace. This allows a remote attacker to achieve arbitrary Python code execution by convincing a user to run `ilab train/download/generate` with a specially crafted malicious model from the HuggingFace Hub. This vulnerability can lead to complete system compromise. |
| Title | | instructlab: arbitrary code execution due to hardcoded `trust_remote_code=True` |
| First Time appeared | | Redhat, Redhat enterprise Linux Ai |
| Weaknesses | | CWE-829 |
| CPEs | | cpe:/a:redhat:enterprise_linux_ai:3 |
| Vendors & Products | | Redhat, Redhat enterprise Linux Ai |
| References | | |
| Metrics | | cvssV3_1 |
Status: PUBLISHED
Assigner: redhat
Published:
Updated: 2026-04-22T13:04:04.795Z
Reserved: 2026-04-22T12:54:46.753Z
Link: CVE-2026-6859
Status: Awaiting Analysis
Published: 2026-04-22T14:17:07.687
Modified: 2026-04-22T21:23:52.620
Link: CVE-2026-6859
OpenCVE Enrichment
Updated: 2026-04-22T19:30:24Z