Keras Deep Learning Framework Vulnerability (CVE-2025-1550)

A critical security vulnerability, CVE-2025-1550, has been identified in the Keras deep learning framework, allowing for arbitrary code execution through the `Model.load_model` function, even with `safe_mode=True` enabled. Attackers can exploit this flaw by crafting malicious `.keras` archive files, manipulating the `config.json` to specify arbitrary Python modules and functions, which are executed upon loading the model. This poses significant risks, including data compromise and system control, and carries a CVSS score of 7.3, denoting high severity. The vulnerability affects Keras versions before 3.9. Users are urged to upgrade to version 3.9 or later, load models only from trusted sources, and use self-created model archives to mitigate risks. No direct Indicators of Compromise were found, with the main indicator being the loading of a malicious `.keras` file.
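The archive layout described above can be checked defensively. The sketch below is a minimal, hypothetical triage helper (not part of Keras and not a complete defense): it treats a `.keras` file as the ZIP archive it is, parses `config.json`, and flags any `module` entry outside an allowlist of expected prefixes. Passing this check does not prove a file is safe.

```python
import io
import json
import zipfile

# Hypothetical allowlist of module prefixes a benign model config is
# expected to reference; anything else is worth manual inspection.
ALLOWED_MODULE_PREFIXES = ("keras", "tensorflow")

def find_suspicious_modules(archive_bytes):
    """Return "module" values in config.json outside the allowlist.

    An empty result does NOT prove the archive is safe.
    """
    suspicious = []

    def walk(node):
        # config.json is nested JSON; visit every dict/list recursively.
        if isinstance(node, dict):
            module = node.get("module")
            if isinstance(module, str) and not module.startswith(ALLOWED_MODULE_PREFIXES):
                suspicious.append(module)
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    # A .keras file is a ZIP archive containing config.json.
    with zipfile.ZipFile(io.BytesIO(archive_bytes)) as zf:
        config = json.loads(zf.read("config.json"))
    walk(config)
    return suspicious

# Build a toy archive that mimics a manipulated config.json.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("config.json", json.dumps(
        {"layers": [{"module": "keras.layers"}, {"module": "os"}]}))

print(find_suspicious_modules(buf.getvalue()))  # → ['os']
```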
This security issue gives an attacker the ability to execute arbitrary code on a victim's system by tricking them into loading a specially crafted Keras model file, even with Keras's `safe_mode` enabled. The following protection guardrails can block the successive steps an attacker may take. When an attacker crafts a malicious `.keras` file whose manipulated configuration specifies arbitrary Python functions for execution upon model loading, Python Deserialization Protection helps prevent these malicious functions from being called and executed, thereby blocking the initial arbitrary code execution that occurs when the `Model.load_model` function processes the compromised configuration. Should this initial execution attempt to run further operating system commands, for instance by using Python's `os` or `subprocess` modules to scan directories for sensitive data or execute reconnaissance tools, Python OS Command Injection Prevention steps in to block these unauthorized system-level commands. If the compromised Keras application is running within a container and the attacker's code attempts to execute new scripts or binaries not originally part of the container image, such as downloading and running a second-stage payload like a Remote Access Trojan that establishes persistence through a new scheduled task, Container Drift Protection (Binaries & Scripts) prevents these drifted executables from running. Finally, if the attacker, having gained code execution, attempts to establish a persistent command-and-control channel by initiating a direct socket connection back to their server to exfiltrate data or issue further commands, Reverse Shell Protection detects and blocks this malicious outbound connection, severing the attacker's remote access.
- T1203: Exploitation for Client Execution: The article describes a vulnerability in the Keras deep learning framework that allows for arbitrary code execution when a maliciously crafted model file is loaded. This is an example of the attacker exploiting a vulnerability to execute arbitrary code on the victim's system, which aligns with the MITRE ATT&CK technique 'Exploitation for Client Execution'. The key aspect here is the exploitation of the `Model.load_model` function to execute code specified in a manipulated `config.json` file within a `.keras` archive.
F1: Exploitation of Keras CVE-2025-1550 to achieve Arbitrary Code Execution (ACE) by tricking a victim into loading a maliciously crafted `.keras` model file. The attack works even if `safe_mode=True` is enabled during model loading.
- Attacker identifies a target system using a Keras version prior to 3.9. (Cited from: "The vulnerability affects Keras versions prior to 3.9.")
- Attacker crafts a malicious `.keras` archive file. (Cited from: "An attacker can craft a malicious `.keras` archive file")
- Within this `.keras` archive, the attacker manipulates the `config.json` file. (Cited from: "by manipulating the `config.json` file contained within it.")
- In the manipulated `config.json`, the attacker specifies arbitrary Python modules and functions, along with their arguments, intended for execution. (Cited from: "This manipulation allows the attacker to specify arbitrary Python modules and functions, along with their arguments")
- The attacker ensures the victim loads this malicious `.keras` file using the `Model.load_model` function in their Keras application. (Cited from: "when loading a maliciously crafted model file", "The vulnerability resides within the `Model.load_model` function.")
- During the loading process, the `Model.load_model` function, despite `safe_mode=True` potentially being set, processes the compromised `config.json`. (Cited from: "It was found that even when the `safe_mode=True` option is enabled, the function is susceptible to exploitation.")
- As a result, the arbitrary Python modules and functions specified by the attacker are loaded and executed on the victim's system. (Cited from: "which are then loaded and executed during the model loading process on the victim's system.")
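Since the only reliable fix is upgrading, a deployment can gate model loading on the installed Keras version. The helper below is a simplified, hypothetical check: it assumes plain `major.minor[.patch]` version strings and would need hardening for pre-release suffixes.

```python
from importlib import metadata

def keras_is_patched(version=None):
    """Return True if the given (or installed) Keras version is >= 3.9,
    the first release not affected by CVE-2025-1550."""
    if version is None:
        # Read the installed package version; assumes keras is installed.
        version = metadata.version("keras")
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) >= (3, 9)

print(keras_is_patched("3.8.0"))  # → False
print(keras_is_patched("3.9.2"))  # → True
```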
- BR-76: Python Deserialization Protection - This mechanism is applicable because the Keras `Model.load_model` function, when processing a crafted `.keras` archive with a manipulated `config.json`, effectively deserializes or reconstructs objects and executes specified Python functions. BR-76 is designed to limit actions Python deserialized objects can take, specifically restricting the execution of function calls originating from them, which directly addresses the execution of arbitrary Python functions specified in `config.json`. The LLM Correlation Rule requires explicit mention of Python and 'deserialization' or CWE-502/CWE-20. While 'deserialization' isn't explicit for `config.json` processing, the act of loading a model from a structured file and executing code based on its contents is conceptually similar and involves untrusted input leading to arbitrary function execution, fitting the mechanism's intent to block such function calls.
- BR-77: Python OS Command Injection Prevention - This mechanism is applicable because the vulnerability allows the execution of arbitrary Python modules and functions. Such execution falls under 'Python command execution' as per the LLM Correlation Rule. If these executed Python functions subsequently attempt to execute system-native binaries or shell commands, BR-77 would identify and block these OS command injection attempts. The vulnerability provides the means for this type of command execution.
- BR-54: Container Drift Protection (Binaries & Scripts) - This mechanism is applicable because the vulnerability leads to Arbitrary Code Execution (ACE). If the Keras application is running inside a container, the execution of arbitrary Python modules and functions specified by the attacker constitutes new code that was not part of the original container image. BR-54 prevents the execution of such new/drifted executable code (including interpreted Python scripts or modules if they are written to disk or loaded in a way that triggers this mechanism). The LLM Correlation Rule states that if the vulnerability allows 'remote code execution' or 'attacker to run OS level commands/shell commands', the mechanism applies with hedging for containerized environments.
- BR-82: Process Runtime Execution Guardrails - This mechanism is applicable because the execution of arbitrary Python modules and functions constitutes unauthorized process execution. BR-82 is designed to prevent unauthorized new processes from starting or unauthorized code from running. The LLM Correlation Rule requires mention of 'unauthorized process execution' or 'CWE-78'/'CWE-77', which aligns with the arbitrary code execution nature of the CVE.
- BR-55: Reverse Shell Protection - This mechanism is applicable because the CVE allows for arbitrary code execution. As per the LLM Correlation Rule for BR-55, if a vulnerability leads to 'remote code execution' (which ACE is a form of), it should be assumed that an attacker can establish a reverse shell. The arbitrary Python code executed can be used to initiate a network connection back to the attacker.
- The attacker achieves arbitrary code execution on the victim's system, potentially leading to data compromise, system control, or further network penetration. (Cited from: "The successful exploitation of this vulnerability enables an attacker to execute arbitrary code on the system running the vulnerable Keras version, potentially leading to data compromise, system control, or further network penetration.")
- BR-76: Python Deserialization Protection - This mechanism is applicable because it aims to prevent the initial arbitrary Python function execution that leads to ACE, by restricting function calls originating from the model loading process (deserialization).
- BR-77: Python OS Command Injection Prevention - This mechanism is applicable because it aims to block any OS command injection attempts made by the arbitrary Python code that gets executed, thus preventing or limiting the scope of the ACE.
- BR-54: Container Drift Protection (Binaries & Scripts) - This mechanism is applicable as it seeks to prevent the execution of the arbitrary code itself if the Keras application is containerized and the code is new/drifted.
- BR-82: Process Runtime Execution Guardrails - This mechanism is applicable as it seeks to prevent the unauthorized execution of the arbitrary Python code or any subsequent processes it might spawn.
- BR-55: Reverse Shell Protection - This mechanism is applicable because it aims to prevent one of the common outcomes of ACE, which is establishing a reverse shell for command and control.
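The reverse-shell outcome described for BR-55 can likewise be illustrated with a toy egress guard. This is a conceptual sketch, not the product mechanism: it patches `socket.socket.connect` to refuse outbound connections to hosts outside a small allowlist, which is the general shape of blocking an attacker's command-and-control callback.

```python
import socket

# Hypothetical allowlist of destinations the application may contact.
ALLOWED_HOSTS = {"127.0.0.1"}

_original_connect = socket.socket.connect

def _guarded_connect(self, address):
    # Refuse connects to any host outside the allowlist; pass the rest
    # through to the real implementation.
    host = address[0]
    if host not in ALLOWED_HOSTS:
        raise ConnectionRefusedError(f"outbound connection to {host} blocked")
    return _original_connect(self, address)

socket.socket.connect = _guarded_connect

try:
    with socket.socket() as s:
        s.connect(("203.0.113.10", 4444))  # simulated C2 callback attempt
except ConnectionRefusedError as exc:
    print(exc)
```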