Researchers Uncover Flaws in Popular Open-Source Machine Learning Frameworks

Cybersecurity researchers have identified multiple vulnerabilities in popular open-source machine learning (ML) tools and frameworks, including MLflow, H2O, PyTorch, and MLeap. These flaws could allow attackers to execute malicious code.

Vulnerabilities in ML Clients

Unlike previous discoveries that focused on server-side vulnerabilities, these newly identified weaknesses reside in ML clients. These clients often have access to sensitive resources like ML Model Registries and MLOps Pipelines. By exploiting these vulnerabilities, attackers could gain access to sensitive information such as model registry credentials, enabling them to backdoor stored ML models or execute arbitrary code.
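To see why client-side model loading is so dangerous, consider a minimal sketch of the underlying attack class: Python's pickle format, used by many ML serialization paths, lets a serialized object nominate arbitrary code to run during deserialization. The class name and command below are hypothetical illustrations, not the exploit from the research.

    import os
    import pickle

    class MaliciousModel:
        # Hypothetical attacker-controlled class: __reduce__ tells pickle
        # how to rebuild the object, and an attacker can make it return
        # any callable plus its arguments.
        def __reduce__(self):
            return (os.system, ("echo 'code ran during model load'",))

    # The attacker serializes the payload and publishes it as a "model" file.
    payload = pickle.dumps(MaliciousModel())

    # The victim merely loads the model; the command executes as a side
    # effect of unpickling, before any model code is even called.
    pickle.loads(payload)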

Specific Vulnerabilities and Their Impact

  • CVE-2024-27132 (CVSS score: 7.2): An insufficient sanitization issue in MLflow that could lead to a cross-site scripting (XSS) attack when running an untrusted recipe in a Jupyter Notebook. This could ultimately result in client-side remote code execution (RCE).
  • CVE-2024-6960 (CVSS score: 7.5): An unsafe deserialization issue in H2O when importing an untrusted ML model, potentially leading to RCE.
  • A path traversal issue in PyTorch's TorchScript feature (no CVE identifier) that could lead to a denial-of-service (DoS) or code execution via arbitrary file overwrite, including the overwriting of critical system files or legitimate pickle files.
  • CVE-2023-5245 (CVSS score: 7.5): A path traversal (Zip Slip) issue in MLeap when loading a saved model in zipped format, resulting in arbitrary file overwrite and potential code execution (see the defensive extraction sketch after this list).
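
To illustrate the Zip Slip class of bug, here is a minimal defensive-extraction sketch: an archive entry whose name climbs out of the destination directory (for example, ../../etc/cron.d/job) is detected and rejected before anything is written. The function and check are illustrative, not MLeap's actual code.

    import os
    import zipfile

    def safe_extract(zip_path: str, dest_dir: str) -> None:
        # Resolve the destination once so symlinks and ".." segments
        # cannot fool the comparison below.
        dest_root = os.path.realpath(dest_dir)
        with zipfile.ZipFile(zip_path) as zf:
            for member in zf.infolist():
                target = os.path.realpath(os.path.join(dest_root, member.filename))
                # A Zip Slip entry resolves outside dest_root; refuse
                # to extract the archive at all in that case.
                if not target.startswith(dest_root + os.sep):
                    raise ValueError(f"blocked traversal entry: {member.filename}")
            zf.extractall(dest_root)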

Safe Model Formats Not a Guarantee of Safety

Researchers caution that even loading ML models from a seemingly safe format such as Safetensors is not risk-free: such models can still be manipulated to achieve arbitrary code execution, so no downloaded model should be loaded blindly. A format-agnostic precaution is sketched below.
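
One such precaution, shown in this hedged sketch, is to pin a known-good SHA-256 digest for each approved model artifact and verify it before any load; the helper function and digest handling here are illustrative, not drawn from the research.

    import hashlib

    def verify_sha256(path: str, expected_hex: str) -> None:
        # Stream the file in chunks so large model artifacts do not
        # need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        if h.hexdigest() != expected_hex:
            raise ValueError("model file does not match the pinned digest")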

Importance of Security Measures

The potential for innovation offered by AI and ML tools is undeniable. However, it is crucial to recognize the associated security risks. Organizations must be vigilant in identifying and mitigating these vulnerabilities to prevent potential damage. Implementing robust security measures and avoiding the loading of untrusted ML models are essential steps in safeguarding systems and sensitive data.
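
As one concrete example of such a measure, assuming a PyTorch workflow, recent PyTorch releases can restrict model deserialization to plain tensor data so that attacker-supplied callables are rejected rather than executed; the file name below is a placeholder.

    import torch

    # weights_only=True limits unpickling to tensors and an allowlist of
    # basic types, so a pickle payload that tries to smuggle in a callable
    # raises an error instead of running.
    state_dict = torch.load("downloaded_model.pt", map_location="cpu",
                            weights_only=True)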
