Security Vulnerabilities Popping Up on Hugging Face’s AI Platform


Hugging Face is emerging as a significant player in the rapidly expanding generative AI space, with its highly popular open collaboration platform being used by software developers to host machine learning models, datasets, and applications.

That popularity – Hugging Face was ranked the fourth-most popular generative AI service of 2023 by Cloudflare – combined with the accelerating commercial adoption of generative AI technologies in general, could make Hugging Face a target for bad actors who view it the same way they view open code repositories like npm and the Python Package Index (PyPI): as a way to launch supply-chain attacks by slipping their malicious code into tools that large numbers of organizations will use.

Given that, threat intelligence researchers are taking a closer look at Hugging Face, probing for undetected vulnerabilities that could threaten the platform and the developers who use it. Lasso Security reported in December that it had found almost 1,700 exposed API tokens across Hugging Face and GitHub, with the majority of them – about 1,500 – detected on the AI platform.

Malicious Models

More recently, two new reports detailed security issues around Hugging Face. A researcher with cybersecurity firm JFrog said an analysis found about 100 instances of malicious machine learning models on the platform that can open systems to such threats as object hijacking, reverse shells, and arbitrary code execution, enabling an attacker to run their own code on a targeted system.

About 95% of these malicious models were built with PyTorch, while the other 5% used TensorFlow Keras.

Hugging Face has a laundry list of security features, from two-factor authentication and user access tokens to single sign-on and malware and secrets scanning. Its Pickle scanning function is designed to protect against arbitrary code execution attacks hidden in machine learning models.
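Hugging Face has not published the internals of its scanner, but the general idea behind pickle scanning is to statically inspect a pickle's opcode stream without ever loading it. The following is a minimal sketch of that approach, assuming nothing about Hugging Face's actual implementation and using only Python's standard pickletools module; the opcode list and the sample payload are illustrative.

```python
import io
import pickle
import pickletools

# Opcodes that can import modules or invoke callables during unpickling.
RISKY_OPCODES = {"GLOBAL", "STACK_GLOBAL", "INST", "REDUCE"}

def scan_pickle(data: bytes) -> list[tuple[str, object]]:
    """Walk the opcode stream WITHOUT executing it and return any
    opcodes a scanner would check against an allow/deny list."""
    return [
        (op.name, arg)
        for op, arg, _pos in pickletools.genops(io.BytesIO(data))
        if op.name in RISKY_OPCODES
    ]

# A plain dict of weights produces no risky opcodes ...
print(scan_pickle(pickle.dumps({"layer1.weight": [0.1, 0.2]})))  # []

# ... while a pickle that calls a function on load gets flagged.
class Demo:
    def __reduce__(self):
        return (print, ("this would run at load time",))

print(scan_pickle(pickle.dumps(Demo())))  # includes STACK_GLOBAL and REDUCE
```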

Still, threats can get in. JFrog’s Security Research group developed a scanning tool that runs several times a day and examines every new model uploaded to Hugging Face, with the goal of detecting and neutralizing emerging threats on the platform. The scan focused on model files and found that PyTorch and TensorFlow Keras models posed the highest risk of executing malicious code.

PyTorch and TensorFlow the Highest Risks

Both are popular model types and both have had known code execution techniques published, according to David Cohen, senior security researcher with JFrog.

“It’s crucial to emphasize that when we refer to ‘malicious models,’ we specifically denote those housing real, harmful payloads,” Cohen wrote in the report. “It’s important to note that this count [of 100 malicious models] excludes false positives, ensuring a genuine representation of the distribution of efforts towards producing malicious models for PyTorch and Tensorflow on Hugging Face.”

JFrog’s scan detected a PyTorch model that was uploaded – and has since been deleted – by a new user with the handle “baller423.” The repository, “baller423/goober2,” included a PyTorch model file carrying a payload injected in a way that lets attackers insert arbitrary Python code into the deserialization process, which could lead to malicious activity when the model is loaded, he wrote.
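The report does not reproduce the payload, but the underlying mechanism is the well-documented pickle behavior that PyTorch's default checkpoint format inherits: an object's __reduce__ hook names a callable and arguments that the unpickler invokes while loading. A minimal, harmless sketch of that injection pattern follows; the class name and the echoed command are illustrative only, not the actual payload.

```python
import os
import pickle

class MaliciousStub:
    """Stand-in for the injected object. __reduce__ tells the unpickler
    which callable to invoke, and with what arguments, during loading."""
    def __reduce__(self):
        # A real payload would do something far worse; this just echoes.
        return (os.system, ("echo 'code ran during deserialization'",))

blob = pickle.dumps(MaliciousStub())

# Anything that unpickles this blob -- for example a permissive
# torch.load() on a pickle-based checkpoint -- runs the payload
# before any weights are returned to the caller.
pickle.loads(blob)
```

This is why pickle-based model files are best treated as code rather than data: loading one from an untrusted source is equivalent to running an untrusted script.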

Payloads embedded in models uploaded by researchers often perform a range of harmless actions.

“However, in the case of the model from ‘baller423/goober2’ repository, the payload differs significantly. Instead of benign actions, it initiates a reverse shell connection to an actual IP address, 210.117.212.93,” Cohen wrote. “This behavior is notably more intrusive and potentially malicious, as it establishes a direct connection to an external server, indicating a potential security threat rather than a mere demonstration of vulnerability.”

He added that “such actions highlight the importance of thorough scrutiny and security measures when dealing with machine learning models from untrusted sources.”

Soon after the model was removed, the JFrog researchers found other instances of the same payload pointing to different IP addresses, including an active instance under a model name – star23/baller13 – similar to the previous one.

Finding Holes in Safetensors

Earlier this month, researchers with HiddenLayer, a two-year-old startup whose products aim to protect AI models, outlined how threat actors can compromise Hugging Face’s Safetensors conversion service, which is designed to convert insecure machine learning models into safer versions.

In their report, HiddenLayer researchers Eoin Wickens and Kasimir Schulz wrote that “it’s possible to send malicious pull requests with attacker-controlled data from the Hugging Face service to any repository on the platform, as well as hijack any models that are submitted through the conversion service.”

They were able to do this by using a hijacked PyTorch model that the Safetensors conversion bot would convert, letting the attackers send change requests to any repository on the Hugging Face platform while impersonating the bot. In addition, they wrote that “it is possible to persist malicious code inside the service so that models are hijacked automatically as they are converted.”

The code for the conversion service is run on Hugging Face servers, but the system is containerized in Hugging Face Spaces, where any platform user can run code, Wickens and Schulz wrote.

They noted that some of the most popular serialization formats are vulnerable to arbitrary code execution, including Pickle, a format that is key to the PyTorch library. Hugging Face created Safetensors as a more secure serialization format than Pickle and a bulwark against supply-chain risks.

The conversion service was created to convert any PyTorch model on the platform into a Safetensors model through a pull request.
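Conceptually, the conversion loads the pickle-based weights and re-serializes them with the safetensors library, whose format stores only tensor data and metadata and has no mechanism for executing code on load. Below is a minimal sketch of that step under the assumption that the checkpoint is a plain state dict of tensors; the file names are illustrative, and the real bot submits the converted file as a pull request.

```python
import torch
from safetensors.torch import save_file, load_file

# Loading the PyTorch checkpoint is the dangerous step: it unpickles
# attacker-controllable data, which is why the real service runs inside
# a containerized Space rather than on a trusted host.
state_dict = torch.load("pytorch_model.bin", map_location="cpu")

# Re-serialize as safetensors: pure tensor data plus metadata, no code.
# (Real conversions also handle shared or non-contiguous tensors.)
save_file(state_dict, "model.safetensors")

# Consumers can then load the converted weights without unpickling anything.
weights = load_file("model.safetensors")
```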

To test the Safetensors environment and prove their theory, the researchers created a malicious PyTorch binary and deployed it on a local version of the conversion service rather than on the live service. They noted that the Safetensors conversion bot was designed to protect the more than 500,000 models uploaded to the Hugging Face platform, but said they proved it could be hijacked.

Wickens and Schulz wrote that they alerted Hugging Face to the issue before publishing the report.

