Hugging Face, the artificial intelligence (AI) and machine learning (ML) hub, is said to have hosted malicious ML models. A cybersecurity research firm discovered two such models containing code that can be used to package and distribute malware to those who download these files. As per the findings, threat actors are using a hard-to-detect method involving pickle file serialization to insert malicious software. The researchers claimed to have reported the malicious ML models, and Hugging Face has removed them from the platform.
Researchers Discover Malicious ML Models on Hugging Face
ReversingLabs, a cybersecurity research firm, discovered the malicious ML models and detailed the new exploit being used by threat actors on Hugging Face. Notably, a large number of developers and companies host open-source AI models on the platform that can be downloaded and used by others.
The firm discovered that the modus operandi of the exploit involves pickle file serialization. For the unaware, ML models are stored in a variety of data serialization formats so that they can be shared and reused. Pickle is a Python module used for serializing and deserializing ML model data. It is generally considered an unsafe data format, as arbitrary Python code can be executed during the deserialization process.
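To illustrate why this matters, here is a minimal, harmless sketch (not the actual payload the researchers found): any Python class can define a __reduce__ method that tells pickle to call an arbitrary function while the file is being loaded.

import os
import pickle

class Payload:
    # __reduce__ tells pickle how to reconstruct this object; here it
    # instructs the loader to call os.system with an attacker-chosen
    # command during deserialization.
    def __reduce__(self):
        return (os.system, ("echo this ran during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # the shell command executes as a side effect of loading

In other words, merely loading a pickled model file is enough to run whatever code its author embedded in it.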
On closed platforms, pickle files only handle limited data that comes from trusted sources. However, since Hugging Face is an open platform, these files are used broadly, allowing attackers to abuse the system to hide malware payloads.
During the investigation, the firm found two models on Hugging Face that contained malicious code. However, these ML models were said to have escaped the platform's security measures and were not flagged as unsafe. The researchers named the technique of inserting malware "nullifAI", as "it involves evading existing protections in the AI community for an ML model."
These models were stored in PyTorch format, which is essentially a compressed pickle file. The researchers found that the models were compressed using the 7z format, which prevented them from being loaded using PyTorch's torch.load() function. This compression also prevented Hugging Face's Picklescan tool from detecting the malware.
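For context, torch.load() is itself built on pickle, which is why PyTorch's documentation warns against loading untrusted checkpoints. Below is a minimal defensive sketch, assuming PyTorch 1.13 or later, where torch.load() accepts a weights_only argument; "model.bin" is a placeholder path, not a file from this campaign.

import torch

try:
    # weights_only=True restricts unpickling to plain tensor and
    # primitive types, so a pickle that tries to import modules or
    # call functions is rejected instead of executed.
    state_dict = torch.load("model.bin", weights_only=True)
except Exception as exc:
    # A file that is not a valid torch archive (for example, one
    # recompressed with 7z, as in this campaign) fails here rather
    # than being deserialized.
    print(f"Refusing to load model: {exc}")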
The researchers claimed that this exploit can be dangerous, as unsuspecting developers who download these models will unknowingly end up installing the malware on their devices. The cybersecurity firm reported the issue to the Hugging Face security team on January 20 and claimed that the models were removed in less than 24 hours. Additionally, the platform is said to have made changes to the Picklescan tool to better identify such threats in "broken" pickle files.
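The "broken" pickle detail is what makes the technique notable: pickle executes its opcodes sequentially as the stream is read, so a payload placed early in the file can run even if the file is truncated and the load ultimately fails. A harmless sketch of this behavior (using protocol 0 for a simple byte layout):

import os
import pickle

class Payload:
    def __reduce__(self):
        return (os.system, ("echo payload already ran",))

blob = pickle.dumps(Payload(), protocol=0)
broken = blob[:-1]  # drop the trailing STOP opcode to "break" the file

try:
    pickle.loads(broken)
except Exception as exc:
    print(f"load failed: {exc}")
# The shell command has already executed by this point, because the
# malicious opcodes sit before the point where the stream breaks.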