JFrog, a leading provider of DevOps and software supply chain tools, has released a security report warning that machine-learning models hosted on the popular platform Hugging Face are being used to distribute malware and backdoors.
Distributing software with hidden malware through public package registries and code-sharing platforms, a form of supply chain attack, has been a persistent and escalating problem. Examples include PyPI, npm, GitHub, and most recently Hugging Face, a site best known for hosting and sharing AI models.
Modern AI models are distributed in a variety of formats, from text-based ones such as JSON and XML to binary ones such as Python's pickle files. Some of these formats, pickle in particular, can harbor code that executes the moment a downloaded model is loaded (deserialized), without any further action from the user.
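To illustrate the mechanism, here is a minimal sketch of how a pickle file can carry executable code. The `EvilPayload` class name is hypothetical; the point is that pickle records a callable via `__reduce__` during serialization, and `pickle.loads` then invokes that callable. A harmless `eval` stands in for what, in a real attack, could be `os.system` or a reverse shell.

```python
import pickle

class EvilPayload:
    """Hypothetical malicious object embedded in a model file."""
    def __reduce__(self):
        # pickle stores (callable, args); the LOADER calls eval("6*7").
        # A real attacker would use os.system, subprocess, etc.
        return (eval, ("6*7",))

blob = pickle.dumps(EvilPayload())   # what gets uploaded/shared
result = pickle.loads(blob)          # merely loading runs the code
```

Here `result` is `42` even though the victim never called `eval` themselves; this is why loading an untrusted pickle-based model file is equivalent to running untrusted code.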
Acknowledging this risk, Hugging Face has implemented multiple security measures, including malware scanning of hosted files and the development of safetensors, a file format designed to store model weights as plain data with no mechanism for code execution.
However, JFrog's audit shows these measures have limits: the company identified approximately 100 models on the platform at risk of containing malware or malicious code. The affected files are mostly PyTorch and TensorFlow models, the two most widely used frameworks in the AI industry.
[Table: various formats of AI model files. Source: JFrog]
TLDR: JFrog warns of the security risks posed by AI models containing malware or malicious code distributed through platforms like Hugging Face, amid a growing trend of supply chain attacks. Despite mitigation efforts, vulnerabilities persist in the file formats used by popular AI frameworks such as PyTorch and TensorFlow.