1,184 Malicious Skills on ClawHub: The OpenClaw Supply Chain Attack Explained


The OpenClaw ecosystem has a supply chain problem, and it is far worse than most users realize. Security researchers have now confirmed 1,184 malicious skills on ClawHub, the community registry where OpenClaw users discover and install agent capabilities. The campaign, dubbed ClawHavoc, represents one of the largest supply chain attacks ever targeting an AI agent platform.

If you are running OpenClaw with community-installed skills, your credentials, API keys, and private data may already be compromised. Here is everything we know about how it happened, what the attackers were after, and what you need to do right now.

What researchers found: the ClawHavoc campaign

In early February 2026, researchers at Snyk published their ToxicSkills study after months of analyzing the ClawHub registry. The findings were alarming. Out of roughly 14,000 publicly listed skills, 1,184 contained confirmed malicious payloads—an infection rate of nearly 8.5 percent.

The research was corroborated by reporting from Cyberpress and SC Media, both of which independently verified samples from the dataset. What made ClawHavoc particularly dangerous was not just its scale but its sophistication. Many of the malicious skills were functional—they actually performed their advertised tasks—while silently exfiltrating data in the background.

The attack was not the work of a single actor. Researchers identified at least seven distinct threat clusters operating on ClawHub simultaneously, ranging from opportunistic cryptominer operators to what appear to be organized credential harvesting campaigns. Some malicious skills had been live on the registry for months, accumulating thousands of installs before detection.

How supply chain attacks work in OpenClaw

To understand why this happened, you need to understand how OpenClaw skills operate. Unlike traditional software packages that run in sandboxed environments, OpenClaw skills are granted arbitrary code execution by design. When an agent invokes a skill, that skill can read the filesystem, make network requests, access environment variables, and interact with any service the host machine can reach.

This is not a bug. It is the architecture. Skills are meant to be powerful—that is what makes OpenClaw useful. A skill that manages your Kubernetes cluster needs kubectl access. A skill that sends emails needs SMTP credentials. The problem is that the same access model that makes legitimate skills powerful makes malicious ones devastating.
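To make the access model concrete, here is a minimal Python sketch of what any unsandboxed skill code can enumerate on its host. The paths and function name are illustrative choices for this article, not part of OpenClaw's API; the point is that none of this requires special privileges.

```python
import os
from pathlib import Path

def what_a_skill_can_see() -> dict:
    """Illustrate the access an unsandboxed skill inherits from its host.

    Nothing here is privileged: any code running inside the agent's
    process can do the same, which is why a malicious skill is so costly.
    """
    home = Path.home()
    return {
        # Every environment variable, including API keys and tokens.
        "env_vars": sorted(os.environ),
        # Well-known credential stores, readable if they exist.
        "credential_files": [
            str(p)
            for p in (home / ".aws/credentials", home / ".kube/config", Path(".env"))
            if p.exists()
        ],
    }

snapshot = what_a_skill_can_see()
print(f"{len(snapshot['env_vars'])} env vars visible, "
      f"{len(snapshot['credential_files'])} credential files readable")
```

A skill that can do this legitimately (to find your kubeconfig, say) can just as easily post the same data to an attacker's server.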

The installation flow compounds the risk. A typical user finds a skill on ClawHub, reads the description and maybe skims the README, then adds it to their OpenClaw configuration with a single command. There is no dependency review, no permissions prompt, no sandbox. The skill immediately has the same access as every other skill in the agent’s runtime.

This is remarkably similar to the early days of npm and PyPI, before those ecosystems implemented malware scanning and package signing. For a broader look at how these issues affect the ecosystem, see our security overview. The difference is that OpenClaw skills run with broader system access than a typical software library, making the blast radius of a single compromised skill significantly larger.

The four categories of malicious payloads

Snyk’s ToxicSkills study categorized the 1,184 malicious skills into four primary payload types:

1. Credential theft (41% of samples)

The most common payload type. These skills harvested API keys, database connection strings, cloud provider credentials, and authentication tokens from environment variables and configuration files. Many targeted .env files, ~/.aws/credentials, ~/.kube/config, and similar well-known credential stores. Stolen credentials were exfiltrated to attacker-controlled servers, typically over HTTPS to blend with normal traffic.
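A defensive counterpart is cheap to build. The sketch below greps a skill's source for references to well-known credential stores; the pattern list is an illustrative starting point of my own, not Snyk's detection ruleset, and a real scanner would need far more rules.

```python
import re

# Patterns inspired by the credential stores this campaign targeted.
# Illustrative only: a production scanner needs a much larger ruleset.
SUSPICIOUS_PATTERNS = [
    r"\.aws/credentials",
    r"\.kube/config",
    r"os\.environ",
    r"authorized_keys",
    r"\.env\b",
]

def flag_credential_access(source: str) -> list[str]:
    """Return the suspicious patterns a skill's source code matches."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, source)]

sample = 'keys = open(os.path.expanduser("~/.aws/credentials")).read()'
print(flag_credential_access(sample))  # flags the .aws/credentials pattern
```

Even a crude check like this would have caught many of the unobfuscated payloads in the dataset.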

2. Data exfiltration (29% of samples)

These skills went beyond credentials to exfiltrate broader data: chat histories, agent conversation logs, documents the agent had access to, and in some cases entire directory trees. Several skills specifically targeted files matching patterns like *.pem, *.key, id_rsa, and *.sqlite. The implication is clear: attackers were after both immediate access (credentials) and long-term intelligence (private data).
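Those same file patterns make a useful triage checklist for defenders. A minimal sketch, using the patterns reported in the study to flag sensitive filenames:

```python
import fnmatch

# File patterns the study reports exfiltration payloads searched for.
SENSITIVE_PATTERNS = ["*.pem", "*.key", "id_rsa", "*.sqlite"]

def sensitive_matches(filenames: list[str]) -> list[str]:
    """Return the filenames matching any sensitive pattern."""
    return [
        name for name in filenames
        if any(fnmatch.fnmatch(name, pat) for pat in SENSITIVE_PATTERNS)
    ]

inventory = ["server.pem", "notes.txt", "id_rsa", "app.sqlite", "main.py"]
print(sensitive_matches(inventory))  # → ['server.pem', 'id_rsa', 'app.sqlite']
```

Running a check like this over the directories your agent can reach tells you what an exfiltration payload would have found.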

3. Backdoors and reverse shells (19% of samples)

A significant subset installed persistent backdoors on the host system. Some dropped reverse shell binaries. Others modified cron jobs or systemd services to maintain access even if the skill itself was later removed. Researchers found several skills that installed SSH keys into ~/.ssh/authorized_keys, giving attackers direct remote access to the host—a particularly dangerous payload for the 40,000+ exposed OpenClaw instances already running without proper network controls.
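Checking for these persistence mechanisms is straightforward to script. The sketch below flags recently modified files in the locations named above; the 30-day window and the path list are illustrative defaults, not part of any official tooling.

```python
import datetime
from pathlib import Path

# Locations the ClawHavoc payloads reportedly used for persistence.
PERSISTENCE_PATHS = [
    Path("/etc/cron.d"),
    Path("/etc/systemd/system"),
    Path.home() / ".ssh" / "authorized_keys",
]

def recently_modified(path: Path, days: int = 30) -> list[Path]:
    """List files at or directly under `path` modified in the last `days` days."""
    cutoff = datetime.datetime.now().timestamp() - days * 86400
    if path.is_file():
        candidates = [path]
    elif path.is_dir():
        candidates = list(path.glob("*"))
    else:
        candidates = []
    return [p for p in candidates if p.is_file() and p.stat().st_mtime > cutoff]

for root in PERSISTENCE_PATHS:
    for hit in recently_modified(root):
        print(f"review: {hit}")
```

A hit is not proof of compromise, but any entry you did not create yourself deserves a close look.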

4. Cryptominers (11% of samples)

The least sophisticated but most immediately noticeable payload type. These skills downloaded and executed cryptocurrency mining binaries, typically XMRig variants configured to mine Monero. While less damaging than credential theft, cryptominers consumed significant compute resources and often served as an indicator that a system had been compromised by other payloads as well.

Why ClawHub’s review process failed

The blunt answer: there was no meaningful review process.

ClawHub operates as an open registry. Anyone can publish a skill by pushing to a repository and registering it. There is no code review, no automated malware scanning, no signature verification, and no maintainer identity validation. The only barrier to entry is having a GitHub account.

This is a deliberate design choice rooted in OpenClaw’s open-source ethos. The project prioritized permissionless contribution and rapid ecosystem growth over gatekeeping. For the first several months, that approach worked—the community was small enough that bad actors had limited incentive to target it.

That changed when OpenClaw crossed 180,000 GitHub stars and deployment numbers surged into the hundreds of thousands. Suddenly, a single malicious skill published on ClawHub could reach tens of thousands of hosts within days. The registry had become a high-value target with no defenses.

Making matters worse, several of the ClawHavoc skills used typosquatting and namespace confusion tactics familiar from npm and PyPI attacks. Skills with names like offiical-gmail-skill (note the misspelling) or openclaww-web-browser capitalized on users who typed quickly and did not scrutinize publisher identities. Others simply cloned popular legitimate skills, injected a few lines of malicious code, and republished under slightly different names.

Snyk’s ToxicSkills research: key takeaways

The ToxicSkills study went beyond cataloging malware. Several findings deserve particular attention:

  • Time to detection was 47 days on average. The longest-lived malicious skill had been active for 112 days before being flagged. During that window, it accumulated over 9,000 installs.
  • 68% of malicious skills had functional cover stories. They performed their advertised task while running payloads in the background, making them harder to detect through casual usage.
  • Obfuscation was minimal. Most payloads used straightforward code without heavy obfuscation, suggesting the attackers knew that no one was looking. Basic static analysis would have caught the majority of samples.
  • Exfiltration endpoints were clustered. Researchers identified 23 unique command-and-control domains across the 1,184 samples, with 4 domains accounting for over 60% of callbacks. This suggests a smaller number of organized groups rather than thousands of independent actors.
  • The attack surface is growing. Snyk noted a sharp increase in new skill publications since December 2025, with the proportion of suspicious submissions rising faster than legitimate ones.

What you should do right now

If you are running a self-hosted OpenClaw instance with community-sourced skills, take these steps immediately:

  1. Audit every installed skill. List all skills in your OpenClaw configuration and verify each one against its source repository. Check the publisher’s identity, the repository’s creation date, and whether the skill’s name matches the canonical version.
  2. Remove anything you cannot verify. If a skill comes from an unknown publisher, has a suspiciously recent repository, or you cannot inspect its source code, remove it. The risk is not worth the convenience.
  3. Rotate credentials. If you had any unvetted skills installed, assume your environment variables and credential files were exfiltrated. Rotate API keys, cloud credentials, database passwords, and SSH keys.
  4. Check for persistence mechanisms. Look for unexpected cron jobs, systemd services, SSH authorized keys, and unfamiliar processes. Malicious skills may have installed backdoors that persist after the skill is removed.
  5. Restrict network egress. Configure firewall rules so your OpenClaw instance can only reach the specific services it needs. Block all other outbound traffic to prevent data exfiltration.
  6. Consider a managed platform with skills vetting. Self-hosting means self-defending. If you do not have the security resources to continuously audit your skill supply chain, a managed hosting platform with built-in vetting eliminates this entire attack surface. See our pricing page for plan details.
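Several of these steps can be partially automated. As one example for step 5, the sketch below checks observed outbound destinations against an egress allowlist; the host names and the `host:port` log format are assumptions for illustration, not OpenClaw output.

```python
# Hypothetical allowlist; replace with the services your instance needs.
ALLOWED_HOSTS = {"api.anthropic.com", "api.openai.com", "github.com"}

def flag_egress(conn_log: list[str]) -> list[str]:
    """Return destination hosts not on the allowlist.

    Entries are assumed to be 'host:port' strings, e.g. from a proxy
    or netflow export.
    """
    return sorted({
        host
        for host, _, _ in (line.partition(":") for line in conn_log)
        if host not in ALLOWED_HOSTS
    })

log = ["api.openai.com:443", "d4ta-sink.example:443", "github.com:443"]
print(flag_egress(log))  # → ['d4ta-sink.example']
```

Reviewing flagged hosts after the fact is no substitute for blocking them at the firewall, but it tells you quickly whether anything has been phoning home.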

How KiwiClaw prevents supply chain attacks

We built KiwiClaw with the assumption that the skills ecosystem would eventually be targeted. The ClawHavoc campaign confirmed what we considered inevitable. Our skills vetting pipeline is designed to make this class of attack impossible on our platform.

Every skill available on KiwiClaw goes through a multi-stage review process before it reaches any customer instance:

  • Automated static analysis. All skill code is scanned for known malicious patterns, suspicious network calls, filesystem access outside expected paths, and obfuscated code segments. This catches the low-hanging fruit that comprised the majority of ClawHavoc payloads.
  • Dynamic sandbox execution. Skills are executed in an isolated environment with monitored network traffic, filesystem access logging, and process tracking. Any unexpected behavior—outbound connections to unknown hosts, access to credential files, process spawning—triggers a block.
  • Maintainer identity verification. We verify the identity and reputation of skill publishers. Anonymous or newly created accounts cannot publish skills to our registry without additional review.
  • Continuous monitoring. Skills are not just scanned once at submission. We re-analyze the entire catalog on a rolling basis and monitor for behavioral changes in skill updates. A skill that passes initial review but introduces malicious code in a later version gets caught.
  • Global blocklist. Known malicious skills, domains, and publisher accounts are maintained in a shared blocklist that applies across all KiwiClaw instances. When one customer’s scan detects a threat, every customer is protected within minutes.
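As an illustration of the continuous-monitoring idea (a sketch of the concept, not KiwiClaw's actual implementation), a registry can fingerprint each vetted version and force re-review whenever a release's code differs:

```python
import hashlib

def fingerprint(source: str) -> str:
    """Content hash of a skill's source, recorded at vetting time."""
    return hashlib.sha256(source.encode()).hexdigest()

# Hashes recorded when each skill version passed review (illustrative).
vetted = {"web-browser": fingerprint("def browse(url): ...")}

def needs_rescan(name: str, new_source: str) -> bool:
    """Any release whose code differs from the vetted hash is re-reviewed."""
    return vetted.get(name) != fingerprint(new_source)

print(needs_rescan("web-browser", "def browse(url): ..."))         # False: unchanged
print(needs_rescan("web-browser", "def browse(url): exfil(url)"))  # True: code changed
```

This closes the update loophole that ClawHavoc exploited: passing review once no longer grants a permanent free pass for future versions.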

The result: zero malicious skills have reached production on KiwiClaw. We reject an average of 12 suspicious skill submissions per week, and every rejection is logged, analyzed, and fed back into our detection models.


The ClawHavoc campaign is a wake-up call for the OpenClaw community. An 8.5% malware infection rate on the primary skill registry is not a minor incident; it is a systemic failure that puts every self-hosted instance at risk. The convenience of one-command skill installation is not worth the cost if one in twelve skills is actively stealing your data.

The OpenClaw ecosystem needs the same supply chain security infrastructure that took the npm and PyPI communities years to build. Until that exists, the safest path is to run your agent on a platform that has already solved this problem.

Amogh Reddy
Founder, KiwiClaw · @AireVasant

Ready for secure OpenClaw hosting?

No infrastructure, no setup, no risks. Your agent is live in 60 seconds.