ClawHub vs KiwiClaw Marketplace: Which OpenClaw Skills Are Safe?
The OpenClaw skills ecosystem exploded in 2026. In January alone, over 2,000 new skills were published to ClawHub, the primary open registry. By February, the total catalog exceeded 8,000 skills covering everything from browser automation to database management to calendar integrations.
Then came the security reports. Researchers discovered 1,184 malicious skills on ClawHub — part of a coordinated campaign called ClawHavoc. These skills stole API keys, exfiltrated files, deployed cryptominers, and opened persistent backdoors. Many had been live for weeks before detection. The scale of the attack raised a fundamental question: can you trust any skill from an open registry?
This post compares the two approaches to skill distribution — ClawHub's open model and KiwiClaw's curated marketplace — and lays out what each one offers and what each one risks.
The State of OpenClaw Skills in 2026
OpenClaw's power comes from its skill system. A skill is a package that extends what an agent can do: connect to Slack, browse the web, manage files, interact with APIs. Skills execute arbitrary code with the full permissions of the host agent. There is no permission model, no capability system, no sandbox at the skill level. A skill that says it manages your calendar has the same system access as a skill that exfiltrates your SSH keys.
This architecture made OpenClaw extraordinarily flexible. It also made the skills ecosystem a target. When every skill runs with root-equivalent access and there are thousands of them published by anonymous authors, the attack surface is enormous.
ClawHub: Open Publishing, Community Flagging
ClawHub is the default skills registry for OpenClaw. Anyone can publish a skill by creating a repository with a SKILL.md manifest and registering it with ClawHub. There is no mandatory review process. Skills go live immediately upon submission.
ClawHub's security model relies on:
- Community flagging. Users can report suspicious skills. Flagged skills are reviewed by volunteer maintainers.
- Open source. Skill code is publicly visible, so anyone can audit it before installing.
- Install counts. Popular skills have more eyeballs and are more likely to have been reviewed.
- Author reputation. Established authors with multiple well-known skills carry implicit trust.
This model has real advantages. It is fast — new skills are available immediately. It is permissionless — anyone can contribute. And it scales well — the community does the review work, not a central authority.
The disadvantage is that it does not work against sophisticated attackers. The ClawHavoc campaign demonstrated this clearly.
The ClawHavoc Campaign: What Actually Happened
In January 2026, security researchers disclosed ClawHavoc — a supply chain attack that planted 1,184 malicious skills on ClawHub over a period of several weeks. The attack was sophisticated:
- Legitimate-sounding names. Skills were named things like
slack-enhanced-bridge,github-issue-helper, andcalendar-sync-pro— names that look like real utilities. - Real functionality included. The malicious skills actually worked. They did what they claimed to do. The malicious behavior was bundled alongside legitimate code, making source review harder.
- Delayed execution. Payloads activated only after a waiting period, or under specific conditions (certain environment variables present, certain time of day). This evaded anyone who tested the skill briefly before deploying it.
- Minimal obfuscation. Instead of heavy encryption or packing, the attackers used subtle techniques — dynamically constructed URLs, environment variable exfiltration disguised as logging, network calls hidden in error handlers.
Community flagging caught none of the 1,184 skills before the coordinated disclosure. The skills had accumulated real installs and positive reviews. The open model failed at the exact scenario it was most needed for: a well-resourced, patient attacker targeting the ecosystem at scale.
"Just Be Careful" Is Not a Security Strategy
The common response to supply chain concerns is to advise caution: read the code before installing, only use popular skills, check the author's reputation. This advice is well-intentioned and insufficient.
The average OpenClaw user installs 8-12 skills. Each skill may have dependencies. Auditing the full dependency tree of every skill you install would take hours of expert security review — time that most users do not have and most developers cannot justify.
More importantly, the ClawHavoc skills were designed to pass human review. The malicious code was not in obvious places. It was not flagged by standard linters. It looked like normal application code. Telling users to "just read the code" when professional security researchers needed weeks of investigation to identify the threat is not a viable strategy.
KiwiClaw's Approach: 6-Pass Vetting Scanner
The KiwiClaw marketplace takes a fundamentally different approach. No skill reaches your agent without passing through an automated vetting pipeline followed by human review when needed. Here is what each pass checks:
Pass 1: Manifest Validation
Every skill must have a valid SKILL.md manifest with required fields: name, version, description, author, and declared permissions. Skills with missing or malformed manifests are rejected immediately. This catches hastily assembled packages and ensures basic metadata is present for every skill in the catalog.
Pass 2: Static Code Analysis (AST Scanning)
The scanner parses the skill's source code into an Abstract Syntax Tree and checks for dangerous patterns: eval, exec, dynamic require/import, encoded payloads, direct network calls not declared in the manifest, filesystem access outside the skill's working directory, and known malicious code signatures from the ClawHavoc indicator set.
Pass 3: Dependency Audit
Every dependency listed in the skill's package.json is checked against known vulnerability databases (NVD, GitHub Advisory, npm audit). Skills with dependencies that have known critical or high-severity vulnerabilities are flagged. Transitive dependencies are included — if your skill depends on a clean package that depends on a compromised one, we catch it.
Pass 4: Network Permission Review
The scanner extracts every domain, IP address, and URL referenced in the skill's code and compares it against the skill's declared integrations. A skill that claims to integrate with Slack should talk to slack.com and the Slack API. If it also contacts analytics.suspicious-domain.io, that discrepancy is flagged for review.
Pass 5: Sandbox Execution Test
The skill is executed in an isolated environment with full system call monitoring. The sandbox tracks network behavior, filesystem access, process spawning, resource consumption, and timing-based triggers. This is the pass that catches ClawHavoc-style delayed payloads — the sandbox runs skills under varied conditions and simulated time windows specifically to trigger deferred malicious behavior.
Pass 6: Human Review for Flagged Packages
If any of the automated passes produce a flag, the skill goes to a human reviewer. The reviewer examines the flagged code paths, assesses intent, and makes a final determination. Legitimate skills that need network access or file I/O are cleared with annotations explaining their permissions. Suspicious skills are rejected with detailed feedback to the author.
Comparison Table
| Feature | ClawHub | KiwiClaw Marketplace |
|---|---|---|
| Publishing speed | Instant | 5 min - 48 hours (automated + human review) |
| Publishing cost | Free | Free |
| Catalog size | 8,000+ skills | 560+ vetted skills |
| Security scanning | None (community flagging) | 6-pass automated + human review |
| Malicious skill detection | Post-incident (after community reports) | Pre-publication (before skill reaches users) |
| Revocation | Manual (maintainer action required) | Instant (automated across all instances) |
| Continuous monitoring | No | Yes (re-scan on updates and new threat intel) |
| Permission transparency | Author-declared only | Scanner-verified permissions with annotations |
| Org-level governance | No | Yes (allowlists, blocklists, admin approval) |
Can They Coexist?
Yes, and they should. ClawHub and the KiwiClaw marketplace serve different purposes.
ClawHub is for discovery and experimentation. If you are a developer testing a new integration, prototyping a workflow, or evaluating skills before committing to them, ClawHub's open model and massive catalog are valuable. The breadth of available skills is unmatched.
KiwiClaw is for production. When a skill is going to run on an agent that has access to real data, real credentials, and real systems, the skill should be vetted. The smaller catalog is a feature, not a limitation — it means every skill in it has been checked.
Many KiwiClaw users discover skills on ClawHub and then request them for the KiwiClaw marketplace. We actively encourage this. The more skills that go through vetting, the safer the entire ecosystem becomes.
How to Migrate Your Favorite Skills from ClawHub
If you use ClawHub skills that are not yet in the KiwiClaw marketplace, you can request them:
- Go to the KiwiClaw Skills Hub
- Click "Request a skill" and paste the ClawHub URL
- The skill enters the 6-pass vetting pipeline
- If it passes, it appears in the marketplace (typically within 24-48 hours)
- If it fails, you get a report explaining what was flagged
You can also request skills through the KiwiClaw MCP server — your agent can search ClawHub, find a skill, and submit a vetting request without leaving your workflow.
Frequently Asked Questions
Is ClawHub safe to use?
ClawHub is an open registry where anyone can publish skills. While most skills are legitimate, the ClawHavoc attack demonstrated that malicious skills can go undetected for weeks. Use ClawHub for discovery and testing, but vet skills carefully before production use — or use a vetted marketplace like KiwiClaw's.
Can I use ClawHub skills on KiwiClaw?
Yes. You can request any ClawHub skill to be added to the KiwiClaw marketplace. The skill goes through our 6-pass vetting pipeline before it becomes available. Most skills pass within 24-48 hours.
How long does KiwiClaw's vetting take?
Automated vetting (passes 1-5) takes 5-15 minutes. If a skill is flagged for human review (pass 6), the total time is typically 24-48 hours. Skills that pass all automated checks without flags are available almost immediately.
Does KiwiClaw block all malicious skills?
No security system is perfect. KiwiClaw's 6-pass pipeline catches known attack patterns, behavioral anomalies, and suspicious code. We also continuously monitor approved skills and can revoke them instantly if new threats emerge. The goal is defense in depth, not a false promise of perfection.
Browse the marketplace. See the full catalog of vetted skills at the KiwiClaw Skills Hub. Every skill has a security report showing exactly what the vetting pipeline found. For more on how the vetting works, read our deep dive on the skills vetting pipeline.
Related Reading
- OpenClaw Skills Vetting: How KiwiClaw Blocks Malicious Packages
- 1,184 Malicious Skills on ClawHub Explained
- How to Publish and Monetize an OpenClaw Skill
- KiwiClaw Skills Hub