The State of AI Skills: Mid-2026
Mid-year review of the AI skills ecosystem. Registry growth, platform wars, enterprise adoption, and the emerging patterns that will define the second half of 2026.
Six months ago, the AI skills ecosystem was promising but immature. Registries were growing. Enterprises were experimenting. Standards were debated. The question was whether AI skills would follow the trajectory of npm (explosive growth, universal adoption) or the trajectory of browser extensions (useful but niche).
At the midpoint of 2026, the answer is clear: AI skills are on the npm trajectory. The numbers tell the story, but the patterns behind the numbers tell a richer one.
Key Takeaways
- 25,000+ skills across all registries, up from approximately 18,000 at the start of the year
- Enterprise adoption doubled, with 40% of Fortune 500 companies deploying custom AI skills internally
- Three dominant platforms (OpenClaw, Claude Code, Cursor) account for 85% of skill installs
- Quality, not quantity, emerged as the primary ecosystem challenge as skill discovery became harder
- Monetization models are being tested, with early results showing subscription beats per-skill pricing
The Numbers
Registry Growth
Total skills across all major registries:
| Registry | Skills (Jan 2026) | Skills (Jul 2026) | Growth |
|---|---|---|---|
| OpenClaw/ClawHub | 13,700 | 18,500 | +35% |
| Claude Code Skills | 2,100 | 4,200 | +100% |
| Cursor Extensions | 1,800 | 3,100 | +72% |
| Independent | ~500 | ~1,200 | +140% |
The growth rate is accelerating, not decelerating. The second quarter of 2026 saw more new skills published than the entire first half of 2025.
Install Patterns
The median skill has 47 installs; the mean is 2,300. This extreme skew means a small number of popular skills accounts for the vast majority of usage. The top 100 skills (0.4% of the total) account for approximately 60% of all installs.
This distribution is typical of software ecosystems and suggests the market is past the "everything is new and interesting" phase into the "quality floats to the top" phase.
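The shape of this distribution is easy to reproduce. The sketch below simulates install counts with a heavy-tailed (Pareto) draw; the scale and tail parameters are assumptions chosen to roughly mimic the skew described above, not actual registry data.

```python
import random

random.seed(42)

# Illustrative simulation only: install counts for 25,000 hypothetical skills
# drawn from a Pareto distribution (parameters are assumptions, not registry data).
n_skills = 25_000
installs = sorted(
    (int(25 * random.paretovariate(1.1)) for _ in range(n_skills)),
    reverse=True,
)

mean = sum(installs) / n_skills
median = installs[n_skills // 2]           # middle element of the sorted list
top_100_share = sum(installs[:100]) / sum(installs)

print(f"mean={mean:.0f}  median={median}  top-100 share={top_100_share:.0%}")
```

Heavy-tailed draws like this routinely produce a mean tens of times larger than the median, with the top fraction of a percent of skills capturing most of the total, which is the pattern the registry numbers show.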
Category Distribution
Code quality and review skills remain the largest category at 28% of all skills. Development workflow skills are second at 22%. The fastest-growing categories are:
- Security analysis: +180% year-over-year
- Documentation generation: +150% year-over-year
- Data analysis: +120% year-over-year
The growth in security skills aligns with enterprise adoption, where security compliance is often the gate for AI tool deployment. See Security Research Skills for Claude for details on this category.
Three Patterns Defining 2026
Pattern 1: The Skill Stack
Individual skills are powerful. Composed skills are transformative. The emerging pattern is the "skill stack": a curated set of skills that work together to handle a complete workflow.
A development skill stack might include:
- Code review skill (quality analysis)
- Security scanning skill (vulnerability detection)
- Documentation generation skill (docs from code)
- Release automation skill (changelog and versioning)
- Test generation skill (test cases from implementation)
Each skill handles one concern. Together, they cover the entire development lifecycle from code to release. The stack is more than the sum of its parts because each skill's output feeds the next skill's input.
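One way to picture the composition is a simple pipeline where each skill reads and extends a shared context. The sketch below is purely illustrative: the skill names, payloads, and calling convention are assumptions, not any platform's real API.

```python
from typing import Callable

# A skill is modeled as a function that takes the previous stage's output
# and returns an enriched context. Entirely hypothetical; no real platform
# API is assumed.
Skill = Callable[[dict], dict]

def code_review(ctx: dict) -> dict:
    ctx["review"] = {"issues": ["unused import in app.py"]}
    return ctx

def security_scan(ctx: dict) -> dict:
    # A later skill can read earlier findings from the shared context.
    ctx["vulnerabilities"] = []
    return ctx

def generate_docs(ctx: dict) -> dict:
    ctx["docs"] = f"{len(ctx['review']['issues'])} open review issue(s)"
    return ctx

def run_stack(stack: list[Skill], ctx: dict) -> dict:
    for skill in stack:
        ctx = skill(ctx)  # each skill's output feeds the next skill's input
    return ctx

result = run_stack([code_review, security_scan, generate_docs],
                   {"repo": "example/app"})
print(result["docs"])
```

The value of the stack comes from this chaining: the documentation skill can summarize the reviewer's findings only because the review skill ran first.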
Teams are increasingly publishing and sharing skill stacks rather than individual skills. A "Python Backend Stack" or a "React Frontend Stack" provides a complete toolkit rather than requiring users to discover and compose skills themselves.
This composability trend validates the AI Skills Market 2026 predictions about skill ecosystems becoming platforms in their own right.
Pattern 2: Enterprise Skill Governance
Enterprise adoption brought enterprise requirements. The most significant is governance: who can install skills, which skills are approved, how skills are audited, and how skill access is controlled.
Large organizations now maintain internal skill registries that act as curated subsets of public registries. A skill enters the internal registry only after security review, compliance evaluation, and performance testing. Developers install from the internal registry, not directly from public ones.
This governance layer creates a new role: the "skill administrator" who evaluates, approves, and maintains the organization's skill portfolio. It's analogous to the role of DevOps engineers who manage toolchains, but for AI capability management.
The governance trend has implications for skill creators. Skills that include compliance documentation, security audit results, and performance benchmarks get approved faster. Skills without this metadata sit in approval queues indefinitely.
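An internal registry's intake gate can be as simple as a required-fields check on the skill's manifest. The sketch below is a hypothetical illustration; the field names (`security_audit`, `compliance`, `benchmarks`) are assumptions, not a real registry schema.

```python
# Hypothetical internal-registry intake check. Field names are illustrative
# assumptions, not a published schema.
REQUIRED_GOVERNANCE_FIELDS = {"security_audit", "compliance", "benchmarks"}

manifest = {
    "name": "code-review-pro",
    "version": "2.1.0",
    "security_audit": {"vendor": "example-auditor", "date": "2026-05-01"},
    "compliance": ["SOC2"],
    "benchmarks": {"median_latency_ms": 420},
}

def approval_status(manifest: dict) -> str:
    """Skills missing governance metadata sit in the queue; complete ones proceed."""
    missing = REQUIRED_GOVERNANCE_FIELDS - manifest.keys()
    if missing:
        return "queued: missing " + ", ".join(sorted(missing))
    return "ready for review"

print(approval_status(manifest))
```

A skill shipped without audit results or benchmarks never clears the first gate, which is exactly why metadata-complete skills move through approval faster.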
The enterprise skills landscape article covered this trend's early signals. Six months later, it's standard practice at large organizations.
Pattern 3: Creator Monetization
The first wave of skill monetization experiments has produced data. Several models have been tested:
Per-skill purchase. Pay once, use forever. Results: low conversion rates (under 2%), high support burden (buyers expect ongoing maintenance), and pricing pressure (most users expect skills to be free).
Subscription bundles. Monthly fee for access to a curated skill library. Results: higher conversion (5-8%), more predictable revenue, and lower support burden (the bundle maintainer handles updates).
Enterprise licensing. Organization-wide licenses for skill stacks. Results: the highest revenue per creator but the longest sales cycle and highest support expectations.
Freemium. Free basic skill with paid premium features. Results: the best user acquisition but requires skills with clear tiers of capability.
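The conversion-rate gap alone explains much of the subscription advantage. The back-of-the-envelope sketch below uses the conversion rates cited above; the audience size and price points are made-up assumptions for illustration.

```python
# Back-of-the-envelope model comparison. Conversion rates come from the
# figures cited in the article; audience size and prices are assumptions.
audience = 10_000            # assumed monthly visitors to a creator's skill pages

per_skill_conversion = 0.02  # "under 2%" one-time purchase conversion
per_skill_price = 15         # assumed one-time price (USD)

subs_conversion = 0.06       # midpoint of the cited 5-8% range
subs_price = 8               # assumed monthly bundle price (USD)

one_time_revenue = audience * per_skill_conversion * per_skill_price
monthly_recurring = audience * subs_conversion * subs_price

print(f"per-skill (one-time):     ${one_time_revenue:,.0f}")
print(f"subscription (per month): ${monthly_recurring:,.0f}")
```

Under these assumptions the bundle out-earns one-time sales in its first month and then recurs, which is why the predictability argument favors subscriptions even before accounting for support burden.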
Early data suggests subscription bundles will dominate for individual creators, while enterprise licensing will dominate for professional skill publishers. The skill monetization stack article provides a deeper analysis of these models.
Challenges
Discovery
With 25,000+ skills, finding the right one is increasingly difficult. Search and categorization help, but the fundamental problem is that skill quality varies enormously. A search for "code review" returns dozens of results, and the user has no reliable way to distinguish the excellent ones from the mediocre ones.
Quality signals (install counts, ratings, verification badges) help but are imperfect. Install counts favor first movers. Ratings are sparse (most skills have fewer than 10 ratings). Verification badges confirm identity but not quality.
The marketplace that solves discovery will win the ecosystem. Right now, word-of-mouth and community recommendations remain the most reliable discovery channels. This is a structural opportunity for platforms that invest in better curation and recommendation.
Fragmentation
Three major platforms with incompatible skill formats create fragmentation. A skill built for OpenClaw doesn't work in Claude Code. A Cursor extension doesn't work in either. Developers building for maximum reach must publish multiple versions of the same skill.
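In practice, multi-platform creators maintain one source of truth and generate per-platform manifests from it. The sketch below illustrates the pattern; every field name and format here is hypothetical, since none of these platforms publish a shared specification.

```python
# Hypothetical single-source manifest with per-platform emitters. All field
# names and formats are invented for illustration; no real platform schema
# is assumed.
source = {
    "name": "sql-linter",
    "description": "Lints SQL in migrations",
    "entry": "skill.md",
}

def to_platform(manifest: dict, platform: str) -> dict:
    """Translate one canonical manifest into a platform-specific shape."""
    if platform == "openclaw":
        return {"skill_name": manifest["name"], "about": manifest["description"]}
    if platform == "claude-code":
        return {"name": manifest["name"], "description": manifest["description"]}
    if platform == "cursor":
        return {"id": manifest["name"], "summary": manifest["description"]}
    raise ValueError(f"unknown platform: {platform}")

for platform in ("openclaw", "claude-code", "cursor"):
    print(platform, to_platform(source, platform))
```

The translation layer is trivial here, but real skills diverge in capabilities and packaging, so each emitted version still needs its own testing, which is the maintenance cost fragmentation imposes.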
Cross-platform skill standards have been proposed but not adopted. The platforms have incentives to maintain incompatibility (it locks users into their ecosystem), and no external body has sufficient authority to impose a standard.
The likely resolution is platform consolidation or de facto standardization through market share. If one platform captures 60%+ of the market, its format becomes the standard regardless of whether it's officially standardized.
Quality Control
Public registries accept almost any submission. This openness enables rapid growth but also enables low-quality, malicious, and misleading skills. The ecosystem has already seen:
- Skills that claim capabilities they don't deliver
- Skills that collect usage data without disclosure
- Skills that contain instructions harmful to the user's codebase
- Duplicate skills that repackage others' work without attribution
Registry operators are responding with review processes, automated quality checks, and community reporting. But the challenge scales with the registry: reviewing 100 new skills per day requires significant investment.
What to Watch in H2 2026
Platform consolidation. Will the three-platform landscape stabilize or will one platform pull away? Watch install growth rates, not skill counts.
Standard emergence. Will a cross-platform skill format gain traction? Watch for announcements from the major platforms about interoperability.
Enterprise acceleration. Will the enterprise adoption rate keep doubling? Watch for enterprise-focused features from platform providers (SSO, audit logging, compliance tools).
Creator economics. Will skill creation become a viable income source for independent developers? Watch for revenue reports from top creators and pricing experiments from platforms.
AI-built skills. Will AI agents start creating skills for other AI agents? The meta-skill (a skill that creates skills) is technically possible and would accelerate ecosystem growth dramatically.
The second half of 2026 will determine whether AI skills are a feature of AI coding tools or a foundational layer of software development. The trajectory suggests the latter.
FAQ
Is the AI skills market saturated?
No. While 25,000 skills sounds like a lot, most categories have significant gaps. Niche domains (specific frameworks, specific industries, specific workflows) remain underserved. The saturation is in broad categories like "general code review." The opportunity is in specialized skills.
Which platform should I build skills for?
Build for the platform where your target users work. If unsure, OpenClaw has the largest user base. Claude Code has the fastest growth. Cursor has the strongest enterprise presence. Building for multiple platforms is feasible but requires maintaining separate versions.
Can I make a living creating AI skills?
A handful of creators earn significant income from skills. Most earn little or nothing. The path to income is through enterprise licensing or premium skill stacks, not individual free skills. Building a following first (see Growing Your AI Developer Following) increases monetization potential.
How do I keep my skills relevant as platforms evolve?
Track platform changelogs, test on betas, and update promptly when breaking changes occur. Skills that stop working after a platform update lose users permanently. The building with bleeding-edge toolchains article covers strategies for staying current.
Sources
- OpenClaw Registry Statistics
- Claude Code Documentation
- State of Developer Ecosystem 2026 - JetBrains
- AI Developer Tools Market Report - Gartner
Explore production-ready AI skills at aiskill.market/browse or submit your own skill to the marketplace.