Cross-Platform Skills That Work
Building AI skills that work across Claude Code, Codex, and Gemini. Practical patterns for portable skill design that isn't locked to a single platform.
The AI coding assistant landscape in 2026 has three major platforms with meaningful market share: Claude Code, OpenAI Codex, and Google Gemini Code Assist. Each has its own skill format, its own capabilities, and its own community. Developers who build skills for only one platform limit their audience to a fraction of the market.
Cross-platform skills -- skills designed to work across multiple AI platforms with minimal adaptation -- are emerging as the practical approach for skill builders who want reach. The patterns for building portable skills aren't complicated, but they require deliberate design decisions from the start.
Key Takeaways
- Skills locked to one platform reach 30-40% of the market at most -- cross-platform skills reach the full developer audience
- The core skill content (instructions, methodology, domain knowledge) is 80% portable across platforms with no changes needed
- Platform-specific elements (tool calls, file access patterns, context management) require adaptation layers that can be abstracted cleanly
- Starting with a platform-agnostic core and adding platform adapters is easier than retrofitting portability onto a platform-specific skill
- The skill format convergence is happening but isn't complete -- investing in portability now positions skill builders for the unified format when it arrives
The Platform Landscape
Claude Code Skills
Claude Code uses markdown-based skill files (`SKILL.md`) with YAML frontmatter. Skills are stored in `.claude/skills/` directories and loaded automatically based on trigger conditions in the skill description. Key features:
- Natural language instructions in markdown
- Tool access through MCP (Model Context Protocol) servers
- Context management through `CLAUDE.md` files
- Strong emphasis on file system access and terminal operations
For deep coverage of Claude Code skill design, see our complete guide to Claude Code skills.
OpenAI Codex Skills
Codex uses a JSON-based configuration format with natural language instructions embedded in prompt fields. Skills integrate through the API's tool calling interface:
- JSON configuration with prompt templates
- Tool calling through function definitions
- Plugin system for external integrations
- Focus on code generation and analysis
Gemini Code Assist
Gemini uses YAML-based configuration with instruction blocks. Skills integrate through Google's AI Studio and Vertex AI platforms:
- YAML configuration with structured instructions
- Tool integration through Google Cloud functions
- Workspace integration for enterprise environments
- Strong multimodal capabilities (code + documentation + diagrams)
The Portable Core
Despite format differences, 80% of a skill's value comes from content that's inherently platform-agnostic:
Domain Knowledge
The expertise encoded in a skill -- debugging methodologies, code review criteria, testing strategies, architecture patterns -- is universal. A debugging skill that teaches systematic root cause analysis works regardless of which AI platform executes it.
This knowledge is expressed in natural language, which every platform understands. The methodology for debugging complex bugs doesn't change because the AI platform changed.
Workflow Descriptions
Step-by-step procedures -- "first analyze the error, then identify affected files, then propose a fix, then verify the fix" -- work on every platform. The workflow is platform-agnostic; only the mechanics of executing each step differ.
Quality Criteria
Standards for what constitutes good output -- "code must include error handling," "tests must cover edge cases," "documentation must include examples" -- apply regardless of platform.
Decision Frameworks
When to use one approach over another, how to evaluate tradeoffs, which patterns to prefer in which contexts -- these judgment frameworks are the most valuable part of any skill and are completely portable.
The Platform-Specific Layer
The remaining 20% of a skill involves platform-specific mechanics:
File Access Patterns
Each platform has different conventions for reading and writing files:
```text
# Platform-agnostic instruction
Read the project's configuration file to determine the framework version.

# Claude Code specific
Read the package.json file using the Read tool.

# Codex specific
Use the file_read function to access package.json.

# Gemini specific
Access the package.json file from the workspace context.
```
Tool Integration
Each platform exposes tools differently:
```text
# Platform-agnostic instruction
Run the test suite and analyze the results.

# Claude Code specific
Use the Bash tool to execute `npm test` and read the output.

# Codex specific
Call the execute_command function with command "npm test".

# Gemini specific
Use the terminal tool to run the test command.
```
Context Management
How skills access and manage context varies:
```text
# Platform-agnostic instruction
Remember the project's architecture for future interactions.

# Claude Code specific
This information is persisted in the CLAUDE.md context file.

# Codex specific
Store in the conversation context for reference.

# Gemini specific
Add to the workspace knowledge base.
```
The Adapter Pattern
The cleanest architecture for cross-platform skills separates the portable core from platform-specific adapters:
```text
skill/
├── core/
│   ├── methodology.md       # Platform-agnostic expertise
│   ├── criteria.md          # Quality standards
│   └── workflows.md         # Step-by-step procedures
├── adapters/
│   ├── claude-code/
│   │   └── SKILL.md         # Claude Code format + tool references
│   ├── codex/
│   │   └── skill.json       # Codex format + function definitions
│   └── gemini/
│       └── skill.yaml       # Gemini format + workspace integration
└── README.md                # Usage instructions for all platforms
```
The core directory contains the universal knowledge. Each adapter directory wraps that knowledge in the platform-specific format, adding tool references, context management instructions, and format-specific metadata.
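The adapter pattern lends itself to a simple build step. As a minimal sketch (the file names match the layout above, but the `build_*` helper names and the exact frontmatter/JSON shapes are illustrative assumptions, not any platform's official API):

```python
import json
from pathlib import Path

# Core files are concatenated in a fixed order so every adapter
# embeds the same platform-agnostic content.
CORE_FILES = ("methodology.md", "criteria.md", "workflows.md")


def load_core(core_dir: Path) -> str:
    """Concatenate whichever core files exist, in a fixed order."""
    parts = [(core_dir / name).read_text()
             for name in CORE_FILES if (core_dir / name).exists()]
    return "\n\n".join(parts)


def build_claude_adapter(core: str, description: str) -> str:
    """Wrap the core in SKILL.md-style YAML frontmatter."""
    return f'---\ndescription: "{description}"\n---\n\n{core}\n'


def build_codex_adapter(core: str, name: str, description: str) -> str:
    """Embed the core as the instructions field of a JSON configuration."""
    return json.dumps({
        "name": name,
        "description": description,
        "instructions": core,
        "tools": [],
    }, indent=2)
```

A release script can call these builders and write the results into each `adapters/` subdirectory, so the core files remain the single source of truth.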
Writing the Core
Write the core content in plain markdown without any platform-specific references:
```markdown
# Code Review Methodology

## Review Procedure

1. **Understand the change.** Read the diff to understand what's being modified and why.
2. **Check correctness.** Verify the logic handles all cases including:
   - Normal input
   - Empty/null input
   - Boundary values
   - Error conditions
3. **Evaluate design.** Assess whether the approach is:
   - Consistent with existing patterns
   - Appropriately abstracted
   - Maintainable long-term
4. **Verify testing.** Confirm tests cover:
   - Happy path
   - Error paths
   - Edge cases identified in step 2
5. **Report findings.** Provide specific, actionable feedback with:
   - The issue location (file and line)
   - Why it's a problem
   - A suggested fix
```
This methodology works identically on Claude Code, Codex, or Gemini. The AI reads the instructions and follows the procedure regardless of platform.
Writing Adapters
Each adapter wraps the core content in the platform's format:
Claude Code adapter (`SKILL.md`):

```markdown
---
description: "Code review skill. Use when reviewing pull requests or code changes."
---

[Include core/methodology.md content here]

## Platform Integration

When executing this review:

- Use the Bash tool to run `git diff` for the changes
- Use the Read tool to read full file context around changes
- Use the Grep tool to find related code patterns
- Present findings using markdown formatting
```
Codex adapter (`skill.json`):

```json
{
  "name": "code-review",
  "description": "Code review skill for pull requests",
  "instructions": "[Include core/methodology.md content]",
  "tools": [
    { "type": "function", "function": { "name": "file_read" } },
    { "type": "function", "function": { "name": "execute_command" } }
  ]
}
```
Testing Cross-Platform Skills
Behavioral Testing
Test that the skill produces equivalent results across platforms for the same input. Give each platform the same code to review, the same bug to debug, or the same task to complete, and compare the outputs.
The outputs won't be identical -- different platforms have different strengths -- but the methodology should be consistent. If the Claude Code version catches a bug that the Codex version misses, the core methodology might need strengthening, or the Codex adapter might be missing relevant context.
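One lightweight way to run this comparison is to reduce each platform's review output to a set of finding identifiers and diff them against the union. A minimal sketch, assuming you have already normalized each platform's output into such a set (the `compare_findings` name and the identifier scheme are illustrative):

```python
def compare_findings(outputs: dict[str, set[str]]) -> dict[str, set[str]]:
    """For each platform, return the findings it missed relative to the
    union of findings produced by all platforms on the same input."""
    all_findings = set().union(*outputs.values())
    return {platform: all_findings - found
            for platform, found in outputs.items()}


# Example: the Codex run missed the test-coverage finding.
outputs = {
    "claude-code": {"null-check", "missing-test"},
    "codex": {"null-check"},
}
gaps = compare_findings(outputs)
```

A non-empty gap set for one platform points at either a weakness in the core methodology or missing context in that platform's adapter.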
Format Validation
Each platform has format requirements. Validate that adapters meet their platform's specifications:
- Claude Code: YAML frontmatter is valid, description field is present
- Codex: JSON schema validates, required fields are populated
- Gemini: YAML structure matches expected format
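These checks are simple enough to script. A minimal sketch of the first two validations (string-level frontmatter checks rather than a full YAML parser, and the required-field list is an assumption based on the adapter examples in this article):

```python
import json


def validate_claude_skill(text: str) -> list[str]:
    """Check SKILL.md basics: a frontmatter block and a description field."""
    errors = []
    if not text.startswith("---\n") or "\n---\n" not in text[4:]:
        errors.append("missing YAML frontmatter block")
    elif "description:" not in text.split("\n---\n", 1)[0]:
        errors.append("frontmatter lacks a description field")
    return errors


def validate_codex_skill(text: str) -> list[str]:
    """Check the JSON parses and the required fields are populated."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    return [f"missing required field: {field}"
            for field in ("name", "description", "instructions")
            if field not in data]
```

Running these validators in CI catches format regressions before a broken adapter reaches a registry.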
Integration Testing
Test that tool references in each adapter actually work. A Claude Code adapter that references the "Bash tool" is correct. One that references "execute_command" (a Codex concept) will fail at runtime.
The Convergence Question
The AI skill format landscape is converging. The OpenClaw skill format, Claude Code's SKILL.md format, and emerging standards from other platforms share more similarities than differences. Industry observers expect a unified or at least highly interoperable format within 18 months.
For skill builders, this convergence means:
Investing in the portable core is always worthwhile. The domain knowledge and methodology in your skills will survive any format change.
Adapter complexity will decrease. As formats converge, the platform-specific adaptation required will shrink. Skills built with the adapter pattern will benefit most because the adapters become thinner while the core stays the same.
Cross-platform skills will become the norm. When format differences are minimal, there's no reason to build platform-specific skills. The investment in portability today positions you ahead of this transition.
For the latest on format convergence, see our analysis of the unified skill layer.
Distribution Across Platforms
Publishing cross-platform skills requires submitting to multiple registries:
- Claude Code: Publish to aiskill.market or distribute through GitHub
- OpenClaw/ClawHub: Publish to the ClawHub registry
- Community registries: Submit to platform-specific marketplaces
Each registry has its own submission process, quality requirements, and metadata format. Automate this process by generating platform-specific packages from your skill's core content and adapter configurations.
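As a sketch of that automation, assuming the `skill/` layout from the adapter pattern section (the `package_adapters` name and the one-archive-per-adapter convention are illustrative, not any registry's required format):

```python
import shutil
from pathlib import Path


def package_adapters(skill_dir: Path, dist_dir: Path) -> list[Path]:
    """Produce one zip archive per adapter, each bundling that platform's
    files plus the shared README, ready for registry submission."""
    dist_dir.mkdir(parents=True, exist_ok=True)
    archives = []
    for adapter in sorted((skill_dir / "adapters").iterdir()):
        if adapter.is_dir():
            stage = dist_dir / adapter.name
            shutil.copytree(adapter, stage, dirs_exist_ok=True)
            shutil.copy(skill_dir / "README.md", stage / "README.md")
            archives.append(Path(shutil.make_archive(str(stage), "zip", stage)))
    return archives
```

Each archive then goes through the relevant registry's own submission flow; the script only standardizes what gets uploaded.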
FAQ
Is it worth building cross-platform skills if I only use one platform?
Yes. Cross-platform design forces cleaner separation between methodology and mechanics, which improves the skill even on a single platform. It also future-proofs your investment if you switch platforms later.
Which platform should I develop for first?
Develop the platform-agnostic core first. Then build the adapter for whichever platform you use daily. Add other adapters when there's demand.
How different are the platforms in practice?
For text-based skills (methodology, criteria, procedures), the differences are minimal -- mostly format wrapper changes. For tool-heavy skills (file manipulation, terminal operations, API integration), the differences are more significant and require platform-specific code.
Can I use AI to generate adapters?
Yes. Given a well-structured core and one existing adapter as a reference, AI can generate adapters for other platforms with high accuracy. This is a natural application of the skill's own cross-platform knowledge.
How do I handle platform-specific features?
Design skills that degrade gracefully when a platform feature isn't available. If your skill uses Claude Code's Bash tool for terminal access but Gemini doesn't have an equivalent, the skill should include a fallback that works without terminal access.
Sources
- Model Context Protocol Specification - Protocol enabling cross-platform tool integration
- OpenAI Platform Documentation - Codex skill and tool integration reference
- Google AI Studio Documentation - Gemini skill development resources
Explore production-ready AI skills at aiskill.market/browse or submit your own skill to the marketplace.