AI Video Generation
Create AI videos with Sora 2, Veo 3, Seedance, Runway, and modern APIs using reliable prompt and rendering workflows.
On first use, read setup.md.
User needs to generate, edit, or scale AI videos with current models and APIs. Use this skill to choose the right current model stack, write stronger motion prompts, and run reliable async video pipelines.
User preferences persist in `~/video-generation/`. See memory-template.md for setup.

~/video-generation/
├── memory.md    # Preferred providers, model routing, reusable shot recipes
└── history.md   # Optional run log for jobs, costs, and outputs
| Topic | File |
|---|---|
| Initial setup | |
| Memory template | |
| Migration guide | |
| Model snapshot | |
| Async API patterns | |
| OpenAI Sora 2 | |
| Google Veo 3.x | |
| Runway Gen-4 | |
| Luma Ray | |
| ByteDance Seedance | |
| Kling | |
| Vidu | |
| Pika via Fal | |
| MiniMax Hailuo | |
| Replicate routing | |
| Open-source local models | |
| Distribution playbook | |
Map community names to real API model IDs first. Examples: `sora-2`, `sora-2-pro`, `veo-3.0-generate-001`, `gen4_turbo`, `gen4_aleph`.
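The mapping step can be sketched as a simple lookup table. The nickname keys below are illustrative assumptions; only the API model IDs come from the examples above.

```python
# Hypothetical lookup table: community nicknames (assumed) on the left,
# real API model IDs from the examples above on the right.
MODEL_IDS = {
    "sora": "sora-2",
    "sora pro": "sora-2-pro",
    "veo 3": "veo-3.0-generate-001",
    "runway turbo": "gen4_turbo",
    "runway aleph": "gen4_aleph",
}

def resolve_model(name: str) -> str:
    """Map a community name to a concrete API model ID, or raise."""
    key = name.strip().lower()
    if key not in MODEL_IDS:
        raise ValueError(f"Unknown model nickname: {name!r}")
    return MODEL_IDS[key]
```

Resolving IDs up front keeps the rest of the pipeline provider-agnostic: downstream code only ever sees concrete model IDs.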
| Task | First choice | Backup |
|---|---|---|
| Premium prompt-only generation | | |
| Fast drafts at lower cost | | |
| Long-form cinematic shots | | |
| Strong image-to-video control | | |
| Multi-shot narrative consistency | Seedance family | |
| Local privacy-first workflows | Wan2.2 / HunyuanVideo | CogVideoX |
Start with short durations and a lower-tier model, validate motion and composition, then rerender the winning shots with premium models or longer durations.
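The draft-then-upgrade workflow can be expressed as a two-tier render plan. The specific model IDs and durations below are illustrative assumptions, not recommendations from any provider.

```python
# Hypothetical two-pass render plan: cheap short drafts for every shot,
# premium settings only for shots already approved from a draft pass.
DRAFT = {"model": "gen4_turbo", "duration_s": 4}   # assumed draft tier
FINAL = {"model": "sora-2-pro", "duration_s": 10}  # assumed premium tier

def render_plan(shots, approved):
    """Return (shot, settings) pairs: drafts for new shots, finals for winners."""
    return [(shot, FINAL if shot in approved else DRAFT) for shot in shots]
```

This keeps spend proportional to the number of approved shots rather than the number of attempts.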
Always include subject, action, camera motion, lens style, lighting, and scene timing. For references and start/end frames, keep continuity constraints explicit.
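The six required fields can be enforced with a small prompt builder. The field ordering and sentence template are assumptions; providers do not mandate this structure.

```python
# Minimal prompt builder covering the six fields listed above.
# The template wording is an assumption, not a provider requirement.
def build_prompt(subject, action, camera, lens, lighting, timing):
    return (
        f"{subject} {action}. "
        f"Camera: {camera}. Lens: {lens}. "
        f"Lighting: {lighting}. Timing: {timing}."
    )

prompt = build_prompt(
    subject="A lighthouse keeper",
    action="climbs a spiral staircase",
    camera="slow upward dolly",
    lens="35mm, shallow depth of field",
    lighting="warm lantern glow against blue dusk",
    timing="single continuous 8-second shot",
)
```

Making each field a required argument means a missing element fails loudly at build time instead of producing a vague prompt.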
Every provider pipeline must support queued jobs, polling/backoff, retries, cancellation, and signed-URL download before expiry.
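The polling/backoff requirement can be sketched generically. `get_status` here is a stand-in for any provider's job-status call; real endpoints, status names, and limits differ per provider.

```python
import random
import time

# Generic polling loop with exponential backoff and jitter.
# Status names and defaults are assumptions for illustration.
def poll_job(get_status, timeout_s=600, base=2.0, cap=30.0):
    """Poll until the job reaches a terminal state or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    attempt = 0
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("succeeded", "failed", "cancelled"):
            return status
        # Exponential backoff, capped, with jitter to avoid thundering herds.
        delay = min(cap, base * (2 ** attempt)) * (0.5 + random.random() / 2)
        time.sleep(delay)
        attempt += 1
    raise TimeoutError("video job did not finish before the deadline")
```

Downloading via signed URL should happen immediately after a `succeeded` status, since signed URLs expire.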
If the preferred model is blocked or overloaded, fall back to the backup model listed for that task in the routing table above.
| Provider | Endpoint | Data Sent | Purpose |
|---|---|---|---|
| OpenAI | | Prompt text, optional input images/video refs | Sora 2 video generation |
| Google Vertex AI | | Prompt text, optional image input, generation params | Veo 3.x generation |
| Runway | | Prompt text, optional input media | Gen-4 generation and image-to-video |
| Luma | | Prompt text, optional keyframes/start-end images | Ray generation |
| Fal | | Prompt text, optional input media | Pika and Hailuo hosted APIs |
| Replicate | | Prompt text, optional input media | Multi-model routing and experimentation |
| Vidu | | Prompt text, optional start/end/reference images | Vidu text/image/reference video APIs |
| Tencent MPS | | Prompt text and generation parameters | Unified AIGC video task APIs |
No other data is sent externally.
Data that leaves your machine: prompt text and any optional input media or references, as listed in the provider table above.
Data that stays local: `~/video-generation/memory.md` and `~/video-generation/history.md`.
This skill does NOT:
This skill can send prompts and media references to third-party AI providers. Only install if you trust those providers with your content.
Install with `clawhub install <slug>` if user confirms:

- image-generation - Build still concepts and keyframes before video generation
- image-edit - Prepare clean references, masks, and style frames
- video-edit - Post-process generated clips and final exports
- video-captions - Add subtitle and text overlay workflows
- ffmpeg - Compose, transcode, and package production outputs

`clawhub star video-generation`
`clawhub sync`
© 2026 Torly.ai. All rights reserved.