Create AI videos on your own hardware
AI video generation is the frontier of generative AI. Models like Wan 2.1 and Mochi can create stunning videos locally, though they require powerful GPUs.
Different models excel at different tasks.
Video models:
• Wan 2.1 - Widely regarded as the strongest open-source option for overall quality
• Mochi - Strong motion coherence
• CogVideoX - Research-oriented model from THUDM/Zhipu
• AnimateDiff - Motion modules built on Stable Diffusion

ComfyUI is the most widely used interface for local video generation.
# Install ComfyUI first, then add video nodes:
git clone https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite \
custom_nodes/ComfyUI-VideoHelperSuite
# Install Wan nodes for Wan 2.1 support, e.g. the community wrapper:
git clone https://github.com/kijai/ComfyUI-WanVideoWrapper \
custom_nodes/ComfyUI-WanVideoWrapper

Video models are large, so plan storage accordingly.
Model sizes:
• Wan 2.1 14B: ~30GB
• Mochi: ~10GB
• AnimateDiff motion modules: ~2GB each
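Before pulling a 30 GB checkpoint, it is worth confirming you actually have the room. A minimal sketch using the sizes above (the model names and 10% safety margin are my own choices, not ComfyUI conventions):

```python
import shutil

# Approximate download sizes in GB, from the list above (illustrative).
MODEL_SIZES_GB = {"wan2.1-14b": 30, "mochi": 10, "animatediff-module": 2}

def has_space_for(model: str, target_dir: str = ".") -> bool:
    """True if target_dir's filesystem has room for the model
    plus a 10% safety margin."""
    free_gb = shutil.disk_usage(target_dir).free / 1e9
    return free_gb >= MODEL_SIZES_GB[model] * 1.1

print(has_space_for("mochi"))
```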
Place files in the appropriate ComfyUI folders.

Load a video workflow and set your parameters.
Typical settings:
• Resolution: 512x512 or 768x768
• Frames: 16-64
• FPS: 8-24
• Steps: 30-50

💡 Start with short clips (16 frames) to test, then increase.
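Frames and FPS together determine clip length, which is worth checking before committing to a long render. A tiny helper (the function name is mine, not part of any ComfyUI API):

```python
def clip_seconds(frames: int, fps: int) -> float:
    """Duration of a generated clip in seconds."""
    return frames / fps

# A 16-frame test clip at 8 fps plays for 2 seconds;
# even 64 frames at 24 fps is under 3 seconds.
print(clip_seconds(16, 8))   # 2.0
print(clip_seconds(64, 24))
```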
❓ Out of VRAM
✅ Most video models want 16GB+ of VRAM. Use a lower resolution, fewer frames, or CPU offloading (e.g. launching ComfyUI with --lowvram).
❓ Video is choppy
✅ Use interpolation to increase FPS. RIFE is a good open-source option.
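Interpolation works by synthesizing in-between frames (RIFE predicts them from optical flow). The frame-count math itself is simple; this sketch assumes the common convention where each 2x pass inserts one frame between every adjacent pair (some tools instead emit exactly 2n frames):

```python
def frames_after_doubling(frames: int, passes: int) -> int:
    """Each 2x interpolation pass inserts one synthesized frame
    between every adjacent pair: n frames -> 2n - 1 frames."""
    for _ in range(passes):
        frames = 2 * frames - 1
    return frames

# Two 2x passes take a choppy 16-frame, 8 fps clip toward ~32 fps playback.
print(frames_after_doubling(16, 2))  # 61
```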