Official codebase for "Causal Forcing: Autoregressive Diffusion Distillation Done Right for High-Quality Real-Time Interactive Video Generation"
AI-powered prompt generator for video (Wan2.1/2.2, Hunyuan), image (SD, FLUX, Midjourney, DALL-E), and creative content. Local LLMs with GPU auto-detection.
Mixed-precision quantization scheme (mixed 16/8/4-bit) for the Wan2.2-Animate-14B model. Compresses the original 35 GB base model to 17 GB, balancing inference performance against model size.
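The idea behind a mixed-precision scheme like this can be sketched in a few lines: sensitive layers keep 16-bit weights while most layers drop to 8 or 4 bits, roughly halving total size. The layer names and bit plan below are hypothetical illustrations, not the repository's actual configuration.

```python
import numpy as np

def quantize_symmetric(w, bits):
    """Symmetric per-tensor quantization of a float weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for 8-bit, 7 for 4-bit
    max_abs = float(np.abs(w).max())
    scale = max_abs / qmax if max_abs > 0 else 1.0  # one scale per tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from quantized values and scale."""
    return q.astype(np.float32) * scale

def estimated_size_bytes(shapes, plan):
    """Estimate storage for a set of tensors under a per-layer bit plan."""
    return sum(int(np.prod(shape)) * plan[name] // 8 for name, shape in shapes.items())

# Hypothetical plan: embeddings stay 16-bit, attention goes 8-bit, some FFN weights 4-bit.
plan = {"time_embed": 16, "attn.qkv": 8, "ffn.w1": 4}
```

The round-trip error of each tensor is bounded by its scale, which is why only the most error-sensitive layers need to stay at 16 bits.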
An unofficial, high-performance clone of the Wan 2.2 T2V architecture built in pure Python. Supports ComfyUI custom nodes, Flow Matching sampling, Causal 3D VAE, and cloud inference via Veo 3.1 API. Runs on any GPU without Docker or root access.
Identity-preserving image-to-video generation: vision-grounded prompt simplification via Qwen3-VL, Lightning LoRA 4-step inference, and SAM3-masked DINOv3 candidate reranking for fluid 720p video from a single reference image.
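The reranking step in a pipeline like the one above can be sketched generically: score each candidate by cosine similarity between its identity embedding and the reference image's embedding, then keep the best-matching ones. The function below is a minimal sketch of that idea and is not tied to SAM3 or DINOv3 specifically.

```python
import numpy as np

def rerank_by_identity(ref_emb, cand_embs):
    """Rank candidates by cosine similarity of their identity embeddings
    to the reference embedding; returns indices, highest similarity first."""
    ref = ref_emb / np.linalg.norm(ref_emb)
    cands = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
    sims = cands @ ref               # cosine similarity per candidate
    order = np.argsort(-sims)        # descending similarity
    return order, sims[order]
```

In a masked variant, the embeddings would be computed only over the subject region (e.g. a segmentation mask) so background changes do not affect the identity score.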