Wan 2.1 Fun | Controlled Video Generation
Wan 2.1 Fun is a flexible AI video generation workflow built around the Wan 2.1 model family. It enables controlled video creation by extracting Depth, Canny, or OpenPose passes from an input video and applying them to Wan 2.1 Fun Control models. The workflow supports multi-resolution video prediction, trajectory control, and multilingual outputs. It also includes Wan 2.1 Fun InP models for prompt-based generation with start and end frame prediction. With 1.3B and 14B parameter variants, Wan 2.1 Fun provides scalable, high-quality outputs for both creative exploration and precision-guided animation.
ComfyUI Wan 2.1 Fun Workflow

- Fully operational workflows
- No missing nodes or models
- No manual setups required
- Features stunning visuals
ComfyUI Wan 2.1 Fun Examples
ComfyUI Wan 2.1 Fun Description
Wan 2.1 Fun introduces an intuitive and powerful method for controlled AI video generation using Wan 2.1 Fun models. By extracting Depth, Canny, or OpenPose passes from an input video, Wan 2.1 Fun ComfyUI workflow allows users to influence the structure, motion, and style of the resulting output with precision. Instead of generating videos blindly from prompts, Wan 2.1 Fun brings structured visual data into the process—preserving motion accuracy, enhancing stylization, and enabling more deliberate transformations.
Whether you're building dynamic animations, pose-driven performances, or experimenting with visual abstraction, Wan 2.1 Fun puts artistic control directly into your hands while leveraging the expressive power of Wan 2.1 Fun models.
Why Use Wan 2.1 Fun?
The Wan 2.1 Fun workflow offers a flexible and intuitive way to guide AI video generation using structured visual references:
- Use Depth, Canny, or OpenPose for precise video control
- Achieve clearer structure, form, and motion in your outputs
- No need for complex prompt engineering or training
- Lightweight and fast processing with visually rich results
- Great for action scenes, stylized choreography, or structured motion art
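To make the idea of a structural control pass concrete, here is a minimal NumPy sketch of a gradient-based edge map, the principle behind a Canny-style pass. This is only an illustration; the actual workflow uses ComfyUI's dedicated preprocessor nodes, and the function name and threshold here are assumptions, not part of any node.

```python
import numpy as np

def edge_pass(frame: np.ndarray, threshold: float = 0.25) -> np.ndarray:
    """Crude gradient-magnitude edge map for one grayscale frame.

    Illustrates the idea behind a Canny-style control pass; the real
    workflow uses a dedicated preprocessor node instead.
    """
    f = frame.astype(np.float32) / 255.0
    # Horizontal and vertical intensity gradients (simple finite differences).
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, 1:-1] = f[:, 2:] - f[:, :-2]
    gy[1:-1, :] = f[2:, :] - f[:-2, :]
    magnitude = np.hypot(gx, gy)
    # Binarize: strong gradients become white edge pixels.
    return (magnitude > threshold).astype(np.uint8) * 255

# A frame with a sharp vertical boundary yields edges along that boundary.
frame = np.zeros((8, 8), dtype=np.uint8)
frame[:, 4:] = 255
edges = edge_pass(frame)
```

An edge map like this discards color and texture, keeping only structure, which is exactly what lets the generator restyle a video while preserving its motion and form.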
How to Use Wan 2.1 Fun for Controlled Video Generation
Wan 2.1 Fun Overview
- Load WanFun Model (purple): Model Loader
- Enter Prompts (green): Positive and Negative Prompts
- Upload Your Video and Resize (cyan blue): User Inputs – Reference Video and Resizing
- Choose Control Video Preprocessor (orange): Switch Node to select between Depth, Canny, or OpenPose
- Wan Fun Sampler + Save Video (pink): Video Sampler
Quick Start Steps:
- Select your Wan 2.1 Fun model
- Enter positive and negative prompts to guide generation
- Upload your input video
- Run the workflow by clicking the Queue Prompt button
- Check the last node for the final output (also saved to the Outputs folder)
1 - Load WanFun Model
Choose the right model variant for your task:
- Wan2.1-Fun-Control (1.3B / 14B): for guided video generation with Depth, Canny, OpenPose, and trajectory control
- Wan2.1-Fun-InP (1.3B / 14B): for text-to-video with start and end frame prediction
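The choice between the two variants can be sketched as a simple lookup. The variant names below come from the list above; the helper function itself is hypothetical and only illustrates the decision, it is not part of ComfyUI.

```python
def pick_wan_fun_variant(task: str, large: bool = False) -> str:
    """Hypothetical helper: map a task to a Wan 2.1 Fun variant name.

    'control' -> guided generation with Depth/Canny/OpenPose/trajectory;
    'inp'     -> text-to-video with start/end frame prediction.
    """
    size = "14B" if large else "1.3B"
    variants = {
        "control": f"Wan2.1-Fun-Control-{size}",
        "inp": f"Wan2.1-Fun-InP-{size}",
    }
    if task not in variants:
        raise ValueError(f"unknown task: {task!r}")
    return variants[task]
```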
Memory Tips:
- Use model_cpu_offload for faster generation with the 1.3B model
- Use sequential_cpu_offload to reduce GPU memory usage with the 14B model
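The memory tips above amount to a size-based rule of thumb: model-level offload moves whole sub-models between CPU and GPU (faster, more VRAM), while sequential offload streams layers through the GPU one at a time (slower, minimal VRAM). A hedged sketch of that rule, with a hypothetical function name and an assumed 2B cutoff:

```python
def choose_offload_mode(param_count_billion: float) -> str:
    """Illustrative rule of thumb, not a ComfyUI API.

    The 2B cutoff is an assumption chosen so that the 1.3B variant gets
    the faster mode and the 14B variant gets the memory-saving one.
    """
    if param_count_billion <= 2.0:
        # Whole sub-models swap between CPU and GPU: faster, needs more VRAM.
        return "model_cpu_offload"
    # Layers stream through the GPU one at a time: slower, minimal VRAM.
    return "sequential_cpu_offload"
```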
2 - Enter Prompts
- Positive Prompt:
  - drives the motion, detail, and depth of your restyled video
  - descriptive, artistic language can enhance the final output
- Negative Prompt:
  - longer negative prompts such as "Blurring, mutation, deformation, distortion, dark and solid, comics." can increase stability
  - adding words such as "quiet, solid" to the negative prompt can increase dynamism
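Prompts like these are usually assembled as comma-separated term lists. A small sketch, where the positive terms are invented examples and only the negative terms are taken from the guidance above:

```python
# The positive terms below are illustrative placeholders, not from the workflow.
positive_terms = ["a dancer in flowing fabric", "cinematic lighting", "rich depth"]
# Negative terms from the stability/dynamism guidance above.
negative_terms = ["Blurring", "mutation", "deformation", "distortion",
                  "dark and solid", "comics", "quiet", "solid"]

positive_prompt = ", ".join(positive_terms)
negative_prompt = ", ".join(negative_terms) + "."
```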
3 - Upload Your Video and Resize
Upload your source video to begin the generation. Make sure to resize it appropriately using a resolution that’s compatible with Wan 2.1 Fun.
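Video models generally expect frame dimensions that are multiples of a fixed stride; a common convention is 16, though the exact value for Wan 2.1 Fun should be taken from the workflow's resize node. A hedged sketch of snapping a source resolution down to compatible dimensions:

```python
def snap_resolution(width: int, height: int, stride: int = 16) -> tuple[int, int]:
    """Round each dimension down to the nearest multiple of `stride`.

    The stride of 16 is an assumption; check the workflow's resize node
    for the value Wan 2.1 Fun actually requires.
    """
    return (width // stride) * stride, (height // stride) * stride

# A 1080p source snaps to 1920 x 1072 with a stride of 16.
w, h = snap_resolution(1920, 1080)
```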
4 - Choose Control Video Preprocessor
The preprocessor node offers three passes:
- Depth: generates a spatial depth map from the input
- Canny: highlights edges and structure using Canny edge detection
- OpenPose: detects joints and landmarks for pose-based control
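The switch node's behavior can be sketched as a dispatch table: one selector routes the input frames through exactly one preprocessor. The lambdas below merely tag frames for illustration; they stand in for the real Depth, Canny, and OpenPose nodes.

```python
# Stand-ins for the real preprocessor nodes: each tags frames with the
# kind of control pass it would produce.
PREPROCESSORS = {
    "depth":    lambda frames: [("depth_map", f) for f in frames],
    "canny":    lambda frames: [("edge_map", f) for f in frames],
    "openpose": lambda frames: [("pose_skeleton", f) for f in frames],
}

def run_control_pass(mode: str, frames: list) -> list:
    """Route frames through exactly one preprocessor, like the switch node."""
    if mode not in PREPROCESSORS:
        raise ValueError(f"unknown preprocessor: {mode!r}")
    return PREPROCESSORS[mode](frames)
```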
5 - Wan Fun Sampler + Save Video
The Wan 2.1 Fun Sampler comes preconfigured with recommended settings, though you can experiment with different configurations if desired.
Once rendering is complete, the stylized video is saved automatically.
Acknowledgement
The Wan 2.1 Fun workflow was developed by its creators, whose contributions to AI-based video generation have made advanced control techniques more accessible and efficient. This workflow leverages the capabilities of the Wan 2.1 Fun model alongside Depth, Canny, and OpenPose passes to provide dynamic and creative control over AI-generated videos. We appreciate their innovative work and thank them for sharing it with the community.