
Wan 2.1 Fun | Controlled Video Generation

Wan 2.1 Fun is a flexible AI video generation workflow built around the Wan 2.1 model family. It enables controlled video creation by extracting Depth, Canny, or OpenPose passes from an input video and applying them to Wan 2.1 Fun Control models. The workflow supports multi-resolution video prediction, trajectory control, and multilingual outputs. It also includes Wan 2.1 Fun InP models for prompt-based generation with start and end frame prediction. With 1.3B and 14B parameter variants, Wan 2.1 Fun provides scalable, high-quality outputs for both creative exploration and precision-guided animation.

ComfyUI Wan 2.1 Fun Workflow

Wan 2.1 Fun AI Video Generation with Depth, Canny, OpenPose Control


ComfyUI Wan 2.1 Fun Description


Wan 2.1 Fun introduces an intuitive and powerful method for controlled AI video generation using Wan 2.1 Fun models. By extracting Depth, Canny, or OpenPose passes from an input video, the Wan 2.1 Fun ComfyUI workflow lets users influence the structure, motion, and style of the resulting output with precision. Instead of generating videos blindly from prompts, Wan 2.1 Fun brings structured visual data into the process—preserving motion accuracy, enhancing stylization, and enabling more deliberate transformations.

Whether you're building dynamic animations, pose-driven performances, or experimenting with visual abstraction, Wan 2.1 Fun puts artistic control directly into your hands while leveraging the expressive power of Wan 2.1 Fun models.

Why Use Wan 2.1 Fun?

The Wan 2.1 Fun workflow offers a flexible and intuitive way to guide AI video generation using structured visual references:

  • Use Depth, Canny, or OpenPose for precise video control
  • Achieve clearer structure, form, and motion in your outputs
  • No need for complex prompt engineering or training
  • Lightweight and fast processing with visually rich results
  • Great for action scenes, stylized choreography, or structured motion art

How to Use Wan 2.1 Fun for Controlled Video Generation?


Wan 2.1 Fun Overview

  • Load WanFun Model (purple): Model Loader
  • Enter Prompts (green): Positive and Negative Prompts
  • Upload Your Video and Resize (cyan blue): User Inputs – Reference Video and Resizing
  • Choose Control Video Preprocessor (orange): Switch Node to select between Depth, Canny, or OpenPose
  • Wan Fun Sampler + Save Video (pink): Video Sampler

Quick Start Steps:

  1. Select your Wan 2.1 Fun model
  2. Enter positive and negative prompts to guide generation
  3. Upload your input video
  4. Run the workflow by clicking the Queue Prompt button (a sketch of triggering the same run via the ComfyUI API follows this list)
  5. Check the last node for the final output (also saved to the Outputs folder)
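Step 4 above uses the Queue Prompt button in the ComfyUI interface, but the same run can also be triggered over ComfyUI's HTTP API. Below is a minimal sketch, assuming a local server on port 8188 and a workflow exported via "Save (API Format)" as wan21_fun_workflow_api.json; both names are placeholders for your own setup.

```python
# Minimal sketch: queue the exported workflow against a local ComfyUI server.
# Assumes ComfyUI is listening on 127.0.0.1:8188 and the workflow was exported
# with "Save (API Format)" as wan21_fun_workflow_api.json (both are assumptions).
import json
import urllib.request

with open("wan21_fun_workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
request = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))  # returns a prompt_id you can poll for progress
```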

1 - Load WanFun Model


Choose the right model variant for your task:

  • Wan2.1-Fun-Control (1.3B / 14B): For guided video generation with Depth, Canny, OpenPose, and trajectory control
  • Wan2.1-Fun-InP (1.3B / 14B): For text-to-video with start and end frame prediction

Memory Tips:

  • Use model_cpu_offload for faster generation with the 1.3B model
  • Use sequential_cpu_offload to reduce GPU memory usage with the 14B model (see the sketch after this list)
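These two options mirror the standard memory-offload modes found in diffusers-style pipelines. The sketch below is only illustrative: load_wan_fun_pipeline is a hypothetical stand-in for however you load the model outside ComfyUI, while the two enable_* calls are the real diffusers pipeline methods that the workflow's offload settings correspond to.

```python
# Illustrative sketch of the two offload strategies (not the workflow's actual code).
# load_wan_fun_pipeline() is a hypothetical helper standing in for your model loader;
# the enable_* methods are standard diffusers pipeline calls that these options mirror.
import torch

pipe = load_wan_fun_pipeline(torch_dtype=torch.bfloat16)  # hypothetical loader

# model_cpu_offload: keep each sub-model on the GPU while it is active and move it
# back to CPU afterwards. Faster, but peak VRAM must still fit the largest component
# (a good match for the 1.3B variant).
pipe.enable_model_cpu_offload()

# sequential_cpu_offload: stream individual layers to the GPU one at a time.
# Much lower peak VRAM but noticeably slower per step (useful for the 14B variant).
# pipe.enable_sequential_cpu_offload()
```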

2 - Enter Prompts


  • Positive Prompt:
    • Drives the motion, detailing, and depth of your restyled video
    • Descriptive, artistic language tends to enhance the final output
  • Negative Prompt:
    • Longer negative prompts such as "Blurring, mutation, deformation, distortion, dark and solid, comics." can increase stability
    • Adding words such as "quiet, solid" to the negative prompt suppresses static scenes and can increase dynamism
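As a concrete illustration of the tips above, here is one possible prompt pair; the wording is purely an example, not a required format.

```python
# Example prompt pair following the tips above (illustrative wording only).
positive_prompt = (
    "A dancer in flowing red silk spins across a rain-slicked rooftop at dusk, "
    "cinematic lighting, rich color grading, smooth dynamic camera motion"
)
negative_prompt = (
    "Blurring, mutation, deformation, distortion, dark and solid, comics, "
    "quiet, solid, static scene"
)
```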

3 - Upload Your Video and Resize


Upload your source video to begin generation, and resize it to a resolution compatible with Wan 2.1 Fun. A minimal resizing sketch follows below.
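If you prefer to prepare the clip outside ComfyUI, the sketch below resizes every frame with OpenCV to 832x480, a common 480p target for the 1.3B model; treat the target size, codec, and file names as assumptions to adjust for your model variant.

```python
# Sketch: resize a source video to a Wan-friendly resolution before uploading.
# The 832x480 target, the mp4v codec, and the file names are assumptions.
import cv2

TARGET_W, TARGET_H = 832, 480

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 16  # fall back to 16 fps if the source has no metadata
writer = cv2.VideoWriter(
    "input_resized.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    fps,
    (TARGET_W, TARGET_H),
)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(cv2.resize(frame, (TARGET_W, TARGET_H), interpolation=cv2.INTER_AREA))

cap.release()
writer.release()
```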

4 - Choose Control Video Preprocessor


The preprocessor node offers three passes (a rough sketch of each follows the list):

  • Depth: generates a spatial depth map from the input
  • Canny: highlights edges and structure using Canny edge detection
  • OpenPose: detects joints and landmarks for pose-based control
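For intuition about what each pass produces, here is a rough standalone sketch of the three extractions on a single frame. It uses OpenCV for Canny, the transformers depth-estimation pipeline for depth, and controlnet_aux's OpenposeDetector for pose; the checkpoints are assumptions, and the workflow's own preprocessor nodes may use different implementations.

```python
# Rough sketch of the three control passes on one frame (outside ComfyUI).
# The checkpoints below are assumptions; the workflow's preprocessor nodes
# may rely on different implementations under the hood.
import cv2
from PIL import Image
from transformers import pipeline
from controlnet_aux import OpenposeDetector

frame = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
image = Image.fromarray(frame)

# Canny: edge map highlighting outlines and structure.
gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
cv2.imwrite("canny.png", cv2.Canny(gray, 100, 200))

# Depth: per-pixel spatial depth estimate.
depth_estimator = pipeline("depth-estimation")  # default checkpoint is an assumption
depth_estimator(image)["depth"].save("depth.png")

# OpenPose: body keypoints / skeleton for pose-based control.
pose_detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose_detector(image).save("pose.png")
```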

5 - Wan Fun Sampler + Save Video


The Wan Fun Sampler node comes preconfigured with sensible default settings, and you can experiment with different configurations if desired. Once rendering is complete, the stylized video is saved automatically.


Acknowledgement

The Wan 2.1 Fun workflow was developed by its original creators, whose contributions to AI-based video generation have made advanced control techniques more accessible and efficient. This workflow leverages the capabilities of the Wan 2.1 Fun models alongside Depth, Canny, and OpenPose passes to provide dynamic and creative control over AI-generated videos. We appreciate their innovative work and thank them for sharing it with the community.
