AnimateDiff integration for ComfyUI adapts from sd-webui-animatediff, requiring the download of either mm_sd_v14.ckpt or mm_sd_v15.ckpt. Place the model weights in the specified directory without altering the filenames.
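As a quick way to fetch a motion module, here is a minimal sketch using huggingface_hub, which hosts the official AnimateDiff weights under guoyww/animatediff. The target directory is an assumption (the extension's models folder under custom_nodes), since this summary doesn't name it explicitly; adjust the paths to your own install.

```python
# Hypothetical helper for fetching a motion module. The destination folder below is
# an assumption about the extension's layout, not something stated on this page.
from pathlib import Path
import shutil

from huggingface_hub import hf_hub_download

# Official AnimateDiff motion modules are published under guoyww/animatediff.
ckpt = hf_hub_download(repo_id="guoyww/animatediff", filename="mm_sd_v14.ckpt")

# Assumed install location -- adjust to wherever your ComfyUI checkout lives.
target_dir = Path("ComfyUI/custom_nodes/comfyui-animatediff/models")
target_dir.mkdir(parents=True, exist_ok=True)

# Keep the original filename; the extension expects it unchanged.
shutil.copy(ckpt, target_dir / "mm_sd_v14.ckpt")
```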
The comfyui-animatediff extension integrates the powerful AnimateDiff technology into ComfyUI, allowing AI artists to create stunning animations from text prompts or images. This extension adapts from sd-webui-animatediff and provides a seamless way to generate animated content without needing extensive technical knowledge. With comfyui-animatediff, you can transform your creative ideas into dynamic animations, making it an invaluable tool for digital artists, animators, and content creators.
At its core, comfyui-animatediff leverages motion modules to inject movement into static images generated by AI models. Think of it as adding a layer of animation on top of your AI-generated images. The process involves several key steps, each handled by a dedicated node: load a motion module, sample the animation frames, and combine those frames into the final output. comfyui-animatediff simplifies the creation of complex animations, making it accessible even to those with minimal technical background.

AnimateDiffLoader: This node loads the motion module required for generating animations. It is the first step in the animation workflow.
AnimateDiffSampler: Similar to the KSampler, this node handles the sampling of frames (a wiring sketch follows the AnimateDiffCombine description below). Key settings include:
- motion_module: Specifies the motion module to use.
- frame_number: Determines the length of the animation.
- latent_image: Allows passing an EmptyLatentImage for sampling.
- sliding_window_opts: Customizes sliding window options for generating longer animations.
AnimateDiffCombine: This node combines the sampled frames into a final animation. Key settings include:
- frame_rate: Sets the number of frames per second.
- loop_count: Determines how many times the animation should loop (use 0 for an infinite loop).
- save_image: Decides whether to save the animation to disk.
- format: Supports various formats like GIF, WebP, WebM, and MP4.
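To show how these nodes fit together, here is a minimal sketch of a text-to-animation graph in ComfyUI's API (JSON) prompt format, queued through the server's /prompt endpoint. The AnimateDiff class names and input names are assumptions inferred from the settings described above rather than copied from this page, so verify them against the node definitions in your own installation.

```python
# Minimal sketch of an AnimateDiff graph in ComfyUI's API (JSON) prompt format.
# The AnimateDiff class names and input names below are assumptions inferred from
# the settings described above; verify them against your own node definitions.
import json
import urllib.request

prompt = {
    # Standard ComfyUI nodes for the base Stable Diffusion pipeline.
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.ckpt"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "a corgi running on the beach"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    # AnimateDiff nodes (assumed names).
    "5": {"class_type": "AnimateDiffLoader",
          "inputs": {"model_name": "mm_sd_v14.ckpt"}},
    "6": {"class_type": "AnimateDiffSampler",
          "inputs": {"motion_module": ["5", 0],   # motion module from the loader
                     "model": ["1", 0],
                     "positive": ["2", 0],
                     "negative": ["3", 0],
                     "latent_image": ["4", 0],    # EmptyLatentImage for sampling
                     "frame_number": 16,          # length of the animation
                     # sliding_window_opts is optional; see the sliding-window notes below
                     "seed": 42, "steps": 20, "cfg": 7.5,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "AnimateDiffCombine",
          "inputs": {"images": ["7", 0],
                     "frame_rate": 8,     # frames per second
                     "loop_count": 0,     # 0 = infinite loop
                     "save_image": True,
                     "format": "image/gif"}},
}

# Queue the graph on a locally running ComfyUI server (default port 8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": prompt}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))
```

The graph mirrors a standard KSampler pipeline: the sampler takes the motion module and an EmptyLatentImage, and the combine node turns the decoded frames into a GIF.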
Sliding Window Options: This node customizes the sliding window feature, which is useful for generating long animations without frame-length limits (a conceptual sketch of the slicing follows these node descriptions). Key settings include:
- context_length: Number of frames per window.
- context_stride: Sampling strategy for frames.
- context_overlap: Overlap between window slices.
- closed_loop: Creates a closed-loop animation.

Load Video: This node allows loading existing GIFs or videos as input images, which can be useful for using them as ControlNet inputs.
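To make the sliding window settings above more concrete, the toy function below slices a long animation into overlapping windows of frame indices. It is a conceptual illustration only, not the extension's actual scheduler; the function name and defaults are made up for this example, and context_stride is omitted for simplicity.

```python
# Conceptual illustration of sliding-window slicing -- NOT the extension's actual
# scheduling code. It shows how context_length and context_overlap split a long
# animation into overlapping chunks that get sampled one window at a time.
def sliding_windows(frame_number, context_length=16, context_overlap=4, closed_loop=False):
    """Return one list of frame indices per window."""
    if frame_number <= context_length:
        return [list(range(frame_number))]
    step = context_length - context_overlap
    windows, start = [], 0
    while True:
        end = start + context_length
        if end >= frame_number:
            if closed_loop:
                # Wrap around so the last window reconnects with the first frames.
                windows.append([i % frame_number for i in range(start, end)])
            else:
                # Shift the final window back so it still holds context_length frames.
                windows.append(list(range(frame_number - context_length, frame_number)))
            return windows
        windows.append(list(range(start, end)))
        start += step

# A 32-frame animation sampled in 16-frame windows overlapping by 4 frames:
for w in sliding_windows(32):
    print(w[0], "->", w[-1])   # prints 0 -> 15, 12 -> 27, 16 -> 31
```

With closed_loop=True the final window in this sketch wraps back to frame 0, which is what lets the finished animation loop seamlessly.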
The extension supports various models, each suited for different types of animations.
Motion LoRAs go in the comfyui-animatediff/loras/ folder. Note that LoRAs only work with AnimateDiff v2.
- New Node: AnimateDiffLoraLoader: This node loads the motion LoRAs for use in animations.

This is a known issue with xformers. The current workaround is to disable xformers by adding --disable-xformers when starting ComfyUI.
If your GIF is splitting into multiple scenes, try the following:
- Disable xformers with --disable-xformers.

If a Shutterstock-style watermark appears in your output, this is due to the training data containing Shutterstock watermarks. Try using other community-finetuned modules to avoid this problem.
Additional resources, tutorials, and community support are available to help you get the most out of comfyui-animatediff and enhance your animation projects.