ComfyUI-AnimateAnyone-Evolved enhances AnimateAnyone, using pose image sequences and reference images to create stylized videos. It targets pose-to-video generation at 1+ FPS on GPUs such as the RTX 3080 or better.
ComfyUI-AnimateAnyone-Evolved is an advanced extension designed to transform static images into dynamic, stylized videos. This tool leverages the power of AI to animate characters and scenes based on pose image sequences and reference images. It is particularly useful for AI artists looking to bring their creations to life with minimal effort. The extension aims to achieve a frame rate of 1+ FPS on GPUs equivalent to or better than the RTX 3080, making it practical for users with high-end hardware.
ComfyUI-AnimateAnyone-Evolved works by taking a sequence of pose images and a reference image to generate a video that animates the reference image according to the poses. Think of it as a digital puppeteer: the pose images act as the strings that control the movements, while the reference image is the puppet. The extension uses various AI models and algorithms to ensure that the animation is smooth and visually appealing.
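To make that data flow concrete, here is a minimal Python sketch of the two inputs and the role each plays. The `animate` function is a hypothetical placeholder standing in for the extension's sampling node, not its actual API:

```python
from pathlib import Path
from PIL import Image

# The "puppet": a single reference image that defines the character's appearance.
reference = Image.open("character.png").convert("RGB")

# The "strings": an ordered sequence of pose frames (e.g. OpenPose renders).
pose_frames = [
    Image.open(p).convert("RGB")
    for p in sorted(Path("poses").glob("*.png"))
]

# Hypothetical stand-in for the extension's sampling step: it would return
# one output frame per pose frame, each showing the reference character
# re-posed to match the corresponding pose image.
def animate(reference, pose_frames):
    raise NotImplementedError("placeholder for the AnimateAnyone sampling node")

# video_frames = animate(reference, pose_frames)
```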
The extension supports various samplers and schedulers, each offering a different performance/quality trade-off. Benchmarks on an RTX 3080 (per-frame costs are worked out in the sketch after this list):
- DDIM: 24 frames, steps=20, context_frames=24; takes 835.67 seconds.
- DDIM: 24 frames, steps=20, context_frames=12; takes 425.65 seconds.
- DPM++ 2M Karras: 24 frames, steps=20, context_frames=12; takes 407.48 seconds.
- LCM: 24 frames, steps=20, context_frames=24; takes 606.56 seconds.
- Euler: 24 frames, steps=20, context_frames=12; takes 450.66 seconds.
- Euler Ancestral, LMS, and PNDM are also supported (no timings listed).
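Because every benchmark above renders the same 24 frames at steps=20, the timings normalize directly to seconds per frame. A small Python script using only the numbers quoted above:

```python
# Benchmark timings quoted above: (sampler, context_frames) -> seconds
# for 24 frames at steps=20 on an RTX 3080.
benchmarks = {
    ("DDIM", 24): 835.67,
    ("DDIM", 12): 425.65,
    ("DPM++ 2M Karras", 12): 407.48,
    ("LCM", 24): 606.56,
    ("Euler", 12): 450.66,
}

FRAMES = 24
for (sampler, ctx), seconds in sorted(benchmarks.items(), key=lambda kv: kv[1]):
    print(f"{sampler:<16} context_frames={ctx:>2}: "
          f"{seconds / FRAMES:6.2f} s/frame ({FRAMES / seconds:.3f} FPS)")
```

Of the configurations listed, DPM++ 2M Karras with context_frames=12 is the fastest at roughly 17 seconds per frame.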
You can add Lora models to enhance the animation quality. This feature allows for more detailed and customized animations.
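As background on why a Lora can refine results: a LoRA adds a small low-rank update to a frozen weight matrix, W' = W + α·B·A, so only the tiny adapter is swapped in while the base model stays untouched. A minimal numpy illustration with arbitrary shapes (not the extension's internals):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 320, 320, 8          # layer size and LoRA rank (illustrative values)
W = rng.normal(size=(d, k))    # frozen base weight from the pretrained UNet
A = rng.normal(size=(r, k))    # trained LoRA down-projection
B = np.zeros((d, r))           # trained LoRA up-projection (zero-init before training)
alpha = 0.75                   # LoRA strength / merge weight

# Merging the LoRA into the base layer: only B @ A (rank r) is learned,
# so the adapter is tiny compared to the full weight matrix.
W_merged = W + alpha * (B @ A)
print(W_merged.shape, "extra params:", A.size + B.size, "vs base:", W.size)
```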
The extension can handle long pose image sequences, tested up to 120+ frames on an RTX 3080. The main parameter affecting GPU usage is context_frames, which is independent of the length of the pose image sequence.
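This decoupling is what you would expect from sliding-window denoising: the model only ever processes context_frames poses at a time, so peak memory tracks the window size rather than the total length. A generic sketch of such windowing (the extension's actual scheduling may overlap and blend windows differently):

```python
def context_windows(num_frames: int, context_frames: int, overlap: int = 4):
    """Yield start/end indices of sliding windows over a pose sequence.

    Peak GPU memory scales with `context_frames` (the window size),
    not with `num_frames` (the total sequence length).
    """
    stride = max(context_frames - overlap, 1)
    start = 0
    while start < num_frames:
        end = min(start + context_frames, num_frames)
        yield start, end
        if end == num_frames:
            break
        start += stride

# A 120-frame sequence processed 24 frames at a time:
for s, e in context_windows(120, context_frames=24):
    print(f"denoise frames [{s:3d}, {e:3d})")
```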
The implementation is broken down into modules, making the workflow in ComfyUI closely resemble the original pipeline from the AnimateAnyone paper.
The extension supports various models, each suited for different tasks and performance levels. The required weights should be placed in the pretrained_weights folder with the following layout:

```
./pretrained_weights/
|-- denoising_unet.pth
|-- motion_module.pth
|-- pose_guider.pth
|-- reference_unet.pth
`-- stable-diffusion-v1-5
    |-- feature_extractor
    |   `-- preprocessor_config.json
    |-- model_index.json
    |-- unet
    |   |-- config.json
    |   `-- diffusion_pytorch_model.bin
    `-- v1-inference.yaml
```
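A small helper can verify this layout before launching ComfyUI; the paths are taken directly from the tree above:

```python
from pathlib import Path

ROOT = Path("./pretrained_weights")

# Files taken directly from the expected layout above.
EXPECTED = [
    "denoising_unet.pth",
    "motion_module.pth",
    "pose_guider.pth",
    "reference_unet.pth",
    "stable-diffusion-v1-5/feature_extractor/preprocessor_config.json",
    "stable-diffusion-v1-5/model_index.json",
    "stable-diffusion-v1-5/unet/config.json",
    "stable-diffusion-v1-5/unet/diffusion_pytorch_model.bin",
    "stable-diffusion-v1-5/v1-inference.yaml",
]

missing = [rel for rel in EXPECTED if not (ROOT / rel).exists()]
if missing:
    print("Missing weight files:")
    for rel in missing:
        print("  -", rel)
else:
    print("All expected weights are in place.")
```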
If you run into GPU memory limits, reduce the context_frames parameter. If models fail to load, confirm that all weights are present in the pretrained_weights folder.