
ComfyUI Extension: AnimateDiff Evolved

Repo Name: ComfyUI-AnimateDiff-Evolved
Author: Kosinkadink (account age: 3712 days)
Nodes: 95
Last Updated: 2024-08-16
GitHub Stars: 2.49K

How to Install AnimateDiff Evolved

Install this extension via the ComfyUI Manager by searching for AnimateDiff Evolved:

  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter AnimateDiff Evolved in the search bar.

After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and access the updated list of nodes.


AnimateDiff Evolved Description

AnimateDiff Evolved enhances ComfyUI by integrating improved motion models from sd-webui-animatediff. Users can download and use original or finetuned models, placing them in the specified directory for seamless workflow sharing.

AnimateDiff Evolved Introduction

ComfyUI-AnimateDiff-Evolved is an advanced extension for ComfyUI that integrates the capabilities of AnimateDiff, a tool designed to generate animations from text-to-image diffusion models. The extension enhances the original AnimateDiff with evolved sampling options and advanced features, making it a powerful option for AI artists who want to create dynamic and engaging animations. Whether you want to animate still images, build complex motion sequences, or experiment with new animation techniques, ComfyUI-AnimateDiff-Evolved provides the tools to bring your creative visions to life.

How AnimateDiff Evolved Works

At its core, ComfyUI-AnimateDiff-Evolved works by leveraging motion modules and advanced sampling techniques to generate animations from text prompts. The extension uses a combination of Stable Diffusion models and motion modules to create coherent and visually appealing animations. By integrating with ComfyUI, it allows users to easily manage and customize their animation workflows through a user-friendly interface.

The process involves several key steps (a conceptual sketch follows the list):

  1. Text Prompt Input: Users provide text prompts that describe the desired animation.
  2. Model Selection: The extension uses pre-trained motion modules and Stable Diffusion models to interpret the text prompts.
  3. Animation Generation: The system generates a sequence of images that form the animation, applying motion and effects as specified by the user.
  4. Customization: Users can fine-tune the animation by adjusting various parameters, such as motion scale, effect strength, and context options.
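To make the flow concrete, here is a minimal, purely illustrative Python sketch of these four steps. Every name in it (AnimationSettings, generate_animation, the placeholder checkpoint string) is a hypothetical stand-in rather than the extension's actual API; in ComfyUI the same flow is assembled from nodes in the graph instead.

```python
# Purely illustrative sketch of the AnimateDiff-style flow described above.
# None of these functions are the extension's real API; they are hypothetical
# stand-ins used to show how the pieces relate.

from dataclasses import dataclass


@dataclass
class AnimationSettings:
    frame_count: int = 16      # number of frames to generate
    motion_scale: float = 1.0  # how strongly the motion module influences frames
    effect_strength: float = 1.0


def generate_animation(prompt: str, settings: AnimationSettings) -> list[str]:
    """Toy stand-in for the steps: prompt -> models -> frames -> tuning."""
    # 1. Text prompt input: the user describes the desired animation.
    # 2. Model selection: a Stable Diffusion checkpoint plus a motion module
    #    would be loaded here (represented only by their names in this toy).
    checkpoint = "sd15_checkpoint"   # hypothetical placeholder
    motion_module = "mm_sd_v15_v2"   # one of the motion models listed later in this article

    # 3. Animation generation: produce one frame description per frame index.
    frames = [
        f"{checkpoint}+{motion_module}: frame {i} for '{prompt}' "
        f"(motion_scale={settings.motion_scale})"
        for i in range(settings.frame_count)
    ]

    # 4. Customization: parameters such as motion_scale or effect_strength
    #    could be adjusted and the sequence re-sampled.
    return frames


if __name__ == "__main__":
    for frame in generate_animation("a lighthouse at sunset", AnimationSettings(frame_count=4)):
        print(frame)
```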

AnimateDiff Evolved Features

Key Features

  • Compatibility with KSampler Nodes: Works seamlessly with vanilla and custom KSampler nodes.
  • ControlNet and IPAdapter Support: Integrates with ControlNet and IPAdapter for enhanced control over animation parameters.
  • Infinite Animation Length: Supports sliding context windows for creating animations of any length (see the window-arithmetic sketch after this list).
  • Advanced Sampling Options: Includes FreeInit and FreeNoise for improved sampling quality.
  • Motion LoRAs: Allows the use of Motion LoRAs to influence movement in animations.
  • Prompt Travel: Supports prompt travel using BatchPromptSchedule nodes.
  • Custom Noise Scheduling: Offers custom noise scheduling options for more control over the animation process.
  • Multiple Motion Models: Supports using multiple motion models simultaneously for complex animations.
  • HotshotXL and AnimateDiff-SDXL Support: Compatible with advanced motion modules like HotshotXL and AnimateDiff-SDXL.
  • AnimateLCM and PIA Support: Integrates with AnimateLCM and PIA for additional animation capabilities.
  • Keyframe Scheduling: Allows scheduling of motion parameters across different points in the animation.
  • Mac M1/M2/M3 Support: Fully compatible with Mac M1, M2, and M3 systems.
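Sliding context windows are what make the "Infinite Animation Length" feature possible: sampling only ever looks at a fixed-size window of frames, and consecutive windows overlap so motion stays coherent across the seams. The sketch below shows only the window arithmetic; the window length and overlap values are illustrative, not the extension's defaults.

```python
def sliding_context_windows(total_frames: int, context_length: int, overlap: int) -> list[list[int]]:
    """Return overlapping lists of frame indices, as used conceptually by
    sliding-context sampling. The numbers are illustrative only."""
    if context_length >= total_frames:
        return [list(range(total_frames))]

    stride = context_length - overlap
    windows = []
    start = 0
    while start + context_length < total_frames:
        windows.append(list(range(start, start + context_length)))
        start += stride
    # Pin the final window to the end so every frame is covered.
    windows.append(list(range(total_frames - context_length, total_frames)))
    return windows


# Example: a 40-frame animation sampled in 16-frame windows with 4 frames of overlap.
for window in sliding_context_windows(40, 16, 4):
    print(window[0], "...", window[-1])
```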

Customization Options

  • Scale and Effect Multival Inputs: Control the amount of motion and the influence of motion models using float values, lists, or masks (a small sketch of the per-frame idea follows this list).
  • Context and View Options: Extend animation lengths and manage VRAM usage with context and view options.
  • Sample Settings: Customize the sampling process with options like FreeNoise and FreeInit.
  • Iteration Options: Re-sample latents without chaining multiple KSamplers.
  • Noise Layers: Add, weight, or replace initial noise layers for more control over the animation's appearance.
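As a rough illustration of the multival idea, the snippet below expands a single float or a per-frame list into one scale value per frame. It is a conceptual sketch only (the function resolve_scale is invented for this example), not the extension's internal handling of masks or node inputs.

```python
# Illustrative only: how a "multival" scale input could vary motion strength
# per frame. A single float applies uniformly; a list gives one value per frame.

def resolve_scale(scale, frame_count: int) -> list[float]:
    """Expand a float or per-frame list into one scale value per frame."""
    if isinstance(scale, (int, float)):
        return [float(scale)] * frame_count
    if len(scale) != frame_count:
        raise ValueError("per-frame scale list must match the frame count")
    return [float(v) for v in scale]


# Uniform motion strength for 8 frames:
print(resolve_scale(1.0, 8))
# Ramp the motion strength up over 8 frames:
print(resolve_scale([0.2, 0.4, 0.6, 0.8, 1.0, 1.0, 1.0, 1.0], 8))
```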

AnimateDiff Evolved Models

ComfyUI-AnimateDiff-Evolved supports various motion models, each designed for different types of animations. Here are some of the available models:

Original Models

  • mm_sd_v14
  • mm_sd_v15
  • mm_sd_v15_v2
  • v3_sd15_mm

Stabilized Finetunes

  • mm-Stabilized_mid
  • mm-Stabilized_high

Higher Resolution Finetune

  • temporaldiff-v1-animatediff

FP16/Safetensor Versions

  • FP16 versions of vanilla motion models

Motion LoRAs

  • Motion LoRAs for v2-based motion models

These models can be downloaded from sources such as HuggingFace, Google Drive, and CivitAI, and placed in specific directories within the ComfyUI setup for easy access.
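If you want to verify that downloaded files ended up somewhere the loader can see, a small check like the one below can help. It assumes the default folders described in the project README (custom_nodes/ComfyUI-AnimateDiff-Evolved/models for motion models and custom_nodes/ComfyUI-AnimateDiff-Evolved/motion_lora for Motion LoRAs); adjust the paths if your install or extra_model_paths.yaml says otherwise.

```python
# Quick check that downloaded motion models are where the loader expects them.
# Paths assume the default locations from the project README; adjust as needed.

from pathlib import Path

comfyui_root = Path("ComfyUI")  # path to your ComfyUI installation
model_dirs = {
    "motion models": comfyui_root / "custom_nodes" / "ComfyUI-AnimateDiff-Evolved" / "models",
    "motion LoRAs": comfyui_root / "custom_nodes" / "ComfyUI-AnimateDiff-Evolved" / "motion_lora",
}

for label, directory in model_dirs.items():
    if directory.exists():
        files = sorted(
            p.name for p in directory.iterdir() if p.suffix in {".ckpt", ".safetensors"}
        )
    else:
        files = []
    print(f"{label}: {len(files)} file(s) in {directory}")
    for name in files:
        print(f"  - {name}")
```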

Troubleshooting AnimateDiff Evolved

Common Issues and Solutions

  1. Visible Watermarks: Some motion models may produce visible watermarks. Using different motion modules or combinations can help alleviate this issue.
  2. VRAM Usage: High VRAM usage can be managed by adjusting context and view options.
  3. Sampling Speed: If sampling is slow, consider reducing the number of iterations or using FreeNoise for faster results.

Frequently Asked Questions

  • How do I install ComfyUI-AnimateDiff-Evolved?
    Follow the installation instructions in the ComfyUI Manager, or clone the repository manually into the custom_nodes folder.
  • Can I use custom models with this extension?
    Yes. Place them in the appropriate directories and update the extra_model_paths.yaml file if needed; a small sketch for inspecting that file follows this list.
  • What are Motion LoRAs and how do I use them?
    Motion LoRAs are specialized models that influence the movement in animations. Place them in the designated directories and use them with v2-based motion models.
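If you rely on extra_model_paths.yaml for custom model locations, a quick way to double-check what ComfyUI will actually read is to print the file back. The snippet assumes PyYAML is installed and that the file sits in your ComfyUI root; the keys it prints are whatever you configured, so none are asserted here.

```python
# Inspect extra_model_paths.yaml to confirm which extra folders ComfyUI will
# scan for models. Requires PyYAML (pip install pyyaml).

from pathlib import Path
import yaml

config_path = Path("ComfyUI") / "extra_model_paths.yaml"  # adjust to your install

if config_path.exists():
    config = yaml.safe_load(config_path.read_text()) or {}
    for section, entries in config.items():
        print(f"[{section}]")
        if isinstance(entries, dict):
            for key, value in entries.items():
                print(f"  {key}: {value}")
        else:
            print(f"  {entries}")
else:
    print(f"No {config_path} found; ComfyUI will use its default model folders.")
```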

Learn More about AnimateDiff Evolved

To learn more about ComfyUI-AnimateDiff-Evolved and explore its full potential, refer to the project's GitHub repository (Kosinkadink/ComfyUI-AnimateDiff-Evolved), which covers the available nodes, supported models, and example workflows in more depth.

