ComfyUI Extension: ComfyUI-MultiGPU

Repo Name

ComfyUI-MultiGPU

Author
pollockjj (Account age: 3830 days)
Nodes
12
Last Updated
2025-04-17
GitHub Stars
0.26K

How to Install ComfyUI-MultiGPU

Install this extension via the ComfyUI Manager by searching for ComfyUI-MultiGPU:
  1. Click the Manager button in the main menu.
  2. Click the Custom Nodes Manager button.
  3. Enter ComfyUI-MultiGPU in the search bar and install it.
After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.


ComfyUI-MultiGPU Description

ComfyUI-MultiGPU enhances ComfyUI by enabling CUDA device selection for loader nodes, allowing model components such as the UNet, CLIP, or VAE to be assigned to specific GPUs. It supports multi-GPU workflows for SDXL, FLUX, LTXVideo, and Hunyuan Video.

ComfyUI-MultiGPU Introduction

ComfyUI-MultiGPU is an innovative extension designed to optimize the use of your computer's graphics processing units (GPUs) and central processing unit (CPU) when working with AI models. This extension is particularly beneficial for AI artists who work with complex models that require significant computational resources. By intelligently managing memory and distributing workloads across multiple GPUs or between a GPU and the CPU, ComfyUI-MultiGPU helps free up your primary GPU's VRAM (Video Random Access Memory). This allows you to maximize the available resources for the actual computation tasks that matter most, such as processing in the latent space of AI models.

How ComfyUI-MultiGPU Works

At its core, ComfyUI-MultiGPU enhances memory management rather than parallel processing. This means that while the steps in your workflow still execute one after the other, the extension allows different components of your models to be loaded onto different devices. For example, parts of a model can be offloaded to system RAM or a secondary GPU, freeing up your main GPU for more intensive tasks. This is particularly useful when working with large models that might otherwise exceed the VRAM capacity of a single GPU.
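The device-assignment idea can be illustrated with plain PyTorch. This is only an illustrative sketch, not the extension's actual code; the component modules below are hypothetical stand-ins for a real pipeline's UNet, CLIP, and VAE:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for a pipeline's components.
unet = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
clip = nn.Linear(64, 32)
vae = nn.Linear(64, 64)

# Keep the UNet on the main GPU if one is present; push CLIP and VAE
# to a secondary GPU (or the CPU) to free main-GPU VRAM.
main = "cuda:0" if torch.cuda.is_available() else "cpu"
offload = "cuda:1" if torch.cuda.device_count() > 1 else "cpu"

unet.to(main)
clip.to(offload)
vae.to(offload)

# Each component runs on its own device; inputs must be moved to match.
x = torch.randn(1, 64)
latent = unet(x.to(main))
text_emb = clip(x.to(offload))
```

The steps still run one after another, as the section above notes; only the memory lives on different devices.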

Imagine your computer as a kitchen where cooking (computation) happens. If your main GPU is the chef, ComfyUI-MultiGPU acts like a smart kitchen assistant, ensuring that the chef has enough space and resources to work efficiently by moving ingredients (model components) to different parts of the kitchen (other GPUs or RAM) as needed.

ComfyUI-MultiGPU Features

  1. DisTorch Virtual VRAM for UNet Loaders: Moves UNet layers off your main GPU, automatically distributing them to system RAM or other GPUs. A single setting controls how much VRAM to free, making it easy to adjust to your needs.
  2. CLIP Offloading: Offers two approaches to offloading CLIP models:
  • MultiGPU CLIP: Fully offloads CLIP models to the CPU or a secondary GPU, supporting various CLIP configurations.
  • DisTorch Virtual VRAM CLIP: Distributes layers of LLM-based CLIP models while keeping computation on the main GPU for faster processing.
  3. MultiGPU VAE: Moves VAE processing to the CPU or a secondary GPU, giving you flexibility in how you manage your resources.

These features work together to keep as much of your main GPU's VRAM free as possible for the most critical computation tasks.
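One way to picture how these features combine is as a per-component device map. The names and values here are illustrative only, not the extension's actual node identifiers or parameters:

```python
# Hypothetical device map for one workflow run.
device_map = {
    "unet": "cuda:0",   # stays on the main GPU; DisTorch spills layers as needed
    "clip": "cpu",      # MultiGPU CLIP: fully offloaded to the CPU
    "vae":  "cuda:1",   # MultiGPU VAE: runs on a secondary GPU
}

def components_on(dmap, device="cuda:0"):
    """Return which components occupy the given device under this map."""
    return [name for name, dev in dmap.items() if dev == device]

main_gpu_load = components_on(device_map)  # only the UNet remains on cuda:0
```

Under a map like this, the main GPU holds only the UNet, which is exactly the goal described above.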

ComfyUI-MultiGPU Models

ComfyUI-MultiGPU supports a variety of models, including GGUF-quantized models, which are optimized for reduced VRAM usage. This makes it possible to run complex models on systems with limited resources. The extension automatically creates MultiGPU versions of loader nodes, allowing you to specify which GPU to use for each model component.
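To see why GGUF quantization reduces VRAM usage, compare approximate weight sizes at different bit widths. This is back-of-the-envelope arithmetic with made-up model sizes; actual GGUF formats and overheads vary:

```python
def weights_gb(n_params_billions, bits_per_weight):
    """Approximate weight storage in GB for a model of the given size."""
    return n_params_billions * 1e9 * bits_per_weight / 8 / 1e9

fp16 = weights_gb(12, 16)   # a hypothetical 12B-parameter model at FP16
q4 = weights_gb(12, 4.5)    # the same model at a rough Q4-style width
# fp16 is 24.0 GB; q4 is 6.75 GB -- under a third of the FP16 footprint.
```

This is why a quantized model that would not fit on a single consumer GPU at FP16 can become workable, especially when combined with the layer distribution described above.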

What's New with ComfyUI-MultiGPU

The latest update, DisTorch 2.0, introduces a simplified Virtual VRAM control system. This new feature allows you to offload model layers from your GPU with minimal configuration. You simply set the amount of VRAM you want to free up, and DisTorch takes care of the rest. This update makes it easier than ever to manage your system's resources and run larger models efficiently.
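The "set how much VRAM to free" idea can be sketched as simple planning arithmetic. The greedy largest-first strategy and the layer sizes below are assumptions for illustration, not DisTorch's actual algorithm:

```python
def plan_offload(layer_sizes_gb, vram_to_free_gb):
    """Greedily pick layers (largest first) to move off the main GPU
    until at least the requested amount of VRAM has been freed."""
    offloaded, freed = [], 0.0
    for idx, size in sorted(enumerate(layer_sizes_gb), key=lambda p: -p[1]):
        if freed >= vram_to_free_gb:
            break
        offloaded.append(idx)
        freed += size
    return offloaded, freed

# Example: hypothetical layer sizes in GB; ask to free 4 GB.
layers = [1.5, 2.0, 0.5, 3.0, 1.0, 2.0]
picked, freed = plan_offload(layers, 4.0)
# picked == [3, 1]: the 3.0 GB and 2.0 GB layers, freeing 5.0 GB total.
```

From the user's side, the single Virtual VRAM setting plays the role of `vram_to_free_gb` here; the extension decides which layers move.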

Troubleshooting ComfyUI-MultiGPU

If you encounter issues while using ComfyUI-MultiGPU, here are some common problems and solutions:

  • Problem: The system crashes when using the extension.
    Solution: Ensure that you are not overloading your system's resources. Try reducing the amount of VRAM you are attempting to free up, or distribute the workload differently across your devices.
  • Problem: Models are not loading correctly.
    Solution: Check that all required models and dependencies are installed correctly, and that your system meets the necessary hardware requirements.

For more detailed troubleshooting, consider visiting community forums or the extension's issue tracker on GitHub.

Learn More about ComfyUI-MultiGPU

To further explore the capabilities of ComfyUI-MultiGPU, you can access additional resources such as tutorials and community forums. These platforms provide valuable insights and support from other AI artists and developers who use the extension. Engaging with the community can help you discover new ways to optimize your workflows and make the most of your computational resources.

RunComfy
Copyright 2025 RunComfy. All Rights Reserved.
