Loads a UNet model with dynamic offloading for efficient memory management, ideal for AI artists working with large models on limited GPU memory.
The MZ_Flux1UnetLoader_cpuDynOffload node is designed to load a UNet model with dynamic offloading capabilities, allowing for efficient memory management during model execution. This node is particularly useful for AI artists working with large models on systems with limited GPU memory. By dynamically offloading parts of the model to the CPU, it ensures that GPU memory is used optimally, preventing out-of-memory errors and enabling more complex tasks to be handled. The node relies on the Flux1PartialLoad_Patch method to manage the loading and offloading of model blocks, ensuring smooth and efficient model execution.
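To make the mechanism concrete, the sketch below shows the general pattern behind per-block CPU offloading: blocks that are not resident on the GPU live on the CPU and are paged onto the GPU only for their own forward pass. This is an illustrative simplification, not the actual Flux1PartialLoad_Patch implementation, and the block interface is assumed to be a plain module call.

```python
import torch
import torch.nn as nn

def run_with_block_offload(blocks: nn.ModuleList, x: torch.Tensor,
                           gpu_resident: int, device: str = "cuda") -> torch.Tensor:
    """Run a stack of blocks while keeping only the first `gpu_resident`
    blocks permanently on the GPU. The rest stay on the CPU and are moved
    to the GPU just for their forward pass, trading speed for lower VRAM."""
    for i, block in enumerate(blocks):
        resident = i < gpu_resident
        if not resident:
            block.to(device)      # page this block's weights onto the GPU
        x = block(x)              # activations stay on the GPU throughout
        if not resident:
            block.to("cpu")       # page the weights back out to free VRAM
    return x
```

Keeping the most frequently reused blocks resident avoids repeated transfers, which is effectively what the two parameters described below control.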
double_blocks_cuda_size: This parameter specifies the number of double blocks to be loaded onto the GPU at a time. Double blocks are larger and more memory-intensive, so controlling how many of them reside on the GPU at once helps manage GPU memory usage. The value can range from 0 to 16, with a default of 7. Setting this value appropriately can help balance the load between the CPU and GPU, ensuring efficient execution without running into memory issues.
single_blocks_cuda_size: This parameter determines the number of single blocks to be loaded onto the GPU at a time. Single blocks are smaller and less memory-intensive than double blocks. The value can range from 0 to 37, with a default of 7. Adjusting this parameter helps fine-tune memory management, allowing the model to run more smoothly by offloading less critical parts to the CPU when necessary.
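As a rough illustration of what these two knobs decide at load time, the hypothetical helper below keeps the first double_blocks_cuda_size double blocks and the first single_blocks_cuda_size single blocks on the GPU and parks the remainder on the CPU. The helper name and module layout are assumptions for illustration; the real node delegates this placement to Flux1PartialLoad_Patch.

```python
import torch.nn as nn

def place_flux_blocks(double_blocks: nn.ModuleList, single_blocks: nn.ModuleList,
                      double_blocks_cuda_size: int = 7,
                      single_blocks_cuda_size: int = 7,
                      device: str = "cuda") -> None:
    """Hypothetical placement step: blocks below the cutoff stay resident on
    the GPU, everything else waits on the CPU until it is paged in."""
    for i, blk in enumerate(double_blocks):
        blk.to(device if i < double_blocks_cuda_size else "cpu")
    for i, blk in enumerate(single_blocks):
        blk.to(device if i < single_blocks_cuda_size else "cpu")
```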
The output is the loaded UNet model with dynamic offloading capabilities. This model is optimized for efficient memory usage, with parts of it being dynamically offloaded to the CPU as needed. This allows for handling larger models and more complex tasks without running into GPU memory limitations.
Adjust the double_blocks_cuda_size and single_blocks_cuda_size parameters based on your system's GPU memory capacity. Lower values can help prevent out-of-memory errors on systems with limited GPU memory.

If you see an error stating that the ComfyUI_bitsandbytes_NF4 package is not installed, install the ComfyUI_bitsandbytes_NF4 package from the provided GitHub repository to resolve this issue.

If you run into GPU out-of-memory errors, reduce double_blocks_cuda_size and single_blocks_cuda_size to offload more parts of the model to the CPU, thereby freeing up GPU memory.
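If you are unsure which values to start with, one practical approach is to check how much free VRAM is available before loading the model. The sketch below uses PyTorch's torch.cuda.mem_get_info; the thresholds and returned block counts are arbitrary placeholders, not recommendations from the node's author.

```python
import torch

def suggest_cuda_block_sizes() -> tuple[int, int]:
    """Rough, illustrative heuristic: query free VRAM and keep fewer
    double/single blocks resident on the GPU when memory is tight.
    Thresholds and values are placeholders, not official guidance."""
    free_bytes, _total_bytes = torch.cuda.mem_get_info()
    free_gb = free_bytes / 1024**3
    if free_gb >= 16:
        return 7, 7   # the node's defaults
    if free_gb >= 10:
        return 4, 4   # keep fewer blocks resident on the GPU
    return 1, 2       # offload almost everything to the CPU

double_blocks_cuda_size, single_blocks_cuda_size = suggest_cuda_block_sizes()
print(double_blocks_cuda_size, single_blocks_cuda_size)
```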