Optimize GPU memory usage by clearing the VRAM cache and unloading idle models, keeping performance smooth with large models and datasets.
The LayerUtility: PurgeVRAM node helps you manage and optimize GPU memory by purging Video RAM (VRAM). It is particularly useful when large models or datasets quickly consume the available VRAM, leading to slowdowns or crashes. The node clears the VRAM cache and unloads models that are no longer needed, keeping your system running smoothly and efficiently. This is especially beneficial for AI artists who run resource-intensive workflows and need to maintain performance without delving into technical details.
This input accepts any type of data and serves only as a trigger for the VRAM purge. It does not affect the node's execution or results, but it must be connected to initiate the purge.
This boolean parameter, purge_cache, determines whether the VRAM cache should be purged. When set to True, the node clears the VRAM cache, freeing memory previously occupied by cached data and making more VRAM available for new tasks. The default value is True.
This boolean parameter, purge_models, controls whether all loaded models should be unloaded from VRAM. When set to True, the node unloads every loaded model, freeing the memory it occupied. This is useful before loading new models or datasets, when you want to ensure enough VRAM is available. The default value is True.
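As a rough illustration of how these two boolean parameters interact, here is a minimal Python sketch. The function name and the injected callbacks (empty_cache_fn, unload_models_fn) are assumptions standing in for backend calls such as torch.cuda.empty_cache() and ComfyUI's model unloading, so the sketch runs without a GPU; it is not the node's actual source.

```python
import gc

def purge_vram(anything, purge_cache=True, purge_models=True,
               empty_cache_fn=None, unload_models_fn=None):
    """Hypothetical sketch of a VRAM purge (not the node's real implementation).

    empty_cache_fn and unload_models_fn are stand-ins for backend calls
    (e.g. torch.cuda.empty_cache() and ComfyUI's model unloading); they are
    injected here so the sketch can run and be tested without a GPU.
    """
    if purge_models and unload_models_fn is not None:
        unload_models_fn()          # drop all loaded models from VRAM
    if purge_cache:
        gc.collect()                # release Python-side references first
        if empty_cache_fn is not None:
            empty_cache_fn()        # then return cached VRAM blocks to the driver
    # the node produces no outputs; 'anything' only triggers execution
```

Injecting the backend calls keeps the purge logic testable; in a real ComfyUI environment they would be the CUDA and model-management functions themselves.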
This node does not produce any output parameters. Its primary function is to manage and optimize VRAM usage by purging the cache and unloading models, rather than generating data or results.
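For context, a purge node like this is typically placed between two heavy stages of a workflow. The fragment below sketches how it might appear in ComfyUI's API (prompt) format; the node id, the input name "anything", and the upstream link are hypothetical:

```python
# Hypothetical fragment of a ComfyUI API-format workflow dict.
# Node ids and the upstream connection are made up for illustration.
workflow = {
    "7": {
        "class_type": "LayerUtility: PurgeVRAM",
        "inputs": {
            "anything": ["5", 0],   # any upstream output, used only as a trigger
            "purge_cache": True,    # clear the VRAM cache
            "purge_models": True,   # unload all loaded models
        },
    },
}
```

Because the node's only role is its side effect of freeing VRAM, the trigger input just sequences it after the stage whose memory you want reclaimed.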
- Set the purge_cache parameter to True when you notice your system slowing down due to high VRAM usage; clearing the cache frees memory for new tasks.
- Set the purge_models parameter to True before loading new models or datasets to ensure there is enough available VRAM; this can prevent crashes and improve performance.
- If a task fails because VRAM runs out, use the LayerUtility: PurgeVRAM node with purge_cache and purge_models set to True to free up VRAM before running the task again.
- If you still run out of memory after using the LayerUtility: PurgeVRAM node, consider reducing the size of the models or datasets you are working with.

© Copyright 2024 RunComfy. All Rights Reserved.