Monitor and manage Video RAM usage for AI models and image processing tasks, optimizing system performance.
The VRAM_Debug node is designed to help you monitor and manage the Video RAM (VRAM) usage of your system, particularly when working with AI models and image-processing tasks. It reports the amount of free VRAM before and after executing specific memory-management actions: garbage collection, emptying the cache, and unloading all models. By using this node, you can optimize your system's performance, prevent out-of-memory errors, and ensure that your AI models run smoothly. The primary goal of the VRAM_Debug node is to give you insight into your system's memory usage and help you make informed decisions about memory management.
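The node's behavior can be sketched in plain Python. This is a minimal illustration, not the node's actual implementation: it assumes PyTorch's CUDA API for querying free VRAM, and `comfy.model_management.unload_all_models` (ComfyUI's model manager, only available inside ComfyUI); both imports are guarded so the sketch degrades gracefully elsewhere.

```python
import gc

def vram_debug(gc_collect=False, empty_cache=False, unload_all_models=False):
    """Report free VRAM before and after optional cleanup actions.

    Returns (freemem_before, freemem_after) in bytes, or (None, None)
    when no CUDA device is available.
    """
    def free_vram():
        try:
            import torch
            if torch.cuda.is_available():
                free, _total = torch.cuda.mem_get_info()
                return free
        except ImportError:
            pass
        return None

    freemem_before = free_vram()
    if unload_all_models:
        try:
            # ComfyUI helper; only present when running inside ComfyUI.
            import comfy.model_management
            comfy.model_management.unload_all_models()
        except ImportError:
            pass
    if gc_collect:
        gc.collect()  # reclaim unreferenced Python objects
    if empty_cache:
        try:
            import torch
            if torch.cuda.is_available():
                torch.cuda.empty_cache()  # release cached CUDA allocations
        except ImportError:
            pass
    freemem_after = free_vram()
    return freemem_before, freemem_after

print(vram_debug(gc_collect=True))
```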
gc_collect: This parameter determines whether to perform garbage collection to free up memory. Garbage collection reclaims memory that is no longer in use by the program. Setting this parameter to True triggers garbage collection, which can help free up additional VRAM. The default value is False.
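Python's garbage collector exists mainly to reclaim reference cycles, which plain reference counting cannot free. A quick stdlib illustration of what a manual collection does:

```python
import gc

class Node:
    def __init__(self):
        self.ref = None

# Build a reference cycle that reference counting alone cannot reclaim.
a, b = Node(), Node()
a.ref, b.ref = b, a
del a, b  # the two objects keep each other alive via the cycle

collected = gc.collect()  # returns the number of unreachable objects found
print(collected)
```

The same mechanism is what a gc_collect=True run triggers; on a GPU workload it can release Python-side references that were pinning tensors in VRAM.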
empty_cache: This parameter controls whether to empty the cache to free up memory. Emptying the cache releases memory held by cached data that is no longer needed. Setting this parameter to True clears the cache, potentially freeing up significant VRAM. The default value is False.
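Assuming PyTorch as the backend (the source does not name the framework), the reason emptying the cache matters is that PyTorch keeps freed CUDA memory in its own caching allocator rather than returning it to the driver. A guarded sketch:

```python
# Demonstrates the effect of torch.cuda.empty_cache(); skipped when
# torch or a CUDA device is unavailable.
try:
    import torch
    has_cuda = torch.cuda.is_available()
except ImportError:
    has_cuda = False

if has_cuda:
    x = torch.empty(1024, 1024, device="cuda")  # ~4 MB allocation
    del x  # freed, but retained in PyTorch's caching allocator
    cached = torch.cuda.memory_reserved()  # nonzero: still reserved by PyTorch
    torch.cuda.empty_cache()  # return cached blocks to the CUDA driver
    assert torch.cuda.memory_reserved() <= cached
```

Other processes (and tools like nvidia-smi) only see VRAM as free once the cache is returned to the driver, which is why this action can make a visible difference in the node's report.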
unload_all_models: This parameter specifies whether to unload all currently loaded models to free up memory. Unloading models can release a substantial amount of VRAM, especially when you are working with large AI models. Setting this parameter to True unloads all models, making more VRAM available for other tasks. The default value is False.
image_pass: This optional parameter allows you to pass an image through the node unchanged. It is primarily used to maintain the flow of data in a node-based graph. If not provided, it defaults to None.
model_pass: This optional parameter allows you to pass a model through the node. Like image_pass, it helps maintain the flow of data. If not provided, it defaults to None.
any_input: This optional parameter can be used to pass any additional input through the node. It is a flexible parameter that can handle various types of data. If not provided, it defaults to None.
This output parameter provides a user-interface element displaying the amount of free VRAM before and after the memory-management actions. It is presented as a text string in the format "<freemem_before>x<freemem_after>", which lets you quickly see the impact of the actions taken.
This output parameter returns a tuple containing the original inputs (any_input, image_pass, model_pass) along with the amount of free VRAM before and after the memory-management actions. The tuple is structured as (any_input, image_pass, model_pass, freemem_before, freemem_after), providing a comprehensive overview of the memory usage and the effectiveness of the actions taken.
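A small illustration of the two outputs in plain Python (vram_report is a hypothetical helper and the byte counts are invented for the example):

```python
def vram_report(any_input, image_pass, model_pass, freemem_before, freemem_after):
    """Build the UI string and the passthrough tuple described above."""
    ui_text = f"{freemem_before}x{freemem_after}"
    outputs = (any_input, image_pass, model_pass, freemem_before, freemem_after)
    return ui_text, outputs

ui, out = vram_report("latent", None, None, 8123456512, 10737418240)
print(ui)  # prints 8123456512x10737418240
```

A rising number after the "x" separator indicates that the enabled actions actually reclaimed VRAM.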
- Set the gc_collect parameter to True to trigger garbage collection when you notice that your system's memory usage is high and you want to free up additional VRAM.
- Set the empty_cache parameter to True when you need to clear cached data that is no longer needed, especially after running intensive tasks that may have filled up the cache.
- Set the unload_all_models parameter to True when you are done with your current tasks and want to free up a significant amount of VRAM for other processes.
- If you run into out-of-memory errors, try enabling the gc_collect, empty_cache, and unload_all_models parameters. You can also try reducing the batch size or the resolution of the images being processed to lower memory usage.

© Copyright 2024 RunComfy. All Rights Reserved.