Optimize AI models by pruning unnecessary components and adjusting tensor precision, reducing model size and improving performance.
The Model Pruner (mtb) node optimizes and streamlines your AI models by pruning unnecessary components and converting tensor precision. It reduces model size and improves performance by removing redundant data and casting weights to the requested precision. The node operates on individual parts of the model, including the UNet, CLIP, and VAE components, and handles the FP8, FP16, BF16, and FP32 precision formats. Its goal is to make your models faster and more resource-efficient without compromising their performance.
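Conceptually, this kind of pruning works on checkpoint state dicts: keys that are not needed for inference are dropped and the remaining floating-point tensors are cast to a smaller dtype before saving. The sketch below illustrates that general idea with plain PyTorch and safetensors; it is a simplified approximation for orientation, not the node's actual implementation, and the EMA key prefix is an assumption.

```python
# Minimal sketch of "pruning + precision conversion" for a checkpoint.
# Illustrative only; not the Model Pruner (mtb) implementation.
import torch
from safetensors.torch import load_file, save_file

def prune_and_cast(path_in: str, path_out: str, dtype=torch.float16):
    state = load_file(path_in)  # {key: tensor}
    pruned = {}
    for key, tensor in state.items():
        # Drop EMA duplicates (key prefix is an assumption; checkpoints vary).
        if key.startswith("model_ema."):
            continue
        # Cast floating-point weights to the requested precision.
        if tensor.is_floating_point():
            tensor = tensor.to(dtype)
        pruned[key] = tensor
    save_file(pruned, path_out)

prune_and_cast("model.safetensors", "model_fp16.safetensors")
```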
This parameter determines whether the pruned model components should be saved separately. If set to True, each part of the model (UNet, CLIP, VAE) will be saved in its own file. This can be useful for modularity and easier management of model components. The default value is False.
Specifies the directory where the pruned model files will be saved. This should be a valid path on your filesystem where you have write permissions. The default value is an empty string, which means the current working directory will be used.
A boolean parameter that, when set to True, applies fixes to the CLIP component of the model. This is useful for correcting known issues with CLIP models. The default value is False.
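A widely reported CLIP checkpoint defect in the Stable Diffusion ecosystem is a position_ids tensor stored with a floating-point dtype instead of int64. Whether this is exactly what fix_clip repairs is an assumption, but a repair of that general kind could look like the sketch below.

```python
import torch

def fix_clip_position_ids(clip_sd: dict) -> dict:
    """Hedged sketch: cast any position_ids tensor back to int64.

    Assumes the defect being fixed is a float-typed position_ids key;
    the actual fix applied by fix_clip may differ.
    """
    fixed = dict(clip_sd)
    for key, tensor in clip_sd.items():
        if key.endswith("position_ids") and tensor.dtype != torch.int64:
            fixed[key] = tensor.round().to(torch.int64)
    return fixed
```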
This parameter indicates whether to remove junk data from the model. Junk data can include unnecessary keys or values that do not contribute to the model's performance. Setting this to True helps further reduce the model size. The default value is False.
Specifies the mode for handling Exponential Moving Average (EMA) in the model. This can be set to different modes depending on how you want to manage EMA weights. The default value is an empty string.
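To make junk removal and EMA handling concrete, the snippet below filters a state dict by key. The junk key names and the "remove" mode name are assumptions chosen for illustration; the node's own patterns and mode names may differ.

```python
# Illustrative key filters; the patterns and EMA modes the node uses may differ.
JUNK_KEYS = {"optimizer_state", "lr_scheduler"}   # hypothetical training leftovers
EMA_PREFIX = "model_ema."                          # common EMA key prefix (assumed)

def strip_keys(state: dict, remove_junk: bool, ema_mode: str) -> dict:
    out = {}
    for key, tensor in state.items():
        if remove_junk and key.split(".")[0] in JUNK_KEYS:
            continue  # drop data that does not contribute to inference
        if ema_mode == "remove" and key.startswith(EMA_PREFIX):
            continue  # drop EMA duplicates of the weights
        out[key] = tensor
    return out
```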
Defines the precision format for the UNet component. Supported values include FP8, FP16, BF16, and FP32. This parameter converts the UNet tensors to the specified precision, optimizing memory usage and computational efficiency.
Defines the precision format for the CLIP component. Supported values include FP8, FP16, BF16, and FP32. This parameter converts the CLIP tensors to the specified precision, optimizing memory usage and computational efficiency.
Defines the precision format for the VAE component. Supported values include FP8, FP16, BF16, and FP32. This parameter converts the VAE tensors to the specified precision, optimizing memory usage and computational efficiency.
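The precision options map directly to PyTorch dtypes. Below is a hedged sketch of how such a per-component cast could be applied for precision_unet, precision_clip, or precision_vae; the exact FP8 variant is an assumption, and FP8 casting requires a recent PyTorch release.

```python
import torch

# Assumed mapping from option names to torch dtypes (the FP8 variant is a guess).
DTYPES = {
    "FP32": torch.float32,
    "FP16": torch.float16,
    "BF16": torch.bfloat16,
    "FP8": torch.float8_e4m3fn,  # available in PyTorch >= 2.1
}

def cast_component(state: dict, precision: str) -> dict:
    dtype = DTYPES[precision]
    # Only floating-point tensors are cast; integer buffers keep their dtype.
    return {k: (v.to(dtype) if v.is_floating_point() else v) for k, v in state.items()}

# e.g. with precision_unet="FP16": unet_sd = cast_component(unet_sd, "FP16")
```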
Specifies the operation to be performed on the UNet component. Possible values are CONVERT, COPY, and DELETE, which determine whether to convert the precision, copy the component as is, or delete it.
Specifies the operation to be performed on the CLIP component. Possible values are CONVERT, COPY, and DELETE, which determine whether to convert the precision, copy the component as is, or delete it.
Specifies the operation to be performed on the VAE component. Possible values are CONVERT, COPY, and DELETE, which determine whether to convert the precision, copy the component as is, or delete it.
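The three operations can be thought of as a small dispatch over each component's state dict. The sketch below shows one way to express that; the function is a placeholder for illustration, not part of the node's API.

```python
from typing import Optional
import torch

def apply_operation(state: Optional[dict], operation: str, dtype: torch.dtype) -> Optional[dict]:
    """Illustrative CONVERT / COPY / DELETE dispatch (not the node's code)."""
    if state is None or operation == "DELETE":
        return None                        # component is dropped entirely
    if operation == "COPY":
        return dict(state)                 # tensors pass through untouched
    if operation == "CONVERT":
        return {k: (v.to(dtype) if v.is_floating_point() else v)
                for k, v in state.items()}
    raise ValueError(f"unknown operation: {operation}")
```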
A dictionary containing the UNet model tensors. This parameter is optional and can be None if the UNet component is not being pruned or modified.
A dictionary containing the CLIP model tensors. This parameter is optional and can be None if the CLIP component is not being pruned or modified.
A dictionary containing the VAE model tensors. This parameter is optional and can be None if the VAE component is not being pruned or modified.
This node does not produce any direct output parameters. The results of the pruning and conversion operations are saved to the specified directory.
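To make the saving behavior concrete, here is a hedged sketch of how the pruned components might be written to save_folder, either separately or merged into one file. The file names and the key-prefixing scheme are assumptions for the example, not the node's actual output layout.

```python
import os
from safetensors.torch import save_file

def save_pruned(parts: dict, save_folder: str, save_separately: bool, basename: str = "pruned"):
    """parts maps component names ('unet', 'clip', 'vae') to state dicts (or None)."""
    folder = save_folder or "."            # empty string falls back to the cwd
    os.makedirs(folder, exist_ok=True)
    if save_separately:
        for name, state in parts.items():
            if state:
                save_file(state, os.path.join(folder, f"{basename}_{name}.safetensors"))
    else:
        merged = {}
        for name, state in parts.items():
            # Prefixing keys by component name is an assumption; real checkpoints
            # use their own layouts (e.g. "model.diffusion_model." for the UNet).
            merged.update({f"{name}.{k}": v for k, v in (state or {}).items()})
        save_file(merged, os.path.join(folder, f"{basename}.safetensors"))
```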
Ensure the save_folder parameter is set to a valid directory where you have write permissions to avoid file-saving errors.
Use the fix_clip parameter if you are aware of specific issues with your CLIP model that need correction.
Adjust the precision_unet, precision_clip, and precision_vae parameters to optimize the model's memory usage and performance based on your hardware capabilities.
Enable the remove_junk parameter to further reduce the model size by eliminating unnecessary data.