Facilitates loading the SUPIR model and integrating it with the SDXL model for enhanced AI art generation, while optimizing performance.
The SUPIR_model_loader node is designed to facilitate the loading and integration of the SUPIR model within your AI art generation workflow. This node is essential for merging the SUPIR model with the SDXL model, ensuring that the combined capabilities of both models are leveraged for enhanced image generation. The node handles the loading of model weights, manages device allocation, and optimizes memory usage, making it a crucial component for efficient and high-quality AI art creation. By using this node, you can seamlessly incorporate advanced diffusion models into your projects, benefiting from improved performance and flexibility.
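As a rough illustration of the merging step described above, the sketch below overlays SUPIR weights onto an SDXL base state dict. The checkpoint file names and the plain dictionary merge are simplifying assumptions, not the node's actual loading code.

```python
from safetensors.torch import load_file

# Hypothetical checkpoint paths; the real node resolves files from
# ComfyUI's configured model folders.
sdxl_sd = load_file("checkpoints/sd_xl_base_1.0.safetensors")
supir_sd = load_file("checkpoints/SUPIR-v0Q.safetensors")

# Start from the SDXL weights and overlay every tensor the SUPIR
# checkpoint provides; keys found only in the base model are kept as-is.
merged_sd = dict(sdxl_sd)
merged_sd.update(supir_sd)
```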
This parameter specifies the base model to be used. It is a required input and should be set to the model you intend to enhance with the SUPIR model.
This parameter represents the local CLIP model, which is used for text-to-image tasks. It is a required input and should be set to the appropriate CLIP model for your project.
This parameter represents the global CLIP model, which complements the local CLIP model in text-to-image tasks. It is a required input and should be set to the appropriate CLIP model for your project.
This parameter specifies the Variational Autoencoder (VAE) model to be used. It is a required input and should be set to the VAE model that matches your base model.
This parameter allows you to select the SUPIR model checkpoint file from a list of available checkpoints. It is a required input and should be set to the specific SUPIR model checkpoint you wish to load.
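Taken together, the required inputs above suggest a node signature along the lines of the hypothetical Python skeleton below. The socket names (model, clip_l, clip_g, vae) and types are assumptions for illustration, not the node's confirmed definition.

```python
import folder_paths  # ComfyUI helper for listing checkpoint files

class SUPIRModelLoaderSketch:
    """Hypothetical skeleton mirroring the required inputs described above."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "model": ("MODEL",),   # SDXL base model to be enhanced
                "clip_l": ("CLIP",),   # local CLIP model (assumed name)
                "clip_g": ("CLIP",),   # global CLIP model (assumed name)
                "vae": ("VAE",),       # VAE matching the base model
                "supir_model": (folder_paths.get_filename_list("checkpoints"),),
            }
        }
```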
This boolean parameter determines whether the UNet weights should be cast to torch.float8_e4m3fn. Setting this to True can save a significant amount of VRAM but may slightly impact the quality of the generated images. The default value is False.
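For reference, casting weights to float8 in PyTorch (2.1 or newer) looks roughly like the sketch below. This is a minimal illustration of the idea behind fp8_unet, not the node's exact casting logic; in practice some layers are usually kept at higher precision.

```python
import torch

def cast_unet_weights_to_fp8(unet: torch.nn.Module) -> torch.nn.Module:
    """Store weight tensors as float8_e4m3fn to reduce VRAM usage (sketch).

    Requires PyTorch 2.1+. Weights stored this way are typically upcast
    back to fp16/bf16 at compute time, which is where the small quality
    impact mentioned above can come from.
    """
    for param in unet.parameters():
        if param.dtype in (torch.float32, torch.float16, torch.bfloat16):
            param.data = param.data.to(torch.float8_e4m3fn)
    return unet
```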
This parameter specifies the data type for the diffusion process. Options include fp16, bf16, fp32, and auto. The default value is auto, which automatically selects the most appropriate data type based on your hardware and model configuration.
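A plausible interpretation of the auto option is sketched below: prefer bf16 on GPUs that support it, fall back to fp16 on other CUDA devices, and use fp32 on CPU. The exact rule used by the node may differ.

```python
import torch

def resolve_diffusion_dtype(choice: str = "auto") -> torch.dtype:
    """Map the diffusion_dtype option to a torch dtype (illustrative)."""
    explicit = {"fp16": torch.float16, "bf16": torch.bfloat16, "fp32": torch.float32}
    if choice in explicit:
        return explicit[choice]
    # 'auto': pick based on the available hardware.
    if torch.cuda.is_available():
        return torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
    return torch.float32
```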
This optional boolean parameter, when set to True, uses the Accelerate library to load weights directly to the GPU, which can slightly speed up the model loading process. The default value is False.
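The Accelerate-backed path likely resembles the sketch below: the model is first instantiated with empty (meta) tensors, then each weight is placed directly on the GPU, avoiding an intermediate full copy in system RAM. The build_model callable is a hypothetical stand-in for the node's actual model constructor.

```python
from accelerate import init_empty_weights
from accelerate.utils import set_module_tensor_to_device

def load_weights_directly_to_gpu(build_model, state_dict, device="cuda"):
    """Sketch of loading checkpoint tensors straight onto the GPU."""
    with init_empty_weights():
        model = build_model()  # parameters are created on the 'meta' device
    for name, tensor in state_dict.items():
        # Materialize each tensor on the target device, one at a time.
        set_module_tensor_to_device(model, name, device=device, value=tensor)
    return model
```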
This output parameter represents the loaded and merged SUPIR model. It is the primary model that you will use for generating images, combining the strengths of both the SUPIR and SDXL models.
This output parameter represents the VAE model associated with the loaded SUPIR model. It is used in conjunction with the SUPIR model to enhance image generation quality.
Ensure that the supir_model parameter is set to the correct checkpoint file to avoid loading errors.
Use the fp8_unet parameter to save VRAM if you are working with limited resources, but be aware of the potential slight impact on image quality.
Keep diffusion_dtype set to auto unless you encounter issues with model loading, in which case you can experiment with other data types like fp16 or bf16.
Enable the high_vram option if you have a powerful GPU and want to speed up the model loading process.
If loading fails with errors reporting <missing_keys> or <unexpected_keys>, verify that the supir_model parameter is set to the correct checkpoint file and ensure that the file is not corrupted; a quick way to inspect such mismatches is shown in the sketch below.
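To diagnose such errors, a non-strict load can report which keys are missing or unexpected. The helper below is a generic PyTorch sketch, not code from the node itself, and the checkpoint path is only an example.

```python
import torch
from safetensors.torch import load_file

def report_checkpoint_mismatch(model: torch.nn.Module, checkpoint_path: str) -> None:
    """Load a checkpoint non-strictly and print key mismatches."""
    state_dict = load_file(checkpoint_path)
    result = model.load_state_dict(state_dict, strict=False)
    # Many missing or unexpected keys usually mean the wrong checkpoint was
    # selected for supir_model, or the file is truncated/corrupted.
    print("missing keys:", result.missing_keys)
    print("unexpected keys:", result.unexpected_keys)
```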