
ComfyUI Node: SUPIR Model Loader (v2) (Clip)

Class Name: SUPIR_model_loader_v2_clip
Category: SUPIR
Author: kijai (account age: 2181 days)
Extension: ComfyUI-SUPIR
Last Updated: 2024-05-21
GitHub Stars: 1.17K

How to Install ComfyUI-SUPIR

Install this extension via the ComfyUI Manager by searching for ComfyUI-SUPIR:
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter ComfyUI-SUPIR in the search bar and install it.
After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.


SUPIR Model Loader (v2) (Clip) Description

Loads and initializes the SUPIR model together with the two CLIP text encoders from an SDXL checkpoint, so the models are correctly configured and ready to use in SUPIR restoration and upscaling workflows.

SUPIR Model Loader (v2) (Clip):

The SUPIR_model_loader_v2_clip node loads and initializes the SUPIR model along with the two CLIP text encoders contained in an SDXL checkpoint. It is the starting point for using SUPIR in a workflow: it ensures the necessary models are correctly loaded and configured for seamless integration and good performance. The node takes care of reading the state dictionaries, remapping key prefixes so the weights match the target model classes, and casting parameters to the requested data types, so you can focus on your creative work rather than on these loading details.
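
The prefix remapping mentioned above can be pictured with plain PyTorch. This is a minimal sketch, not the node's actual code: the prefix string, file name, and helper function are illustrative assumptions.

```python
import torch
from safetensors.torch import load_file

# Illustrative assumption: SDXL checkpoints commonly store the first text encoder's
# weights under a prefix such as "conditioner.embedders.0.transformer.".
SDXL_CLIP_L_PREFIX = "conditioner.embedders.0.transformer."

def extract_submodel(state_dict: dict, prefix: str) -> dict:
    """Return only the keys under `prefix`, with the prefix stripped off."""
    return {k[len(prefix):]: v for k, v in state_dict.items() if k.startswith(prefix)}

sdxl_sd = load_file("sd_xl_base_1.0.safetensors")          # placeholder path
clip_l_sd = extract_submodel(sdxl_sd, SDXL_CLIP_L_PREFIX)   # weights for one CLIP text encoder
# clip_model.load_state_dict(clip_l_sd, strict=False) would then initialize that encoder.
```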

SUPIR Model Loader (v2) (Clip) Input Parameters:

SUPIR_MODEL_PATH

This parameter specifies the file path to the SUPIR model's state dictionary. It is crucial for loading the SUPIR model correctly. The path should point to a valid state dictionary file. If the path is incorrect or the file is missing, the model will fail to load, resulting in an error.
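
As a rough sketch of what loading a state dictionary from this path involves (assuming a local .safetensors or .ckpt file; the path below is a placeholder):

```python
import os
import torch
from safetensors.torch import load_file

supir_model_path = "models/checkpoints/SUPIR-v0Q.safetensors"  # placeholder path

if not os.path.isfile(supir_model_path):
    raise FileNotFoundError(f"SUPIR checkpoint not found: {supir_model_path}")

if supir_model_path.endswith(".safetensors"):
    supir_sd = load_file(supir_model_path)                   # dict of tensors
else:
    supir_sd = torch.load(supir_model_path, map_location="cpu")
    supir_sd = supir_sd.get("state_dict", supir_sd)          # some .ckpt files nest the weights

print(f"Loaded {len(supir_sd)} tensors")
```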

SDXL_MODEL_PATH

This parameter indicates the file path to the SDXL model's state dictionary. It is used to load the initial state of the CLIP models. Similar to the SUPIR model path, this should be a valid file path to ensure successful loading.

clip_config_path

This parameter provides the path to the configuration file for the CLIP text model. The configuration file contains necessary settings and parameters required to initialize the CLIP model correctly.
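
To illustrate what such a config file drives, here is a hedged sketch using Hugging Face transformers; the path is a placeholder and the node may construct the model differently:

```python
from transformers import CLIPTextConfig, CLIPTextModel

clip_config_path = "configs/clip_vit_large_14/config.json"  # placeholder path

# The JSON file defines hidden size, number of layers, vocabulary size, etc.
config = CLIPTextConfig.from_json_file(clip_config_path)
text_model = CLIPTextModel(config)  # randomly initialized; weights come from the SDXL checkpoint
print(config.hidden_size, config.num_hidden_layers)
```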

tokenizer_path

This parameter specifies the path to the tokenizer file for the CLIP model. The tokenizer is essential for processing text inputs and converting them into a format that the CLIP model can understand.
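
A brief sketch of what the tokenizer does, again using transformers with a placeholder path:

```python
from transformers import CLIPTokenizer

tokenizer_path = "configs/clip_vit_large_14"  # placeholder: folder containing the tokenizer files

tokenizer = CLIPTokenizer.from_pretrained(tokenizer_path)
tokens = tokenizer(
    "a photo of a lighthouse at sunset",
    padding="max_length",
    max_length=tokenizer.model_max_length,  # 77 for CLIP
    truncation=True,
    return_tensors="pt",
)
print(tokens["input_ids"].shape)  # torch.Size([1, 77])
```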

device

This parameter determines the device on which the models will be loaded and executed. Common options include "cpu" and "cuda" (for GPU). Using the appropriate device can significantly impact the performance and speed of model inference.
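
Device selection typically boils down to a one-liner like the following; the commented model move is illustrative:

```python
import torch

# Prefer the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# model = model.to(device)  # move the loaded models once they exist
print(f"Running on: {device}")
```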

dtype

This parameter sets the data type for the model's parameters. Common data types include torch.float32 and torch.float16. Choosing the right data type can affect the model's performance and memory usage.
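
To make the memory trade-off concrete, here is a back-of-the-envelope sketch; the parameter count is an illustrative figure, roughly the size of an SDXL UNet:

```python
import torch

n_params = 2_600_000_000  # illustrative: roughly the parameter count of an SDXL UNet
for dtype, bytes_per_param in [(torch.float32, 4), (torch.float16, 2), (torch.bfloat16, 2)]:
    print(f"{dtype}: ~{n_params * bytes_per_param / 1024**3:.1f} GiB of weights")

# Casting an existing module to a lower-precision dtype is a one-liner:
# model = model.to(dtype=torch.float16)
```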

fp8_unet

This boolean parameter indicates whether the UNet model should be converted to FP8 (float8) precision. Using FP8 can reduce memory usage and potentially speed up computations, but it may also affect model accuracy.

fp8_vae

This boolean parameter specifies whether the VAE (Variational Autoencoder) model should be converted to FP8 precision. Similar to fp8_unet, this can optimize memory and computation but may impact accuracy.
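
A hedged sketch of what FP8 weight storage can look like in recent PyTorch (2.1 or newer, where torch.float8_e4m3fn is available). Most operators do not compute in FP8, so weights are usually stored in FP8 and upcast at compute time; the module below is a stand-in, not the actual UNet or VAE:

```python
import torch
import torch.nn as nn

module = nn.Linear(4096, 4096)  # stand-in for a UNet or VAE submodule

# Store weights in float8 to roughly halve memory versus fp16 (requires PyTorch >= 2.1).
for p in module.parameters():
    p.data = p.data.to(torch.float8_e4m3fn)

# Upcast to a compute dtype before use (fp16/bf16 on GPU; fp32 shown here for portability),
# since most ops do not accept float8 inputs directly.
x = torch.randn(1, 4096)
w = module.weight.to(torch.float32)
b = module.bias.to(torch.float32)
y = torch.nn.functional.linear(x, w, b)
print(y.dtype)  # torch.float32
```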

use_tiled_vae

This boolean parameter determines whether to use a tiled VAE. Tiling can help manage large images by processing them in smaller, more manageable chunks, which can be beneficial for memory and performance.

encoder_tile_size_pixels

This parameter sets the tile size for the VAE encoder in pixels. It is used when use_tiled_vae is enabled and helps in dividing the input image into smaller tiles for processing.

decoder_tile_size_latent

This parameter sets the tile size for the VAE decoder in latent space. It works in conjunction with use_tiled_vae to manage the output of the VAE by processing smaller latent tiles.
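
To make the tiling idea behind use_tiled_vae, encoder_tile_size_pixels, and decoder_tile_size_latent concrete, here is a simplified sketch of splitting an image tensor into square tiles. The real implementation blends overlapping tiles to avoid seams, which is omitted here:

```python
import torch

def iter_tiles(image: torch.Tensor, tile: int):
    """Yield (y, x, tile_tensor) for a BCHW image; edge tiles may be smaller than `tile`."""
    _, _, h, w = image.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            yield y, x, image[:, :, y:y + tile, x:x + tile]

image = torch.randn(1, 3, 2048, 2048)            # a large input image
for y, x, patch in iter_tiles(image, tile=512):  # e.g. encoder_tile_size_pixels = 512
    pass  # each patch would be passed through the VAE encoder here

# The decoder works the same way, but its tile size is given in latent units
# (e.g. decoder_tile_size_latent = 64 latent cells ~ 512 pixels for an 8x VAE).
```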

SUPIR Model Loader (v2) (Clip) Output Parameters:

model

This output parameter represents the fully loaded and initialized SUPIR model along with the two CLIP models. It is ready for use in various AI art applications, providing you with a powerful tool for generating and manipulating images.

SUPIR Model Loader (v2) (Clip) Usage Tips:

  • Ensure that the file paths provided for the SUPIR and SDXL models are correct and accessible to avoid loading errors.
  • Use a GPU (cuda) for the device parameter to significantly speed up model loading and inference, especially for large models.
  • Experiment with different data types (dtype) to find a balance between performance and accuracy that suits your needs.
  • If you are working with large images, enable use_tiled_vae and adjust the tile sizes (encoder_tile_size_pixels and decoder_tile_size_latent) to optimize memory usage and performance.

SUPIR Model Loader (v2) (Clip) Common Errors and Solutions:

Failed to load second clip model from SDXL checkpoint

  • Explanation: This error occurs when the node is unable to load the second CLIP model from the specified SDXL checkpoint.
  • Solution: Verify that the SDXL model path is correct and the file is accessible. Ensure that the state dictionary is properly formatted and compatible with the expected model structure.

Failed to load SUPIR model

  • Explanation: This error indicates that the SUPIR model's state dictionary could not be loaded.
  • Solution: Check the SUPIR model path for accuracy and ensure the file exists. Confirm that the state dictionary is not corrupted and is compatible with the SUPIR model.

Failed to load first clip model from SDXL checkpoint

  • Explanation: This error signifies an issue with loading the first CLIP model from the SDXL checkpoint.
  • Solution: Ensure the SDXL model path is correct and the file is present. Verify that the state dictionary is correctly formatted and matches the expected model structure.
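
When either CLIP model fails to load, it often helps to look at what the checkpoint actually contains. A small, generic inspection sketch (the path is a placeholder and the prefixes printed depend on the checkpoint):

```python
from collections import Counter
from safetensors.torch import load_file

sd = load_file("sd_xl_base_1.0.safetensors")  # placeholder path

# Count keys by their first two dotted components to see which submodels are present,
# e.g. "conditioner.embedders", "model.diffusion_model", "first_stage_model".
prefixes = Counter(".".join(k.split(".")[:2]) for k in sd)
for prefix, count in prefixes.most_common():
    print(f"{prefix}: {count} tensors")
```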

Model parameters requires_grad is set to True

  • Explanation: This warning indicates that some model parameters are set to require gradients, which may not be necessary for inference.
  • Solution: Ensure that all model parameters have requires_grad set to False to optimize performance during inference. This can be done by iterating over the model parameters and setting param.requires_grad = False.
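
The fix described above is a short loop; the model below is a stand-in for whatever the node returns:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))  # stand-in for the loaded model

# Freeze every parameter so no gradients are tracked during inference.
for param in model.parameters():
    param.requires_grad = False

# Wrapping inference in inference_mode (or no_grad) skips autograd bookkeeping entirely.
with torch.inference_mode():
    out = model(torch.randn(1, 8))
print(all(not p.requires_grad for p in model.parameters()))  # True
```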

SUPIR Model Loader (v2) (Clip) Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-SUPIR