ComfyUI > Nodes > gguf > GGUF Loader (Advanced)

ComfyUI Node: GGUF Loader (Advanced)

Class Name

LoaderGGUFAdvanced

Category
gguf
Author
calcuis (Account age: 905 days)
Extension
gguf
Last Updated
2025-03-08
Github Stars
0.02K

How to Install gguf

Install this extension via the ComfyUI Manager by searching for gguf
  • 1. Click the Manager button in the main menu
  • 2. Select Custom Nodes Manager button
  • 3. Enter gguf in the search bar
  • 4. Select gguf from the search results and click Install
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


GGUF Loader (Advanced) Description

Advanced loading capabilities for GGUF models, with enhanced flexibility and control over the loading process.

GGUF Loader (Advanced):

The LoaderGGUFAdvanced node provides advanced loading capabilities for GGUF models, with finer control over the loading process. It extends the basic LoaderGGUF node with additional parameters that influence the model's behavior and performance, which is particularly useful when you need to tune loading for specific requirements, such as optimizing for different data types or hardware configurations. By exposing dequantization and patching settings, LoaderGGUFAdvanced lets you achieve more efficient, tailored model deployments and get the most out of your GGUF models.

GGUF Loader (Advanced) Input Parameters:

gguf_name

The gguf_name parameter specifies the name of the GGUF model to be loaded. It is a required parameter and allows you to select from a list of available model names. This parameter is crucial as it determines which model will be loaded and used for further processing. There are no minimum or maximum values, but the selection is limited to the models available in the specified directory.

dequant_dtype

The dequant_dtype parameter allows you to specify the data type for dequantization. The available options are default, target, float32, float16, and bfloat16, with default being the default value. This parameter impacts the precision and performance of the model by determining how the model's weights are dequantized during loading. Choosing a lower precision data type like float16 can improve performance on compatible hardware but may affect accuracy.
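As a rough illustration of what the dequantization dtype controls, the sketch below dequantizes a Q8_0-style GGUF block (int8 quants plus a per-block float scale) into a chosen floating-point dtype. The function name and block layout here are illustrative, not the node's internal API.

```python
import numpy as np

# Illustrative sketch: a Q8_0-style block stores int8 quants and one
# float scale; dequantizing multiplies them out into the target dtype.
def dequantize_block(quants: np.ndarray, scale: float, dtype=np.float16) -> np.ndarray:
    return (quants.astype(np.float32) * scale).astype(dtype)

quants = np.array([-64, 0, 64, 127], dtype=np.int8)
weights = dequantize_block(quants, scale=0.01, dtype=np.float16)
# float16 halves memory versus float32, at some cost in precision
```

Choosing float16 or bfloat16 here trades a small amount of precision for lower memory use and faster math on hardware that supports those types natively.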

patch_dtype

The patch_dtype parameter specifies the data type for patching operations. Similar to dequant_dtype, the options are default, target, float32, float16, and bfloat16, with default as the default value. This parameter affects how patches are applied to the model, influencing both performance and precision. Selecting a suitable data type can optimize the model's execution on specific hardware.

patch_on_device

The patch_on_device parameter is a boolean that determines whether patching operations are performed on the device (e.g., the GPU). The default value is False, meaning patching runs on the CPU. Enabling it (True) can improve performance by leveraging the device's computational power, especially for large models or complex patching operations.
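As a hedged sketch, the four inputs above might appear in a ComfyUI API-format workflow entry like this. The field layout follows ComfyUI's standard prompt JSON; the node id and model filename are illustrative.

```python
import json

# Illustrative API-format workflow entry for this node; the node id "3"
# and the .gguf filename are placeholders, not values from the extension.
node = {
    "3": {
        "class_type": "LoaderGGUFAdvanced",
        "inputs": {
            "gguf_name": "example-model-Q4_K_M.gguf",  # placeholder filename
            "dequant_dtype": "default",
            "patch_dtype": "default",
            "patch_on_device": False,
        },
    }
}
print(json.dumps(node, indent=2))
```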

GGUF Loader (Advanced) Output Parameters:

MODEL

The output of the LoaderGGUFAdvanced node is a MODEL object, which represents the loaded GGUF model. This output is crucial as it serves as the foundation for subsequent operations and processing within the AI pipeline. The MODEL object encapsulates the model's architecture, weights, and any applied patches, making it ready for inference or further customization.

GGUF Loader (Advanced) Usage Tips:

  • To optimize performance, consider setting dequant_dtype and patch_dtype to float16 or bfloat16 if your hardware supports these data types, as they can significantly reduce memory usage and increase speed.
  • Enable patch_on_device if you are working with large models and have a powerful GPU, as this can offload computationally intensive tasks from the CPU to the GPU, improving overall efficiency.
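The tips above can be sketched as a small helper that picks dtype settings based on hardware capability. The capability flags here are illustrative; in practice you would query your GPU (for example, via your PyTorch build's bfloat16 support check).

```python
# Illustrative helper: choose the recommended dtypes from hardware support.
def pick_dtypes(gpu_supports_bf16: bool, gpu_supports_fp16: bool) -> dict:
    if gpu_supports_bf16:
        dtype = "bfloat16"   # wider exponent range than float16
    elif gpu_supports_fp16:
        dtype = "float16"    # half the memory of float32
    else:
        dtype = "default"    # fall back to the node's default behavior
    return {"dequant_dtype": dtype, "patch_dtype": dtype}
```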

GGUF Loader (Advanced) Common Errors and Solutions:

ERROR: Could not detect model type of: {model_path}

  • Explanation: This error occurs when the node is unable to determine the type of the model specified by the gguf_name parameter. It may be due to an unsupported model format or a corrupted model file.
  • Solution: Ensure that the model file is correctly formatted and not corrupted. Verify that the model is supported by the node and try reloading the model. If the issue persists, consider checking for updates to the node that may include support for additional model types.

Unknown CLIP model type {name}

  • Explanation: This error indicates that the specified CLIP model type is not recognized by the system. It may be due to an outdated node version or an incorrect model type name.
  • Solution: Verify that the model type name is correct and matches one of the supported types. If the error persists, consider updating the node to the latest version, which may include support for additional CLIP model types.

GGUF Loader (Advanced) Related Nodes

Go back to the extension to check out more related nodes.
RunComfy
Copyright 2025 RunComfy. All Rights Reserved.
