ComfyUI Node: LoRA Stacker

Class Name: LoRA Stacker
Category: Efficiency Nodes/Stackers
Author: jags111 (Account age: 3922 days)
Extension: Efficiency Nodes for ComfyUI Version 2.0+
Last Updated: 2024-08-07
GitHub Stars: 0.83K

How to Install Efficiency Nodes for ComfyUI Version 2.0+

Install this extension via the ComfyUI Manager by searching for Efficiency Nodes for ComfyUI Version 2.0+
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter Efficiency Nodes for ComfyUI Version 2.0+ in the search bar.
  4. After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.


LoRA Stacker Description

Efficiently combine and manage multiple LoRA models for AI art generation tasks.

LoRA Stacker:

The LoRA Stacker node is designed to efficiently manage and combine multiple LoRA (Low-Rank Adaptation) models, lightweight adapters used to fine-tune large generative models. The node lets you stack multiple LoRA models together, either in a simple mode where each model is assigned a single weight or in a complex mode where each model can have separate weights for different components. The primary benefit of the LoRA Stacker is that it streamlines combining multiple LoRA models, making them easier to manage and apply across AI art generation tasks. By stacking these models, you can leverage the strengths of each individual model to create more nuanced and sophisticated outputs.

LoRA Stacker Input Parameters:

input_mode

The input_mode parameter determines how weights are assigned to the stacked LoRA models. It can be set to either "simple" or "complex". In "simple" mode, each LoRA model is assigned a single weight; in "complex" mode, each model can have separate weights for its model and clip components. This choice changes which weight parameters the node exposes and how the final stack is built.

lora_count

The lora_count parameter specifies the number of LoRA models to be stacked. This parameter is crucial as it defines how many models will be processed and combined by the node. The minimum value is 1, and there is no explicit maximum value, but it should be set according to the number of available LoRA models you wish to stack.

lora_stack

The lora_stack parameter is an optional input that allows you to provide an existing stack of LoRA models to be extended with additional models. If provided, the node will append the new models to this existing stack. This parameter is useful for incrementally building a stack of LoRA models over multiple operations.
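Extending an existing stack amounts to appending the new tuples to it. A minimal sketch with a hypothetical helper (not the node's real code):

```python
def extend_stack(existing_stack, new_entries):
    # Append new (name, model_wt, clip_wt) tuples; the original stack is
    # left untouched so other branches of a workflow can still reuse it.
    return list(existing_stack or []) + list(new_entries)

base = [("detail.safetensors", 0.6, 0.6)]
combined = extend_stack(base, [("style.safetensors", 0.8, 0.4)])
print(combined)
# [('detail.safetensors', 0.6, 0.6), ('style.safetensors', 0.8, 0.4)]
```

Chaining several LoRA Stacker nodes this way lets you build up a large stack incrementally rather than configuring every slot in one node.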

lora_name_X

The lora_name_X parameters (where X is a number from 1 to lora_count) specify the names of the LoRA models to be included in the stack. These parameters are used to identify and load the respective LoRA models. Each lora_name_X should be a valid model name or path.

lora_wt_X

The lora_wt_X parameters (where X is a number from 1 to lora_count) are used in "simple" mode to assign weights to each LoRA model. These weights determine the influence of each model in the final stack. The values should be floating-point numbers, with typical values ranging from 0.0 to 1.0.

model_str_X

The model_str_X parameters (where X is a number from 1 to lora_count) are used in "complex" mode to assign weights to the model component of each LoRA model. These weights determine the influence of the model component in the final stack. The values should be floating-point numbers, with typical values ranging from 0.0 to 1.0.

clip_str_X

The clip_str_X parameters (where X is a number from 1 to lora_count) are used in "complex" mode to assign weights to the clip component of each LoRA model. These weights determine the influence of the clip component in the final stack. The values should be floating-point numbers, with typical values ranging from 0.0 to 1.0.

LoRA Stacker Output Parameters:

loras

The loras output parameter is a list of tuples representing the stacked LoRA models. Each tuple contains the name of the LoRA model and its respective weights. In "simple" mode, each tuple has the format (lora_name, lora_weight, lora_weight), while in "complex" mode, the format is (lora_name, model_str, clip_str). This output is crucial for further processing and application of the stacked LoRA models in AI art generation tasks.
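A downstream consumer of this output (such as a loader node that applies the stack) would iterate over these tuples. A minimal sketch, with the tuple layout assumed from the description above:

```python
def summarize_stack(loras):
    # Render each (name, model_strength, clip_strength) tuple for logging.
    # Hypothetical helper; the real consumers are other ComfyUI nodes.
    return [f"{name}: model={m:g}, clip={c:g}" for name, m, c in loras]

print(summarize_stack([("style.safetensors", 1.0, 0.5)]))
# ['style.safetensors: model=1, clip=0.5']
```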

LoRA Stacker Usage Tips:

  • Use the "simple" mode for straightforward stacking of LoRA models with uniform weights, which is easier to manage and understand.
  • Opt for the "complex" mode when you need more granular control over the weights of different components of each LoRA model, allowing for more sophisticated combinations.
  • Start with a smaller lora_count to understand the impact of each model in the stack before scaling up to a larger number of models.
  • Utilize the lora_stack parameter to incrementally build and refine your stack of LoRA models over multiple operations.

LoRA Stacker Common Errors and Solutions:

"LoRA model name is None"

  • Explanation: This error occurs when one of the lora_name_X parameters is set to "None".
  • Solution: Ensure that all lora_name_X parameters are set to valid model names or paths.

"Invalid weight value"

  • Explanation: This error occurs when one of the weight parameters (lora_wt_X, model_str_X, clip_str_X) is set to an invalid value.
  • Solution: Check that all weight parameters are floating-point numbers within the valid range (typically 0.0 to 1.0).

"Mismatch in lora_count and provided parameters"

  • Explanation: This error occurs when the number of provided lora_name_X parameters does not match the lora_count.
  • Solution: Ensure that the number of lora_name_X parameters matches the value specified in lora_count.

"lora_stack is not a list"

  • Explanation: This error occurs when the lora_stack parameter is not provided as a list.
  • Solution: Ensure that the lora_stack parameter, if provided, is a list of tuples representing existing LoRA models.
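The checks behind the errors above can be approximated with a pre-flight validator. This is a hypothetical sketch mirroring the four error conditions, not the extension's actual validation code:

```python
def validate_inputs(lora_count, lora_names, weights, lora_stack=None):
    # Hypothetical validator mirroring the errors documented above.
    if len(lora_names) != lora_count:
        raise ValueError("Mismatch in lora_count and provided parameters")
    for i, name in enumerate(lora_names, start=1):
        if name is None or name == "None":
            raise ValueError(f"LoRA model name is None (lora_name_{i})")
    for w in weights:
        if not isinstance(w, (int, float)):
            raise ValueError(f"Invalid weight value: {w!r}")
    if lora_stack is not None and not isinstance(lora_stack, list):
        raise TypeError("lora_stack is not a list")
    return True

validate_inputs(1, ["style.safetensors"], [0.8])  # passes silently
```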
