Manage and apply multiple LoRA models for AI art projects, enabling stacking and configuration for artistic effects.
The Fooocus LoraStack node is designed to help you manage and apply multiple LoRA (Low-Rank Adaptation) models in your AI art projects. This node allows you to stack and configure several LoRA models, each with its own strength, to fine-tune the output of your AI-generated images. By enabling or disabling the stack and specifying the number of LoRA models to use, you can achieve a wide range of artistic effects and styles. The primary goal of this node is to provide flexibility and control over the application of LoRA models, making it easier for you to experiment with different combinations and strengths to achieve your desired results.
The `toggle` parameter is a boolean option that enables or disables the stacking of LoRA models. When set to `True`, the node processes and stacks the specified LoRA models; when set to `False`, it returns an empty stack. This parameter lets you quickly switch between using and not using the LoRA stack without changing other settings. Options: `[True, False]`.
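The toggle behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not the node's actual implementation; the helper name `build_lora_stack` and the list-of-tuples representation are assumptions for the example:

```python
def build_lora_stack(toggle, loras):
    """Return the stacked LoRA entries, or an empty stack when disabled.

    `loras` is a list of (name, strength) tuples; slots named "None"
    are skipped. Illustrative only, not the node's internal API.
    """
    if not toggle:
        return []  # disabled: return an empty stack
    return [(name, strength) for name, strength in loras if name != "None"]

print(build_lora_stack(False, [("styleA.safetensors", 0.8)]))  # []
print(build_lora_stack(True, [("styleA.safetensors", 0.8), ("None", 1.0)]))
```

With `toggle=False` the downstream nodes receive an empty stack, so no LoRA is applied regardless of the other settings.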
The `num_loras` parameter specifies the number of LoRA models you want to stack. This integer determines how many LoRA models are processed and included in the stack; the higher the number, the more LoRA models you can combine, allowing for more complex and nuanced effects. Minimum value: `0`, Maximum value: `10`, Default value: `1`.
The `optional_lora_stack` parameter allows you to provide an existing stack of LoRA models to be extended with the additional models specified in the node. This is useful if you have a pre-configured stack that you want to build upon. The provided stack should be a list of LoRA model names and their corresponding strengths. Type: `LORA_STACK`.
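Extending an existing stack can be sketched as follows; the function name `extend_lora_stack` is illustrative, and the stack is assumed to be a list of (name, strength) tuples as described above:

```python
def extend_lora_stack(optional_lora_stack, new_entries):
    """Start from an existing stack (if given) and append new entries.

    A copy of the incoming stack is made so the caller's list is not
    mutated. Names here are assumptions for illustration, not the
    node's internal API.
    """
    stack = list(optional_lora_stack) if optional_lora_stack else []
    stack.extend(new_entries)
    return stack

base = [("lineart.safetensors", 0.6)]
combined = extend_lora_stack(base, [("detail.safetensors", 1.0)])
# `base` is unchanged; `combined` holds both entries
```

Copying rather than mutating matters in a node graph, where the same upstream stack may feed several downstream nodes.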
These parameters (`lora_1_name` to `lora_10_name`) specify the names of the LoRA models you want to include in the stack. Each parameter corresponds to a different slot; you can select from the list of available models or choose `None` to leave a slot unused. Default value: `None`.
These parameters (`lora_1_strength` to `lora_10_strength`) set the strength of each corresponding LoRA model in the stack. The strength is a float that determines the LoRA model's influence on the final output: higher values increase its impact, while lower values reduce it. Minimum value: `-10.0`, Maximum value: `10.0`, Default value: `1.0`, Step: `0.01`.
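Putting the slot parameters together, the node's selection logic can be sketched like this. The helper name `collect_slots` and the dictionary of slots are assumptions for illustration; the skip-on-`None` and the documented [-10.0, 10.0] strength range come from the descriptions above:

```python
def collect_slots(num_loras, slots):
    """Gather up to `num_loras` (name, strength) slots into a stack.

    Slots whose name is "None" are skipped, and strengths are clamped
    to the documented [-10.0, 10.0] range. `slots` maps a 1-based slot
    index to a (name, strength) pair.
    """
    stack = []
    for i in range(1, num_loras + 1):
        name, strength = slots.get(i, ("None", 1.0))
        if name == "None":
            continue  # unused slot, skip it
        strength = max(-10.0, min(10.0, strength))  # clamp to valid range
        stack.append((name, strength))
    return stack

stack = collect_slots(2, {1: ("styleA.safetensors", 12.0), 2: ("None", 1.0)})
# slot 1 is clamped to 10.0; slot 2 is skipped
```

Note that slots beyond `num_loras` are ignored entirely, so lowering `num_loras` is a quick way to disable the tail of the stack without clearing each slot.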
The `lora_stack` output parameter returns the final stack of LoRA models, processed and configured according to the input parameters. This stack is a list of LoRA model names and their corresponding strengths, which subsequent nodes can use to apply the desired effects to your AI-generated images. The output provides a flexible and customizable way to manage and apply multiple LoRA models in your projects.
- Use the `toggle` parameter to quickly compare the effects of using the LoRA stack versus not using it. This can help you decide whether the stacked models enhance your project.
- Resolution values must be given in the format `width x height`, where both width and height are integers. For example, `1920 x 1080`.
- A slot whose name is `None` is skipped. Ensure that the model names are correctly listed in the input parameters.
- Strength values must fall within `-10.0` to `10.0`. Adjust any values that fall outside this range to be within the acceptable limits.

© Copyright 2024 RunComfy. All Rights Reserved.