Facilitates combining multiple LoRA models for enhanced AI art generation with customizable weights and complex modifications.
The CR LoRA Stack node is designed to facilitate the combination and application of multiple LoRA (Low-Rank Adaptation) models to enhance your AI art generation process. This node allows you to stack up to three different LoRA models, each with customizable weights for both the model and the CLIP (Contrastive Language-Image Pre-Training) components. By leveraging this node, you can create complex and nuanced modifications to your base model, enabling more sophisticated and varied artistic outputs. The primary goal of the CR LoRA Stack is to provide a flexible and powerful tool for AI artists to experiment with different LoRA combinations, thereby expanding the creative possibilities and fine-tuning the generated art to meet specific artistic visions.
lora_name_1: This parameter specifies the name of the first LoRA model to be included in the stack. If set to "None," the first slot is skipped. Its value determines the first layer of adaptation applied to your base model; it accepts any installed LoRA model name rather than a numeric range.
model_weight_1: This parameter sets the weight for the first LoRA model's impact on the base model. It controls how strongly the first LoRA model influences the final output, ranging from 0.0 (no influence) to 1.0 (full influence), with a default value typically around 0.5 for balanced adaptation.
clip_weight_1: This parameter sets the weight for the first LoRA model's impact on the CLIP component. Like model_weight_1, it controls the influence on the CLIP model, ranging from 0.0 to 1.0, with a default value around 0.5.
switch_1: This toggle determines whether the first LoRA model is active ("On") or inactive ("Off"). When set to "Off," the first LoRA model is ignored regardless of the other parameters.
lora_name_2: This parameter specifies the name of the second LoRA model to be included in the stack. It functions similarly to lora_name_1 and adds another layer of adaptation; any installed LoRA model name is valid.
model_weight_2: This parameter sets the weight for the second LoRA model's impact on the base model. It ranges from 0.0 to 1.0, with a default value around 0.5, controlling the influence of the second LoRA model.
clip_weight_2: This parameter sets the weight for the second LoRA model's impact on the CLIP component. It ranges from 0.0 to 1.0, with a default value around 0.5, controlling the influence on the CLIP model.
switch_2: This toggle determines whether the second LoRA model is active ("On") or inactive ("Off"). When set to "Off," the second LoRA model is ignored.
lora_name_3: This parameter specifies the name of the third LoRA model to be included in the stack. It functions similarly to lora_name_1 and lora_name_2, adding a third layer of adaptation; any installed LoRA model name is valid.
model_weight_3: This parameter sets the weight for the third LoRA model's impact on the base model. It ranges from 0.0 to 1.0, with a default value around 0.5, controlling the influence of the third LoRA model.
clip_weight_3: This parameter sets the weight for the third LoRA model's impact on the CLIP component. It ranges from 0.0 to 1.0, with a default value around 0.5, controlling the influence on the CLIP model.
switch_3: This toggle determines whether the third LoRA model is active ("On") or inactive ("Off"). When set to "Off," the third LoRA model is ignored.
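Taken together, each slot contributes to the stack only when its switch is "On" and a model name other than "None" is selected. The per-slot logic described above can be sketched in a few lines of Python (the helper name add_slot and the example filenames are hypothetical, not the node's actual source):

```python
def add_slot(stack, lora_name, model_weight, clip_weight, switch):
    """Append one LoRA slot to the stack if it is enabled.

    A slot contributes nothing when its switch is "Off" or its
    name is "None", matching the behaviour described above.
    """
    if switch == "On" and lora_name != "None":
        stack.append((lora_name, model_weight, clip_weight))
    return stack

stack = []
add_slot(stack, "style_paint.safetensors", 0.7, 0.7, "On")
add_slot(stack, "None", 0.5, 0.5, "On")                    # skipped: no model selected
add_slot(stack, "detailer.safetensors", 0.4, 0.4, "Off")   # skipped: switch is Off
# stack now holds only the one enabled slot
```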
lora_stack (optional): This parameter allows you to pass in an existing stack of LoRA models. If provided, the node extends that stack with the LoRA models specified here and their respective weights, which is useful for building upon previously defined stacks.
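Chaining works by concatenation: the node starts from a copy of the incoming stack, then appends its own enabled slots. A hedged sketch of that behaviour, assuming the tuple layout described below (the function name cr_lora_stack and the file names are hypothetical):

```python
def cr_lora_stack(slots, lora_stack=None):
    """Sketch of the stacking behaviour: begin with an optional
    upstream stack, then append every enabled slot as a
    (lora_name, model_weight, clip_weight) tuple."""
    out = list(lora_stack) if lora_stack else []
    for name, model_w, clip_w, switch in slots:
        if switch == "On" and name != "None":
            out.append((name, model_w, clip_w))
    return out

base = [("lineart.safetensors", 1.0, 1.0)]   # output of an upstream stack node
slots = [
    ("style_paint.safetensors", 0.7, 0.7, "On"),
    ("None", 0.5, 0.5, "On"),                 # skipped: no model selected
    ("detailer.safetensors", 0.4, 0.4, "Off"),  # skipped: switch is Off
]
print(cr_lora_stack(slots, base))
# [('lineart.safetensors', 1.0, 1.0), ('style_paint.safetensors', 0.7, 0.7)]
```

The upstream tuples come first, so earlier nodes in the chain are applied before the ones defined here.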
LORA_STACK: This output returns the final stack of LoRA models, including all specified models and their respective weights. The stack is a list of tuples, where each tuple contains the LoRA model name, model weight, and CLIP weight. This output is used for further processing or for applying the stacked LoRA models to your base model.
show_help: This output provides a URL to the documentation or help page for the CR LoRA Stack node. It is useful for users who need additional information or guidance on using the node effectively.
Use the lora_stack parameter to build upon existing stacks, allowing for more complex and layered adaptations. Use the toggle switches (switch_1, switch_2, switch_3) to quickly enable or disable specific LoRA models without removing them from the stack.
© Copyright 2024 RunComfy. All Rights Reserved.