Node for training LoRA models in ComfyUI with advanced algorithms for customized AI model creation.
Lora Training in Comfy (Advanced) is a powerful node for training Low-Rank Adaptation (LoRA) models within the ComfyUI environment. It lets you fine-tune pre-trained models on your own datasets, producing customized AI models better suited to specific artistic styles or tasks. By supporting algorithm variants such as LoRA, DyLoRA, LoCon, LoHa, and LoKr, the node provides flexibility and control over the training process. Its primary goal is to simplify model training while offering a high degree of customization, making it accessible to AI artists who may not have a deep technical background. With adjustable parameters for network dimensions, learning rates, optimizers, and more, you can tailor training to your unique requirements.
This parameter specifies whether to use a version 2 base model. Options are "No" and "Yes"; selecting "Yes" sets the is_v2_model flag to 1, indicating the use of version 2.
Defines the network module to be used for training. Options include "networks.lora" and "lycoris.kohya". This parameter determines the underlying architecture and functionalities available during training.
Specifies the dimension of the network. This is an integer value with a default of 32 and a minimum of 0. Adjusting this parameter affects the complexity and capacity of the model.
Sets the alpha value for the network, which scales the LoRA update (by alpha divided by the network dimension) and therefore acts much like a learning-rate multiplier. This is an integer value with a default of 32 and a minimum of 0.
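The interplay between the network dimension (rank) and alpha can be sketched as follows. This is an illustrative NumPy model of the standard LoRA formulation, not the node's internal code; the weight shape and values are hypothetical.

```python
import numpy as np

# Standard LoRA: the frozen weight W is augmented by a low-rank delta B @ A,
# scaled by alpha / dim. With the node's defaults (32 / 32) the scale is 1.0,
# so alpha effectively damps or amplifies the adapter's contribution.
dim, alpha = 32, 32          # the node's defaults
d_out, d_in = 768, 768       # example weight shape, not tied to any real model

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))       # frozen base weight
A = rng.standard_normal((dim, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, dim))                   # trainable up-projection (init 0)

scale = alpha / dim                          # 32 / 32 = 1.0 with the defaults
W_adapted = W + scale * (B @ A)              # effective weight at inference
```

Because B starts at zero, the adapted weight initially equals the base weight; training only gradually moves it away, which is part of what makes LoRA fine-tuning stable.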
Defines the resolution at which the training images will be processed. This is an integer value with a default of 512 and a step of 8. Higher resolutions can lead to better quality but require more computational resources.
Specifies the path to the folder containing the training images. This is a string parameter where you need to provide the directory path.
Determines the number of images processed in each training batch. This is an integer value with a default of 1 and a minimum of 1. Larger batch sizes can speed up training but require more memory.
Sets the maximum number of training epochs. This is an integer value with a default of 10 and a minimum of 1. More epochs can lead to better model performance but increase training time.
Specifies how often, in epochs, a checkpoint is saved during training. This is an integer value with a default of 10 and a minimum of 1. Regular saving helps in recovering from interruptions and in comparing intermediate results.
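A rough sketch of how the batch size, epoch count, and save interval interact; the image count below is an assumption for illustration, not a value from the node.

```python
import math

# Bookkeeping for a training run: steps per epoch, total optimizer steps,
# and the epochs at which a checkpoint would be written.
num_images = 40        # assumed size of the training image folder
batch_size = 1         # node default
max_epochs = 10        # node default
save_every = 10        # node default

steps_per_epoch = math.ceil(num_images / batch_size)
total_steps = steps_per_epoch * max_epochs
save_points = [e for e in range(1, max_epochs + 1) if e % save_every == 0]
```

With these defaults the run performs 400 optimizer steps and writes a single checkpoint at epoch 10; lowering the save interval trades disk space for more recovery points.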
Defines the number of leading caption tokens to keep in place (excluded from caption shuffling) during training. This is an integer value with a default of 0 and a minimum of 0. Keeping the first tokens fixed helps preserve trigger words at the start of each caption.
Sets the minimum Signal-to-Noise Ratio (SNR) gamma value. This is a float value with a default of 0, a minimum of 0, and a step of 0.1. A non-zero value enables Min-SNR loss weighting, which caps the loss contribution of low-noise timesteps and can stabilize and speed up convergence; 0 leaves the standard loss unchanged.
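The Min-SNR idea can be sketched in a few lines. The function below is an illustrative formulation of the published weighting scheme, not the node's exact implementation; the SNR values are made up for demonstration.

```python
def min_snr_weight(snr: float, gamma: float) -> float:
    """Weight a timestep's loss by min(SNR, gamma) / SNR."""
    if gamma <= 0:          # the node's default of 0 disables the weighting
        return 1.0
    return min(snr, gamma) / snr

# Low-noise (high-SNR) timesteps get down-weighted; noisy timesteps keep
# their full weight, so no single regime dominates the loss.
w_clean = min_snr_weight(snr=25.0, gamma=5.0)   # capped: 5 / 25
w_noisy = min_snr_weight(snr=0.5, gamma=5.0)    # uncapped: full weight
```

A common starting point in the literature is gamma around 5, but the best value depends on the dataset and base model.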
Specifies the learning rate for the text encoder. This is a float value with a default of 0.0001, a minimum of 0, and a step of 0.00001. The learning rate controls how quickly the model adapts to new data.
Sets the learning rate for the U-Net architecture. This is a float value with a default of 0.0001, a minimum of 0, and a step of 0.00001. Proper tuning of this parameter is crucial for effective training.
Defines the learning rate scheduler to be used. Options include "cosine_with_restarts", "linear", "cosine", "polynomial", "constant", and "constant_with_warmup". The scheduler controls how the learning rate changes during training.
Specifies the number of cycles for learning rate restarts. This is an integer value with a default of 1 and a minimum of 1. This parameter is relevant when using the "cosine_with_restarts" scheduler.
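The "cosine_with_restarts" schedule can be sketched as below. This is a common formulation (ignoring warmup), written here for illustration; the node's exact schedule may differ in details.

```python
import math

# Learning rate decays along a cosine from lr_max toward 0 and restarts
# num_cycles times over the course of training.
def cosine_with_restarts(step: int, total_steps: int, lr_max: float,
                         num_cycles: int = 1) -> float:
    progress = step / max(1, total_steps)
    cycle_pos = (progress * num_cycles) % 1.0   # position within current cycle
    return lr_max * 0.5 * (1.0 + math.cos(math.pi * cycle_pos))

# Example: 100 steps, two cycles, lr_max of 1e-4 (the node's default LR).
lrs = [cosine_with_restarts(s, 100, 1e-4, num_cycles=2) for s in range(100)]
```

With two cycles, the rate decays to near zero by step 50, jumps back to the maximum, and decays again; restarts can help the optimizer escape shallow minima late in training.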
Defines the type of optimizer used for training. Options include "AdamW8bit", "Lion8bit", and "SGDNesterov". The optimizer determines how model weights are updated from gradients; the 8-bit variants quantize optimizer state to reduce memory usage.
The output is the trained LoRA model. This model can be used for inference or further fine-tuning. The trained model encapsulates the learned patterns and features from the provided training data, making it suitable for generating or enhancing AI art based on the specific styles or tasks it was trained on.
© Copyright 2024 RunComfy. All Rights Reserved.