
ComfyUI Node: Lora Training in Comfy (Advanced)

Class Name: Lora Training in Comfy (Advanced)
Category: LJRE/LORA
Author: LarryJane491 (Account age: 165 days)
Extension: Lora-Training-in-Comfy
Last Updated: 6/9/2024
GitHub Stars: 0.3K

How to Install Lora-Training-in-Comfy

Install this extension via the ComfyUI Manager by searching for Lora-Training-in-Comfy:
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter Lora-Training-in-Comfy in the search bar.
After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.


Lora Training in Comfy (Advanced) Description

A node for training LoRA models inside ComfyUI, with advanced options for creating customized models from your own data.

Lora Training in Comfy (Advanced):

Lora Training in Comfy (Advanced) is a powerful node for training Low-Rank Adaptation (LoRA) models within the ComfyUI environment. It lets you fine-tune pre-trained models on your own datasets, producing customized models suited to specific artistic styles or tasks. The node supports several adapter algorithms, including LoRA, DyLoRA, LoCon, LoHa, and LoKr, giving you flexibility and control over the training process. Its goal is to simplify model training while offering a high degree of customization, making it accessible to AI artists without a deep technical background. With adjustable parameters for network dimensions, learning rates, optimizers, and more, you can tailor the training run to your requirements.
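
To make the core idea concrete: LoRA freezes the base model's weights and learns a pair of small low-rank matrices whose product is added to each adapted layer. The sketch below is a minimal, illustrative PyTorch version (the node itself delegates training to kohya-style scripts; the class and variable names here are hypothetical):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 32, alpha: float = 32.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                    # base weights stay frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)  # project down to rank
        self.up = nn.Linear(rank, base.out_features, bias=False)   # project back up
        nn.init.kaiming_uniform_(self.down.weight)
        nn.init.zeros_(self.up.weight)                 # update starts at zero
        self.scale = alpha / rank                      # alpha/dim rescales the update

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.up(self.down(x))
```

Only the two small matrices are trained, which is why LoRA files are tiny compared to full model checkpoints.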

Lora Training in Comfy (Advanced) Input Parameters:

v2

This parameter specifies whether to use version 2 of the model. Options are "No" and "Yes". Selecting "Yes" sets the is_v2_model flag to 1, indicating the use of version 2.

networkmodule

Defines the network module to be used for training. Options include "networks.lora" and "lycoris.kohya". This parameter determines the underlying architecture and functionalities available during training.

networkdimension

Specifies the rank (dimension) of the LoRA network. This is an integer value with a default of 32 and a minimum of 0. Higher values increase the adapter's capacity and the size of the resulting file.

networkalpha

Sets the alpha value for the network, which scales the strength of the LoRA update and thereby the effective learning rate of the adapter. This is an integer value with a default of 32 and a minimum of 0.
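
In kohya-style trainers, the adapter's update is multiplied by alpha divided by the network dimension, so lowering alpha effectively lowers the learning rate of the LoRA weights. A quick illustration (example values, not recommendations):

```python
network_dimension = 32
network_alpha = 16

scale = network_alpha / network_dimension
print(scale)  # 0.5 -> the LoRA update contributes at half strength
```

A common starting point is alpha equal to the network dimension (scale 1.0) or half of it.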

trainingresolution

Defines the resolution at which the training images will be processed. This is an integer value with a default of 512 and a step of 8. Higher resolutions can lead to better quality but require more computational resources.

data_path

Specifies the path to the folder containing the training images. This is a string parameter where you need to provide the directory path.
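
This extension builds on kohya-style training scripts, which typically expect the data folder to contain subfolders named <repeats>_<concept>, each holding images with optional matching .txt caption files. A hypothetical layout:

```
training_data/
└── 10_mystyle/            # "10" = repeats per image per epoch (example value)
    ├── image_001.png
    ├── image_001.txt      # caption, e.g. "mystyle, portrait of a woman"
    ├── image_002.png
    └── image_002.txt
```

With this layout, data_path typically points at the parent folder (training_data above), not at the numbered subfolder.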

batch_size

Determines the number of images processed in each training batch. This is an integer value with a default of 1 and a minimum of 1. Larger batch sizes can speed up training but require more memory.

max_train_epoches

Sets the maximum number of training epochs. This is an integer value with a default of 10 and a minimum of 1. More epochs can lead to better model performance but increase training time.
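
Epochs, dataset size, per-image repeats, and batch size together determine the total number of optimization steps. A back-of-the-envelope calculation (the repeat count comes from the dataset folder name in kohya-style layouts; all values are examples):

```python
num_images = 20          # images in the dataset
repeats = 10             # from a folder named "10_<concept>"
batch_size = 1
max_train_epochs = 10

steps_per_epoch = (num_images * repeats) // batch_size
total_steps = steps_per_epoch * max_train_epochs
print(steps_per_epoch, total_steps)  # 200 steps per epoch, 2000 steps total
```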

save_every_n_epochs

Specifies how often the model should be saved during training. This is an integer value with a default of 10 and a minimum of 1. Regular saving helps in recovering from interruptions.

keeptokens

Defines the number of leading caption tokens to keep in place when captions are shuffled during training. This is an integer value with a default of 0 and a minimum of 0. Keeping the first tokens fixed helps ensure a trigger word stays at the start of every caption.

minSNRgamma

Sets the gamma value for Min-SNR (minimum Signal-to-Noise Ratio) loss weighting. This is a float value with a default of 0, a minimum of 0, and a step of 0.1. A value of 0 disables the weighting; enabling it caps the loss weight of low-noise timesteps, which can stabilize training and speed up convergence.
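
For reference, Min-SNR weighting clamps each timestep's signal-to-noise ratio at gamma before computing the loss weight, down-weighting the easy low-noise timesteps. A minimal sketch (epsilon-prediction form):

```python
import torch

def min_snr_weight(snr: torch.Tensor, gamma: float) -> torch.Tensor:
    """Per-timestep loss weight: min(SNR, gamma) / SNR."""
    return torch.clamp(snr, max=gamma) / snr

snr = torch.tensor([0.5, 1.0, 5.0, 20.0])  # example SNR values
print(min_snr_weight(snr, gamma=5.0))       # tensor([1.00, 1.00, 1.00, 0.25])
```

The original Min-SNR paper suggests a gamma around 5 as a reasonable default.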

learningrateText

Specifies the learning rate for the text encoder. This is a float value with a default of 0.0001, a minimum of 0, and a step of 0.00001. The learning rate controls how quickly the model adapts to new data.

learningrateUnet

Sets the learning rate for the U-Net architecture. This is a float value with a default of 0.0001, a minimum of 0, and a step of 0.00001. Proper tuning of this parameter is crucial for effective training.

learningRateScheduler

Defines the learning rate scheduler to be used. Options include "cosine_with_restarts", "linear", "cosine", "polynomial", "constant", and "constant_with_warmup". The scheduler controls how the learning rate changes during training.
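
To see how "cosine_with_restarts" differs from a plain cosine decay, here is a simplified, illustrative implementation (no warmup; the underlying training scripts typically rely on a library scheduler rather than code like this):

```python
import math

def cosine_with_restarts(step: int, total_steps: int, num_cycles: int, base_lr: float) -> float:
    """LR follows a cosine curve that resets to base_lr num_cycles times."""
    progress = step / max(1, total_steps)            # 0.0 .. 1.0 over the run
    cycle_progress = (progress * num_cycles) % 1.0   # position within the current cycle
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * cycle_progress))

for s in range(0, 2000, 500):
    print(s, f"{cosine_with_restarts(s, 2000, num_cycles=2, base_lr=1e-4):.6f}")
    # LR decays to ~0 by step 1000, then restarts at 1e-4 for the second cycle
```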

lrRestartCycles

Specifies the number of cycles for learning rate restarts. This is an integer value with a default of 1 and a minimum of 1. This parameter is relevant when using the "cosine_with_restarts" scheduler.

optimizerType

Defines the type of optimizer to be used for training. Options include "AdamW8bit", "Lion8bit", and "SGDNesterov". The optimizer affects how the model weights are updated during training.
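
These names correspond to well-known optimizer implementations: AdamW8bit and Lion8bit are 8-bit optimizers from the bitsandbytes library (reducing GPU memory use), while SGDNesterov is stochastic gradient descent with Nesterov momentum. A hedged sketch of how such a string might be resolved (illustrative, not the node's actual code; the momentum value is an example):

```python
import torch

def build_optimizer(name: str, params, lr: float):
    if name == "AdamW8bit":
        import bitsandbytes as bnb
        return bnb.optim.AdamW8bit(params, lr=lr)
    if name == "Lion8bit":
        import bitsandbytes as bnb
        return bnb.optim.Lion8bit(params, lr=lr)
    if name == "SGDNesterov":
        return torch.optim.SGD(params, lr=lr, momentum=0.9, nesterov=True)
    raise ValueError(f"Unsupported optimizer type: {name}")
```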

Lora Training in Comfy (Advanced) Output Parameters:

trained_model

The output is the trained LoRA model. This model can be used for inference or further fine-tuning. The trained model encapsulates the learned patterns and features from the provided training data, making it suitable for generating or enhancing AI art based on the specific styles or tasks it was trained on.
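
If the trained LoRA is saved as a .safetensors file, it can be loaded like any other LoRA, for example with ComfyUI's standard LoraLoader node, or inspected directly to verify training produced the expected weights (the file name and path below are hypothetical):

```python
from safetensors.torch import load_file

state = load_file("ComfyUI/models/loras/my_trained_lora.safetensors")  # hypothetical path
for key in list(state)[:5]:
    print(key, tuple(state[key].shape))  # e.g. lora_unet_..._lora_down.weight
```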

Lora Training in Comfy (Advanced) Usage Tips:

  • Ensure your training dataset is well-curated and representative of the styles or tasks you want the model to learn.
  • Start with the default parameters and gradually adjust them based on the training results and available computational resources.
  • Regularly save the model during training to prevent loss of progress in case of interruptions.
  • Experiment with different learning rate schedulers and optimizers to find the best combination for your specific training scenario.

Lora Training in Comfy (Advanced) Common Errors and Solutions:

"Invalid data path"

  • Explanation: The specified data path does not exist or is incorrect.
  • Solution: Verify that the path to the training images is correct and accessible.

"Out of memory"

  • Explanation: The batch size or training resolution is too high for the available GPU memory.
  • Solution: Reduce the batch size or training resolution to fit within the available memory.

"Invalid network dimension"

  • Explanation: The network dimension value is out of the acceptable range.
  • Solution: Ensure the network dimension is set to a value within the specified range (minimum 0).

"Learning rate too high"

  • Explanation: The specified learning rate is too high, causing unstable training.
  • Solution: Lower the learning rate to a more reasonable value to stabilize training.

"Unsupported optimizer type"

  • Explanation: The selected optimizer type is not supported.
  • Solution: Choose a supported optimizer type from the provided options (e.g., "AdamW8bit", "Lion8bit", "SGDNesterov").

Lora Training in Comfy (Advanced) Related Nodes

Go back to the Lora-Training-in-Comfy extension to check out more related nodes.
