Facilitates training LoRA models for AI artists to create custom, efficient, personalized AI-generated art.
The Eden_LoRa_trainer node is designed to facilitate the training of Low-Rank Adaptation (LoRA) models, which are specialized for fine-tuning large-scale AI models with a focus on specific styles, faces, or objects. This node is particularly beneficial for AI artists who want to create custom models that can generate images with unique characteristics or styles based on their own datasets. By leveraging LoRA, the node allows for efficient training with reduced computational resources, making it accessible for users without extensive technical expertise. The primary goal of the Eden_LoRa_trainer is to streamline the process of training and fine-tuning models, providing a user-friendly interface to achieve high-quality, personalized AI-generated art.
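The efficiency gain of LoRA comes from training two small low-rank factor matrices instead of the full weight matrix of each adapted layer. A back-of-the-envelope sketch of the parameter savings (the layer dimensions here are illustrative, not taken from this node):

```python
def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """Parameters trained by a LoRA update W + B @ A,
    where A is (rank x d_in) and B is (d_out x rank)."""
    return rank * d_in + d_out * rank

# Full fine-tuning of one hypothetical 4096x4096 layer vs. a rank-16 LoRA:
full = 4096 * 4096
lora = lora_trainable_params(4096, 4096, 16)
ratio = lora / full  # LoRA trains well under 1% of the layer's weights
```

This is why LoRA training fits on consumer hardware: only the small factors are updated, while the base model's weights stay frozen.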
This parameter specifies the path to the folder containing the training images. It is crucial for the node to know where to find the images that will be used for training the LoRA model. The default value is ".", which refers to the current directory.
The mode parameter determines the type of model you are training. The available options are "style", "face", and "object"; the default value is "style". Choosing the correct mode is essential, as it influences the training process and the resulting model's performance.
This parameter sets the name for the LoRA model being trained. It helps in identifying and organizing different models. The default value is "Eden_Token_LoRa".
This parameter specifies the name of the checkpoint file to be used. It is essential for resuming training from a specific point or using a pre-trained model as a starting point. The available options are derived from the list of checkpoint filenames.
The training_resolution parameter defines the resolution at which the training images will be processed. It accepts integer values with a minimum of 256, a maximum of 1024, and a default of 512. The resolution impacts the quality and detail of the trained model.
The train_batch_size parameter sets the number of images processed in each training batch. It accepts integer values with a minimum of 1, a maximum of 8, and a default of 4. The batch size affects the training speed and memory usage.
This parameter determines the maximum number of training steps. It accepts integer values with a minimum of 10, a maximum of 10000, and a default of 300. The number of steps influences the training duration and the model's convergence.
This parameter sets the learning rate for textual inversion. It accepts float values with a minimum of 0.0, a maximum of 0.005, and a default of 0.001. The learning rate affects the speed and stability of the training process.
This parameter sets the learning rate for the U-Net model. It accepts float values with a minimum of 0.0, a maximum of 0.005, and a default of 0.0005. The learning rate is crucial for the model's performance and convergence.
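Two separate learning rates like these are typically implemented as distinct optimizer parameter groups: one for the learned token embeddings (textual inversion) and one for the U-Net's LoRA weights. A minimal plain-Python sketch of the idea, using bare gradient descent as a stand-in for whatever optimizer the trainer actually uses:

```python
def sgd_step(params, grads, lr):
    # One gradient-descent update per parameter: p <- p - lr * g
    return [p - lr * g for p, g in zip(params, grads)]

params, grads = [0.5, -0.5], [1.0, -1.0]

# The two groups use the node's default learning rates from above:
ti_update = sgd_step(params, grads, lr=0.001)     # textual-inversion group
unet_update = sgd_step(params, grads, lr=0.0005)  # U-Net LoRA group
```

The embedding group usually tolerates a higher rate than the U-Net group, which is consistent with the defaults (0.001 vs. 0.0005).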
This parameter defines the rank of the LoRA model. It accepts integer values with a minimum of 1, a maximum of 64, and a default of 16. The rank impacts the model's capacity and efficiency.
This boolean parameter determines whether to disable textual inversion. The default value is False. Disabling textual inversion can be useful in certain training scenarios.
This parameter sets the number of tokens used in training. It accepts integer values with a minimum of 1, a maximum of 5, and a default of 3. The number of tokens influences the model's ability to learn and represent different concepts.
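Trainers of this kind usually represent the learned concept as a run of placeholder tokens inserted into the prompt, one per trained embedding. A sketch of that idea; the `<s0>`-style naming scheme here is a hypothetical illustration, not necessarily the tokens this node emits:

```python
def build_token_string(n_tokens: int) -> str:
    # Hypothetical naming scheme: one placeholder token per learned embedding.
    return "".join(f"<s{i}>" for i in range(n_tokens))

# With the default of 3 tokens, the concept occupies three embedding slots:
tokens = build_token_string(3)
```

More tokens give the model more embedding capacity to capture a concept, at the cost of a longer effective prompt.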
The save_checkpoint_every_n_steps parameter specifies the frequency of saving checkpoints during training. It accepts integer values with a minimum of 10, a maximum of 10000, and a default of 200. Regular checkpoints help in resuming training and preventing data loss.
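The checkpointing cadence this parameter controls can be sketched as a simple modular schedule (a sketch of the scheduling logic, not this node's actual implementation):

```python
def checkpoint_steps(max_train_steps: int, every_n: int) -> list[int]:
    # Steps at which a checkpoint would be written during training.
    return [s for s in range(1, max_train_steps + 1) if s % every_n == 0]

# With the defaults (300 max steps, save every 200), only one intermediate
# checkpoint is written before the final model is saved:
steps = checkpoint_steps(300, 200)
```

Lowering every_n trades disk space for finer-grained resume points.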
This parameter sets the number of sample images generated during training. It accepts integer values with a minimum of 2, a maximum of 10, and a default of 4. Sample images provide visual feedback on the model's progress.
This parameter defines the scale of the LoRA model applied to the sample images. It accepts float values with a minimum of 0.0, a maximum of 1.25, and a default of 0.7. The scale affects the intensity of the applied style or concept.
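Conceptually, the scale is a multiplier on the LoRA's weight delta before it is added to the base model, which is why 0.0 disables the effect entirely. A minimal sketch of that blending, with flat lists standing in for weight tensors:

```python
def apply_lora(base, delta, scale=0.7):
    # scale = 0.0 leaves the base model untouched; 1.0 applies the full update.
    return [b + scale * d for b, d in zip(base, delta)]

# At scale 0.0 the base weights pass through unchanged:
unchanged = apply_lora([1.0, 2.0], [0.5, -0.5], scale=0.0)

# At scale 0.5 the update is applied at half strength:
half = apply_lora([1.0], [2.0], scale=0.5)
```

Values above 1.0 (up to this node's maximum of 1.25) over-apply the learned concept, which can exaggerate the style at the cost of image quality.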
The plot_training_graphs_on_disk boolean parameter determines whether to save training graphs on disk. The default value is False. Saving graphs can help in analyzing the training process and performance.
This parameter sets the random seed for reproducibility. It accepts integer values with a minimum of 0, a maximum of 100000, and a default of 0. Setting a seed ensures consistent results across different runs.
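Seeding works because a pseudo-random generator initialized with the same seed always produces the same sequence. A small stdlib illustration of the principle (the actual trainer will also seed its deep-learning framework's generators):

```python
import random

def sample_ints(seed: int, n: int = 3) -> list[int]:
    # A local generator, so the seed alone fully determines the draw.
    rng = random.Random(seed)
    return [rng.randint(0, 100000) for _ in range(n)]

same_a = sample_ints(0)
same_b = sample_ints(0)   # identical to same_a: same seed, same sequence
different = sample_ints(1)  # a different seed gives a different sequence
```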
This output parameter provides a set of sample images generated during the training process. These images help in visually assessing the model's progress and quality.
This output parameter specifies the file path to the trained LoRA model. It is essential for loading and using the trained model in future tasks.
This output parameter provides the file path to the embeddings generated during training. These embeddings are crucial for the model's ability to understand and generate specific styles or concepts.
This output parameter delivers a final message indicating the completion of the training process. It provides a summary of the training duration and any relevant information.
Experiment with different mode settings to find the best fit for your specific use case, whether it's style, face, or object.
Adjust the training_resolution and train_batch_size based on your hardware capabilities to balance between training speed and model quality.
Use the save_checkpoint_every_n_steps parameter to prevent data loss and facilitate resuming training if needed.
Enable the plot_training_graphs_on_disk option to analyze the training process and make informed adjustments to the parameters.
"<error_message>
. Please check the modules specified in --lora_unet_blocks are correct"--lora_unet_blocks
parameter and ensure they are accurate and correctly formatted.special_params.json
file exists in the specified path and is up-to-date. If not, retrain your concept with the latest trainer version.© Copyright 2024 RunComfy. All Rights Reserved.