Facilitates layer diffusion in AI art with a user-friendly interface, multiple diffusion methods, and a range of artistic effects.
The easy kSamplerLayerDiffusion node streamlines layer diffusion in AI-generated art, providing a user-friendly interface for applying diffusion methods to layers within your pipeline. By combining different methods, sampling techniques, and schedulers, you can achieve a wide range of artistic effects, from subtle blending to pronounced transformations, with fine control over the final output. Because it hides the underlying technical details, the node is useful for both novice and experienced AI artists who want to experiment with different diffusion strategies.
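To make the inputs and outputs described below concrete, here is a hypothetical sketch of how a ComfyUI custom node exposing them might be declared. The class name, the type strings, the cfg default, and the MAX_SEED_NUM value are illustrative assumptions, not taken from the node's actual source.

```python
# Hypothetical sketch of a ComfyUI-style node declaration with the inputs and
# outputs documented on this page. Names and type strings are assumptions.
MAX_SEED_NUM = 2**32 - 1  # assumed upper bound; the real constant may differ


class KSamplerLayerDiffusionSketch:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "pipe": ("PIPE_LINE",),
                "method": (["FG_ONLY_ATTN", "FG_ONLY_CONV", "EVERYTHING",
                            "FG_TO_BLEND", "BG_TO_BLEND"],),
                "weight": ("FLOAT", {"default": 1.0, "min": -1.0, "max": 3.0}),
                "steps": ("INT", {"default": 20, "min": 1}),
                "cfg": ("FLOAT", {"default": 7.0}),  # default is an assumption
                "sampler_name": (["euler"],),        # more samplers in practice
                "scheduler": (["normal"],),          # more schedulers in practice
                "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0}),
                "seed": ("INT", {"default": 0, "min": 0, "max": MAX_SEED_NUM}),
            },
            "optional": {
                "image": ("IMAGE",),
                "blended_image": ("IMAGE",),
                "mask": ("MASK",),
            },
        }

    RETURN_TYPES = ("PIPE_LINE", "IMAGE", "IMAGE", "MASK")
    RETURN_NAMES = ("pipe", "final_image", "original_image", "alpha")
    FUNCTION = "run"
```

ComfyUI reads declarations like INPUT_TYPES and RETURN_TYPES to build a node's widgets and sockets; the ranges shown here mirror the parameter limits documented below.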
This parameter represents the pipeline that the node will operate on. It is a required input and serves as the foundation for all subsequent operations within the node.
Specifies the diffusion method to be applied. Options include FG_ONLY_ATTN, FG_ONLY_CONV, EVERYTHING, FG_TO_BLEND, and BG_TO_BLEND. Each method offers a different approach to layer diffusion, affecting how the layers interact and blend with each other.
A floating-point value that determines the intensity of the diffusion effect. The default value is 1.0, with a minimum of -1 and a maximum of 3. Adjusting this parameter allows you to fine-tune the strength of the diffusion applied to the layers.
An integer value that sets the number of diffusion steps to be performed. The default is 20, with a minimum of 1. Increasing the number of steps can result in more refined and detailed diffusion effects.
A floating-point value that sets the classifier-free guidance (CFG) scale, which controls how strongly the conditioning steers the diffusion process. Higher values follow the prompt more closely but can reduce variety; the best setting depends on the chosen method and other parameters.
Specifies the name of the sampler to be used. The default is euler, but other options are available depending on the samplers supported by the system. This parameter influences the sampling technique applied during the diffusion process.
Determines the scheduler to be used for the diffusion process. The default is normal, but additional schedulers can be specified. This parameter affects the timing and sequence of the diffusion steps.
A floating-point value that controls how much denoising is applied during sampling. The default is 1.0 (full denoising), with a minimum of 0.0 and a maximum of 1.0. Lower values preserve more of the input image, which is useful when refining an existing image rather than generating one from scratch.
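A common convention among k-samplers (this is a hedged sketch of typical behavior, not this node's confirmed internals) is that a denoise value below 1.0 runs only the last fraction of the step schedule:

```python
def effective_step_range(steps: int, denoise: float) -> tuple:
    """Sketch of the common convention: denoise < 1.0 executes only the last
    fraction of the schedule, preserving more of the input image."""
    if not 0.0 <= denoise <= 1.0:
        raise ValueError("denoise must be between 0.0 and 1.0")
    start = steps - int(steps * denoise)  # number of steps skipped at the front
    return start, steps


# With 20 steps and denoise=0.5, only the last 10 steps would be executed.
```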
An integer value used to initialize the random number generator for the diffusion process. The default is 0, with a minimum value of 0 and a maximum defined by MAX_SEED_NUM. Setting a specific seed ensures reproducibility of the results.
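The reproducibility guarantee works the same way as seeding any pseudo-random generator; a minimal standard-library illustration (the node itself would seed its sampler's own RNG, for example in PyTorch):

```python
import random


def sample_noise(seed: int, n: int = 4) -> list:
    """Draw n pseudo-random values from a deterministically seeded generator."""
    rng = random.Random(seed)  # independent generator; no global state leaks
    return [rng.random() for _ in range(n)]


# The same seed always yields the same draws, so a run can be reproduced exactly.
assert sample_noise(seed=0) == sample_noise(seed=0)
# Different seeds (almost certainly) yield different draws.
assert sample_noise(seed=0) != sample_noise(seed=1)
```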
An optional input parameter that allows you to provide an initial image for the diffusion process. This image serves as the starting point for the diffusion.
An optional input parameter that allows you to provide a blended image for the diffusion process. This image is used in conjunction with the initial image to achieve the desired blending effect.
An optional input parameter that allows you to provide a mask for the diffusion process. The mask defines the areas of the image that will be affected by the diffusion.
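Conceptually, the mask gates which pixels receive the diffused result; a minimal per-pixel sketch of that blend (the node operates on image tensors, but the arithmetic is the same):

```python
def apply_mask(original, diffused, mask):
    """Blend per pixel: mask=1.0 keeps the diffused value, mask=0.0 keeps the
    original, and fractional values interpolate between the two."""
    return [m * d + (1.0 - m) * o
            for o, d, m in zip(original, diffused, mask)]


# Only the last two pixels (mask=1.0) take the diffused values.
print(apply_mask([0.2, 0.4, 0.6, 0.8],
                 [1.0, 1.0, 1.0, 1.0],
                 [0.0, 0.0, 1.0, 1.0]))  # → [0.2, 0.4, 1.0, 1.0]
```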
A hidden input through which ComfyUI supplies the serialized workflow prompt. It is populated automatically by the front end and is typically used to embed generation metadata rather than set manually.
A hidden input through which ComfyUI supplies extra PNG metadata (such as the workflow graph) to be embedded in saved images.
A hidden input through which ComfyUI supplies the node's unique identifier, which can be used to track and manage different diffusion operations.
The primary output of the node, representing the modified pipeline after the diffusion process. This output can be used as input for subsequent nodes or for final rendering.
The final image generated after the diffusion process. This image reflects all the applied diffusion methods and settings, providing a visual representation of the node's effect.
The original image before any diffusion was applied. This output allows you to compare the initial and final states of the image, helping you understand the impact of the diffusion process.
An array representing the alpha values used during the diffusion process. These values indicate the transparency levels applied to different parts of the image, contributing to the blending and layering effects.
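An alpha output like this lets the generated foreground be composited over any background using the standard "over" operator; a minimal per-pixel sketch:

```python
def composite_over(fg, bg, alpha):
    """Standard 'over' compositing: out = alpha * fg + (1 - alpha) * bg,
    applied per pixel, with all values in [0, 1]."""
    return [a * f + (1.0 - a) * b for f, b, a in zip(fg, bg, alpha)]


# A half-transparent foreground pixel (alpha=0.5) averages the two layers;
# a fully opaque pixel (alpha=1.0) replaces the background entirely.
result = composite_over([1.0, 1.0], [0.0, 0.2], [0.5, 1.0])
# result == [0.5, 1.0]
```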
Usage tips:
- Experiment with different method settings to achieve various artistic effects; each method offers a unique approach to layer diffusion.
- Adjust the weight parameter to fine-tune the intensity of the diffusion. Higher values create more pronounced effects, while lower values produce subtler changes.
- Use the steps parameter to control the level of detail. More steps can yield more refined results but may increase processing time.
- Set a fixed seed value to ensure reproducibility of your results, especially when experimenting with different settings.
- Combine the image, blended_image, and mask parameters for additional control over the diffusion process and more complex effects.

Troubleshooting:
- Ensure the method parameter is set to one of the supported options: FG_ONLY_ATTN, FG_ONLY_CONV, EVERYTHING, FG_TO_BLEND, or BG_TO_BLEND.
- An out-of-range denoise value causes an error; keep the denoise parameter within its allowed range (0.0 to 1.0).
- An out-of-range seed value also causes an error; keep the seed parameter between 0 and MAX_SEED_NUM and adjust the seed value accordingly.

© Copyright 2024 RunComfy. All Rights Reserved.