
ComfyUI Node: PreSampling (LayerDiffuse)

Class Name: easy preSamplingLayerDiffusion
Category: EasyUse/PreSampling
Author: yolain (Account age: 1341 days)
Extension: ComfyUI Easy Use
Last Updated: 6/25/2024
GitHub Stars: 0.5K

How to Install ComfyUI Easy Use

Install this extension via the ComfyUI Manager by searching for ComfyUI Easy Use:
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter ComfyUI Easy Use in the search bar.
After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.


PreSampling (LayerDiffuse) Description

Facilitates layer diffusion in image generation workflows with customizable methods for enhanced output quality.

PreSampling (LayerDiffuse):

The easy preSamplingLayerDiffusion node configures layer diffusion for image generation workflows. It lets you apply a choice of layer diffusion methods that generate or blend foreground and background layers, producing more refined and detailed outputs. The node is particularly useful for AI artists who want to experiment with and fine-tune their image generation process, as it exposes a range of parameters for controlling the diffusion effect. It supports SDXL and SD1.5 models, giving you the flexibility to adapt it to different scenarios and artistic needs.
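
For orientation, the class name listed above is what appears as the class_type field when a workflow is exported in ComfyUI's API format. The following Python dict is a minimal sketch of such an entry, assuming an upstream Easy Use loader node supplies the pipe; the node ids and all widget values are illustrative placeholders, not taken from a real workflow.

```python
# Minimal sketch of an API-format workflow entry for this node.
# Node ids ("12", "7") and all widget values are illustrative placeholders.
presampling_entry = {
    "12": {
        "class_type": "easy preSamplingLayerDiffusion",
        "inputs": {
            "pipe": ["7", 0],           # PIPE_LINE output of a hypothetical upstream loader node "7"
            "method": "FG_ONLY_ATTN",   # one of the documented layer diffusion methods
            "weight": 1.0,              # diffusion intensity, -1.0 to 3.0
            "steps": 20,
            "cfg": 1.0,
            "sampler_name": "euler",
            "scheduler": "normal",
            "denoise": 1.0,             # 0.0 to 1.0
            "seed": 0,
            # optional inputs: "image", "blended_image", "mask"
        },
    }
}
```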

PreSampling (LayerDiffuse) Input Parameters:

pipe

This parameter represents the pipeline through which the image data flows. It is essential for maintaining the sequence of operations and ensuring that the layer diffusion is applied correctly. The type is PIPE_LINE.

method

This parameter specifies the layer diffusion method to be used. Options include FG_ONLY_ATTN, FG_ONLY_CONV, EVERYTHING, FG_TO_BLEND, and BG_TO_BLEND. Each method offers a different approach to blending and diffusing layers, allowing you to choose the one that best suits your artistic vision.

weight

This parameter controls the intensity of the layer diffusion effect. It is a float value with a default of 1.0, a minimum of -1, and a maximum of 3, with increments of 0.05. Adjusting the weight can significantly impact the final appearance of the image, making it either more subtle or more pronounced.

steps

This parameter determines the number of steps to be taken during the diffusion process. It is an integer value with a default of 20, a minimum of 1, and a maximum value that depends on the specific implementation. More steps generally result in a more detailed and refined image.

cfg

This parameter sets the classifier-free guidance (CFG) scale, which controls how strongly the sampler follows the conditioning during the diffusion process. It is a float value with a default of 1.0, a minimum of 0.0, and a maximum of 100.0. Adjusting the cfg can help fine-tune the diffusion effect to achieve the desired outcome.

sampler_name

This parameter specifies the name of the sampler to be used. Options include various samplers provided by comfy.samplers.KSampler.SAMPLERS, with the default being euler. The choice of sampler can affect the quality and style of the generated image.

scheduler

This parameter sets the scheduler for the diffusion process. It includes options from comfy.samplers.KSampler.SCHEDULERS plus any new schedulers added, with the default being normal. The scheduler controls the timing and sequence of the diffusion steps.
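
If you want to see exactly which sampler and scheduler names are available in your installation, you can inspect the lists the node draws from. This is a small sketch that assumes it runs inside a ComfyUI Python environment (for example, from a custom node), where the comfy package is importable:

```python
# Run inside a ComfyUI environment where the "comfy" package is importable.
import comfy.samplers

# The node's sampler_name options come from this list (includes "euler").
print(comfy.samplers.KSampler.SAMPLERS)

# The base scheduler options come from this list (includes "normal");
# the extension may append additional entries on top of these.
print(comfy.samplers.KSampler.SCHEDULERS)
```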

denoise

This parameter controls the level of denoising applied during the diffusion process. It is a float value with a default of 1.0, a minimum of 0.0, and a maximum of 1.0, with increments of 0.01. Denoising helps to reduce artifacts and improve the overall quality of the image.

seed

This parameter sets the random seed for the diffusion process, ensuring reproducibility of results. It is an integer value with a default of 0 and a minimum of 0. The maximum value depends on the implementation, but it is typically a large number to provide a wide range of possible outcomes.

image (optional)

This parameter allows you to input an initial image to be used as the starting point for the diffusion process. The type is IMAGE.

blended_image (optional)

This parameter allows you to input a blended image that can be used in conjunction with the initial image to create more complex diffusion effects. The type is IMAGE.

mask (optional)

This parameter allows you to input a mask that can be used to control which parts of the image are affected by the diffusion process. The type is MASK.

prompt (hidden)

This parameter is used internally to store the prompt for the diffusion process. The type is PROMPT.

extra_pnginfo (hidden)

This parameter is used internally to store additional PNG information. The type is EXTRA_PNGINFO.

my_unique_id (hidden)

This parameter is used internally to store a unique identifier for the diffusion process. The type is UNIQUE_ID.
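
Putting the parameters above together, the node's input declaration in ComfyUI terms would look roughly like the sketch below. This is an illustration reconstructed from the documented names, types, defaults, and ranges, not the extension's actual source code; the maximum values for steps and seed are placeholders because the page leaves them implementation-defined.

```python
# Illustrative INPUT_TYPES declaration reconstructed from the documented parameters;
# the real implementation in ComfyUI Easy Use may differ in detail.
import comfy.samplers

class PreSamplingLayerDiffusionSketch:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "pipe": ("PIPE_LINE",),
                "method": (["FG_ONLY_ATTN", "FG_ONLY_CONV", "EVERYTHING",
                            "FG_TO_BLEND", "BG_TO_BLEND"],),
                "weight": ("FLOAT", {"default": 1.0, "min": -1.0, "max": 3.0, "step": 0.05}),
                "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),        # max is a placeholder
                "cfg": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 100.0}),
                "sampler_name": (comfy.samplers.KSampler.SAMPLERS, {"default": "euler"}),
                "scheduler": (comfy.samplers.KSampler.SCHEDULERS, {"default": "normal"}),
                "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
                "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),  # max is a placeholder
            },
            "optional": {
                "image": ("IMAGE",),
                "blended_image": ("IMAGE",),
                "mask": ("MASK",),
            },
            "hidden": {
                "prompt": "PROMPT",
                "extra_pnginfo": "EXTRA_PNGINFO",
                "my_unique_id": "UNIQUE_ID",
            },
        }
```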

PreSampling (LayerDiffuse) Output Parameters:

pipe

This output parameter represents the updated pipeline after the layer diffusion has been applied. It ensures that the sequence of operations is maintained and that the resulting image is correctly processed. The type is PIPE_LINE.

PreSampling (LayerDiffuse) Usage Tips:

  • Experiment with different method options to see how each one affects the final image. Each method offers a unique approach to layer diffusion.
  • Adjust the weight parameter to control the intensity of the diffusion effect. Higher weights can create more dramatic changes, while lower weights result in subtler effects.
  • Use the steps parameter to increase the number of diffusion steps for more detailed and refined images.
  • Try different sampler_name and scheduler combinations to find the best settings for your specific artistic needs.
  • Utilize the denoise parameter to reduce artifacts and improve image quality, especially when working with high-detail images.

PreSampling (LayerDiffuse) Common Errors and Solutions:

Only SDXL and SD1.5 model supported for Layer Diffusion

  • Explanation: This error occurs when you attempt to use a model that is not supported by the layer diffusion process.
  • Solution: Ensure that you are using either the SDXL or SD1.5 model for the layer diffusion process.

Invalid method for the selected model

  • Explanation: This error occurs when the chosen layer diffusion method is not compatible with the selected model.
  • Solution: Verify that the method you selected is supported by the model you are using. Refer to the documentation for compatible methods.

Missing required input parameters

  • Explanation: This error occurs when one or more required input parameters are not provided.
  • Solution: Ensure that all required input parameters, such as pipe, method, weight, steps, cfg, sampler_name, scheduler, denoise, and seed, are correctly specified.

Invalid parameter value

  • Explanation: This error occurs when an input parameter value is outside the allowed range.
  • Solution: Check that the parameter values fall within the documented minimum and maximum limits and adjust them accordingly; a small range-checking sketch follows below.
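
Since most of these failures reduce to values drifting outside the documented ranges, a small pre-flight check can catch them before a prompt is queued. The sketch below validates the numeric parameters against the ranges listed on this page; the inputs dict layout is hypothetical and mirrors the earlier API-format example, and the steps upper bound is a placeholder.

```python
# Sketch of a pre-flight range check based on the ranges documented above.
# The "inputs" dict layout mirrors the earlier (hypothetical) API-format example.
RANGES = {
    "weight": (-1.0, 3.0),
    "steps": (1, 10000),          # upper bound is a placeholder; the doc leaves it implementation-defined
    "cfg": (0.0, 100.0),
    "denoise": (0.0, 1.0),
    "seed": (0, float("inf")),    # the doc only specifies a minimum of 0
}

def check_presampling_inputs(inputs: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means the values look valid."""
    problems = []
    for name, (lo, hi) in RANGES.items():
        if name not in inputs:
            problems.append(f"missing required input: {name}")
            continue
        value = inputs[name]
        if not (lo <= value <= hi):
            problems.append(f"{name}={value} is outside the allowed range [{lo}, {hi}]")
    return problems

# Example: weight=5.0 is reported as out of range.
print(check_presampling_inputs({"weight": 5.0, "steps": 20, "cfg": 1.0, "denoise": 1.0, "seed": 0}))
```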

PreSampling (LayerDiffuse) Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI Easy Use