Powerful node for generating high-quality images from text prompts using machine learning, masks, and customizable parameters.
UltraEdit_Generation_Zho is a powerful node designed to facilitate the generation of high-quality images based on textual prompts and optional image inputs. This node leverages advanced machine learning models to interpret and transform your input data into visually compelling outputs. It is particularly useful for AI artists looking to create detailed and customized images by providing both positive and negative prompts, which guide the generation process. The node also supports the use of masks to refine specific areas of the image, offering a high degree of control over the final result. By adjusting various parameters such as the number of inference steps, guidance scales, and seed values, you can fine-tune the output to meet your artistic vision.
pipe
This parameter expects a pre-loaded model pipeline (UEMODEL) that will be used for image generation. The pipeline contains all the necessary components, such as tokenizers, schedulers, and encoders, to process the input data and generate the output image.
image
This parameter accepts an input image (IMAGE) that serves as the base for the generation process. The image is resized so that its total area is close to 512x512 pixels to ensure optimal processing.
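The node's exact resizing rule is internal, but a plausible sketch is to scale both sides by the same factor so the total area lands near 512x512 while preserving aspect ratio, snapping each side to a multiple of 8 as diffusion models usually require (the helper name and the multiple-of-8 rounding are assumptions, not the node's confirmed implementation):

```python
import math

def resize_to_area(width: int, height: int,
                   target_area: int = 512 * 512, multiple: int = 8):
    """Scale (width, height) uniformly so the area is close to
    target_area, then snap each side to the nearest multiple."""
    scale = math.sqrt(target_area / (width * height))
    new_w = max(multiple, round(width * scale / multiple) * multiple)
    new_h = max(multiple, round(height * scale / multiple) * multiple)
    return new_w, new_h
```

For example, a 1920x1080 input maps to roughly 680x384, which keeps the 16:9 aspect ratio while staying near the 512x512 pixel budget.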
positive
This is a textual prompt (STRING) that describes the desired elements or features to be included in the generated image. The default value is "cat", and it supports multiline input for more complex descriptions.
negative
This is a textual prompt (STRING) that specifies the elements or features to be excluded from the generated image. The default value is "worst quality, low quality", and it supports multiline input for detailed exclusions.
steps
This parameter (INT) defines the number of inference steps to be taken during the generation process. The default value is 50, with a minimum of 1 and a maximum of 100. More steps generally result in higher quality images but require more processing time.
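The steps value controls how finely the denoising process is subdivided. The actual schedule lives inside the pipeline's scheduler, but a minimal stand-in shows the idea: the training timestep range is sampled at evenly spaced, descending points, so more steps means a finer schedule (the function name and the simple even spacing are assumptions for illustration):

```python
def timestep_schedule(num_inference_steps: int,
                      num_train_timesteps: int = 1000):
    """Evenly spaced descending timesteps, a simplified stand-in for
    the scheduler inside the pipeline: more steps -> finer schedule."""
    step = num_train_timesteps // num_inference_steps
    return [num_train_timesteps - 1 - i * step
            for i in range(num_inference_steps)]
```

With the default of 50 steps the model denoises at timesteps 999, 979, 959, and so on; with 10 steps it takes much coarser jumps, which is faster but loses detail.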
image_guidance_scale
This parameter (FLOAT) controls the influence of the input image on the final output. The default value is 1.5, with a range from 0 to 2.5. Higher values make the generated image more closely resemble the input image.
text_guidance_scale
This parameter (FLOAT) adjusts the influence of the textual prompts on the generated image. The default value is 7.5, with a range from 0 to 12.5. Higher values make the generated image more closely follow the textual descriptions.
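The two scales echo the dual classifier-free guidance introduced by InstructPix2Pix, where three noise predictions (unconditional, image-conditioned, and fully conditioned) are blended; whether UltraEdit follows this formula exactly is an assumption, and the function and argument names here are hypothetical:

```python
import numpy as np

def guided_noise(eps_uncond, eps_img, eps_full,
                 image_scale, text_scale):
    """InstructPix2Pix-style dual classifier-free guidance:
      eps_uncond: prediction with neither image nor text conditioning
      eps_img:    prediction with image conditioning only
      eps_full:   prediction with image and text conditioning
    Each scale amplifies one conditioning direction independently."""
    return (eps_uncond
            + image_scale * (eps_img - eps_uncond)
            + text_scale * (eps_full - eps_img))
```

Setting both scales to 1.0 collapses the formula to the fully conditioned prediction; pushing text_guidance_scale above 1.0 exaggerates the prompt's influence, while pushing image_guidance_scale up pulls the result back toward the input image.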
seed
This parameter (INT) sets the random seed for the generation process, ensuring reproducibility. The default value is 0, with a range from 0 to 0xffffffffffffffff. Changing the seed value will result in different generated images even with the same other parameters.
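Reproducibility comes from the seed fixing the initial noise: the same seed always produces the same starting latents, so the whole (deterministic) denoising run produces the same image. A numpy sketch of the idea, with hypothetical names and a typical latent shape (the real node seeds a torch generator instead):

```python
import numpy as np

def initial_latents(seed: int, shape=(4, 64, 64)):
    """Deterministic starting noise: the same seed always yields the
    same latent tensor, making the generation reproducible."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)
```

Two runs with seed 0 are bit-identical, while seeds 0 and 1 diverge immediately, which is why sweeping the seed is the standard way to explore variations.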
mask
This optional parameter (IMAGE) allows you to provide a mask image that specifies areas of the input image to be preserved or altered. If not provided, a default white mask will be used, indicating no specific areas of focus.
image
The output parameter is an image (IMAGE) generated based on the provided inputs and parameters. This image reflects the combined influence of the input image, textual prompts, and any optional mask, processed through the specified model pipeline.
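ComfyUI IMAGE outputs are float tensors with values in [0, 1]; saving or viewing them elsewhere usually means converting to 8-bit. A small sketch of that last step (the function name is hypothetical, and clipping guards against slight out-of-range values from the decoder):

```python
import numpy as np

def to_uint8_image(float_image: np.ndarray) -> np.ndarray:
    """Convert a float (H, W, C) image in [0, 1] to the 8-bit array
    most image libraries expect; out-of-range values are clipped."""
    return (np.clip(float_image, 0.0, 1.0) * 255).round().astype(np.uint8)
```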
- Experiment with text_guidance_scale and image_guidance_scale to find the right balance between the input image and textual prompts.
- Use the seed parameter to generate multiple variations of an image with the same settings, helping you choose the best result.
- Start with a lower steps value to quickly preview the results, then increase the steps for the final high-quality image.
- An error can occur if the pipe parameter is not provided or incorrectly loaded.
- An error can occur if the image parameter is not in the expected format or resolution.
- An error can occur if the positive or negative prompts exceed the maximum allowed length.
- An error can occur if the mask parameter is provided but the mask image file is not found.
© Copyright 2024 RunComfy. All Rights Reserved.