Enhance AI art generation by encoding positive and negative prompts for refined image output.
The CLIP Positive-Negative XL (WLSH) node is designed to enhance your AI art generation by leveraging the power of CLIP (Contrastive Language-Image Pre-Training) to encode both positive and negative textual prompts. This node allows you to input descriptive text that you want to emphasize (positive) and text that you want to de-emphasize or avoid (negative). By encoding these texts, the node helps in conditioning the AI model to generate images that align more closely with your creative vision. This dual conditioning approach ensures that the generated images are not only guided by what you want to see but also by what you want to avoid, providing a more refined and controlled output.
This parameter expects a CLIP model instance. The CLIP model is responsible for encoding the provided textual prompts into a format that can be used for conditioning the AI model. The quality and characteristics of the generated images heavily depend on the capabilities of the CLIP model used.
This is a string parameter where you input the global positive text prompt. This text describes the elements or features you want to emphasize in the generated image. The input can be multiline, allowing for detailed descriptions. There is no strict limit on the length, but more concise prompts may yield better results.
This is a string parameter for the local positive text prompt. Similar to positive_g, this text describes additional elements or features you want to emphasize but with a more localized focus. This can be useful for adding specific details to certain parts of the image. The input can be multiline.
This is a string parameter where you input the global negative text prompt. This text describes the elements or features you want to avoid in the generated image. The input can be multiline, allowing for detailed descriptions. There is no strict limit on the length, but more concise prompts may yield better results.
This is a string parameter for the local negative text prompt. Similar to negative_g, this text describes additional elements or features you want to avoid but with a more localized focus. This can be useful for removing specific details from certain parts of the image. The input can be multiline.
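To make the global/local split concrete, here is a minimal sketch of how a global and a local prompt are commonly encoded together for SDXL. The clip.tokenize and clip.encode_from_tokens calls follow ComfyUI's CLIP wrapper; the encode_pair helper is purely illustrative and is not the node's actual implementation.

```python
# Minimal sketch, assuming ComfyUI's CLIP wrapper API (clip.tokenize,
# clip.encode_from_tokens); encode_pair is a hypothetical helper.
def encode_pair(clip, text_g, text_l):
    # SDXL tokenization yields separate "g" (OpenCLIP-G) and "l" (CLIP-L) streams.
    tokens = clip.tokenize(text_g)
    tokens["l"] = clip.tokenize(text_l)["l"]

    # Pad the shorter stream with empty-prompt tokens so both streams contain
    # the same number of token chunks.
    if len(tokens["l"]) != len(tokens["g"]):
        empty = clip.tokenize("")
        while len(tokens["l"]) < len(tokens["g"]):
            tokens["l"] += empty["l"]
        while len(tokens["l"]) > len(tokens["g"]):
            tokens["g"] += empty["g"]

    # Encode both streams into a conditioning tensor plus a pooled embedding.
    cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
    return cond, pooled
```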
This integer parameter specifies the image width used for SDXL size conditioning and should typically match the width you intend to generate. The default value is 1024. The minimum value is 16, and the maximum is capped by the node's resolution limit.
This integer parameter specifies the image height used for SDXL size conditioning and should typically match the height you intend to generate. The default value is 1024. The minimum value is 16, and the maximum is capped by the node's resolution limit.
This integer parameter specifies the horizontal crop coordinate passed to the SDXL conditioning. The default value is 0, which conditions the model as if the image had not been cropped horizontally. Non-zero values tell the model the composition was cropped, which can shift how subjects are framed.
This integer parameter specifies the vertical crop coordinate passed to the SDXL conditioning. The default value is 0, which conditions the model as if the image had not been cropped vertically. Non-zero values tell the model the composition was cropped, which can shift how subjects are framed.
This integer parameter specifies the target width passed to the SDXL conditioning, describing the intended final image width after any cropping or resizing. The default value is twice the width, which biases the conditioning toward a higher-resolution result.
This integer parameter specifies the target height passed to the SDXL conditioning, describing the intended final image height after any cropping or resizing. The default value is twice the height, which biases the conditioning toward a higher-resolution result.
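For SDXL, the size, crop, and target values are not applied to the image directly; they travel with the conditioning as metadata the model was trained on. Below is a minimal sketch of how the encoded result and these values might be packed into ComfyUI's conditioning format; the key names mirror ComfyUI's built-in SDXL text-encode node, and build_conditioning is a hypothetical helper rather than the node's actual code.

```python
# Minimal sketch: ComfyUI conditioning is a list of [tensor, options] pairs.
# build_conditioning is a hypothetical helper, not the node's actual code.
def build_conditioning(cond, pooled, width, height, crop_w, crop_h,
                       target_width, target_height):
    return [[cond, {
        "pooled_output": pooled,         # pooled text embedding required by SDXL
        "width": width,                  # "original size" conditioning
        "height": height,
        "crop_w": crop_w,                # crop-coordinate conditioning
        "crop_h": crop_h,
        "target_width": target_width,    # intended output resolution
        "target_height": target_height,
    }]]
```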
This output parameter provides the encoded positive conditioning data: the positive text embeddings paired with metadata such as the pooled output and the size and crop values. This data is used to guide the AI model towards generating images that align with the positive prompts.
This output parameter provides the encoded negative conditioning data: the negative text embeddings paired with the same kind of metadata. This data is used to steer the AI model away from generating elements described in the negative prompts.
This output parameter returns the combined positive text prompts (global and local) used for encoding. It helps in verifying the exact text that was used for conditioning.
This output parameter returns the combined negative text prompts (global and local) used for encoding. It helps in verifying the exact text that was used for conditioning.
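Putting it together, the two conditioning outputs are what you wire into a sampler's positive and negative inputs, while the text outputs let you log exactly what was encoded. A rough usage sketch, reusing the hypothetical encode_pair and build_conditioning helpers above and assuming clip is an already-loaded SDXL CLIP instance:

```python
# Usage sketch, assuming a loaded SDXL `clip` and the illustrative helpers above.
positive_g = "a sunlit mountain lake, dramatic clouds"
positive_l = "sharp reflections on the water"
negative_g = "blurry, low quality"
negative_l = "watermark, text"

pos_cond, pos_pooled = encode_pair(clip, positive_g, positive_l)
neg_cond, neg_pooled = encode_pair(clip, negative_g, negative_l)

# Same size metadata on both sides, matching the node's defaults (1024x1024,
# no crop offset, 2x target resolution).
positive = build_conditioning(pos_cond, pos_pooled, 1024, 1024, 0, 0, 2048, 2048)
negative = build_conditioning(neg_cond, neg_pooled, 1024, 1024, 0, 0, 2048, 2048)

# The text outputs are simply the combined prompt strings (the node's exact
# joining may differ), useful for logging or saving with the image.
positive_text = positive_g + " " + positive_l
negative_text = negative_g + " " + negative_l

# `positive` and `negative` then feed the sampler's conditioning inputs.
```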
If you encounter an error stating that the clip parameter does not contain a valid CLIP model instance, make sure a loaded SDXL-compatible CLIP model (for example, from a checkpoint or CLIP loader node) is connected to the clip parameter.