Encodes text prompts into CLIP embeddings to guide image generation, using Noodle Soup Prompts or wildcard parsing to modify the text dynamically.
The CLIPTextEncode (NSP) node is designed to encode text prompts into embeddings using a CLIP model, which can then be used to guide diffusion models in generating specific images. This node leverages advanced text parsing techniques, such as "Noodle Soup Prompts" and wildcards, to dynamically modify and enhance the input text based on a given seed. By transforming the text into a format that the CLIP model can process, it ensures that the resulting embeddings are highly relevant and tailored to the desired output. This node is particularly useful for AI artists looking to create more nuanced and contextually rich image generations by providing a sophisticated method for text-to-image conditioning.
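To make the seeded parsing behavior concrete, here is a minimal, hypothetical sketch of the idea in Python: placeholders wrapped in a delimiter are replaced by a seeded random choice, so the same seed always produces the same prompt. The parse_prompt function, the terms dictionary, and the handling of the __ delimiter are illustrative assumptions, not the node's actual implementation.

```python
import random
import re

def parse_prompt(text: str, terms: dict, seed: int = 0, key: str = "__") -> str:
    """Replace delimiter-marked placeholders (e.g. __animal__) with a
    seeded random choice, so the same seed always yields the same prompt.
    Hypothetical sketch -- not the node's actual implementation."""
    rng = random.Random(seed)
    pattern = re.compile(re.escape(key) + r"(\w+)" + re.escape(key))

    def substitute(match):
        options = terms.get(match.group(1))
        # Leave the placeholder untouched if no replacement terms are known.
        return rng.choice(options) if options else match.group(0)

    return pattern.sub(substitute, text)

# Example: "__animal__" is expanded deterministically for a given seed.
terms = {"animal": ["fox", "owl", "tiger"]}
print(parse_prompt("a watercolor painting of a __animal__", terms, seed=42))
```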
The mode parameter determines which text parsing method is used. The available options are "Noodle Soup Prompts" and "Wildcards": "Noodle Soup Prompts" allows for dynamic and complex text modifications, while "Wildcards" replaces specific placeholders within the text. The choice of mode affects how the text is processed before encoding. The default value is "Noodle Soup Prompts".
The noodle_key parameter is a string used as the parsing key when the mode is set to "Noodle Soup Prompts". It defines the delimiter that marks the sections of the text to be dynamically modified. The default value is __, and the field is single-line (not multiline).
The seed parameter is an integer that seeds the randomization used during text parsing, making the text modifications reproducible. It can range from 0 to 0xffffffffffffffff and defaults to 0. Setting a specific seed yields consistent results across different runs.
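Building on the hypothetical parse_prompt sketch above, the practical effect of the seed is easy to demonstrate: identical seeds yield identical expansions, while a different seed may pick a different replacement.

```python
# Same seed -> identical expansion on every run (reproducible prompts).
a = parse_prompt("a __animal__ at dusk", terms, seed=7)
b = parse_prompt("a __animal__ at dusk", terms, seed=7)
assert a == b

# A different seed may select a different term.
c = parse_prompt("a __animal__ at dusk", terms, seed=8)
print(a, "|", c)
```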
The text parameter is the main text input to encode. It supports multiline input, allowing for complex and detailed prompts. The text is parsed and modified based on the selected mode and noodle_key before being encoded by the CLIP model.
The clip input specifies the CLIP model used to encode the text. The CLIP model converts the processed text into the embeddings that guide the diffusion model.
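For reference, a node with these inputs can also be wired up programmatically through ComfyUI's API-format workflow JSON. The fragment below is a hypothetical sketch written as a Python dict: the "CLIPTextEncode (NSP)" class_type string, the exact input names, and the checkpoint filename are assumptions based on the descriptions above, so verify them against your installed WAS Node Suite before use.

```python
# Hypothetical API-format workflow fragment (Python dict mirroring the JSON).
# The class_type and input names are assumed from the descriptions above.
workflow = {
    "4": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"},  # example file
    },
    "6": {
        "class_type": "CLIPTextEncode (NSP)",  # assumed registered node name
        "inputs": {
            "mode": "Noodle Soup Prompts",
            "noodle_key": "__",
            "seed": 42,
            "text": "a cinematic photo of a __animal__ in the rain",
            "clip": ["4", 1],  # CLIP output of the checkpoint loader
        },
    },
}
```

The ["4", 1] reference follows ComfyUI's convention of [source node id, output index]; output index 1 of CheckpointLoaderSimple is its CLIP output.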
The conditioning output contains the embedded text (the conditioning data) used to guide the diffusion model. It is the key component for ensuring that the generated images align with the input text prompt.
The parsed_text output provides the text after it has been parsed and modified according to the selected mode and noodle_key, letting you inspect the final version of the text that was actually encoded by the CLIP model.
The raw text output returns the original text input without any modifications. It serves as a reference for comparing against parsed_text to see the changes made during parsing.