Facilitates AI-driven interactive conversations with dynamic text responses and customizable prompts.
The IF_ChatPrompt node is designed to facilitate interactive AI-driven conversations by generating text responses based on user inputs and contextual information. This node is particularly useful for AI artists who want to create dynamic and engaging dialogues, leveraging various AI models to produce contextually relevant and creative outputs. The node supports multiple engines and models, allowing for flexibility in the type of responses generated. It also includes features for managing conversation history, customizing prompts, and fine-tuning the response generation process through various parameters. By using this node, you can enhance your AI-driven projects with sophisticated and context-aware text generation capabilities.
`context`: This string parameter provides contextual information for the conversation, keeping generated responses relevant to the ongoing dialogue. It can be supplied as a forced input (a node connection rather than a widget value).
`images`: This parameter accepts image inputs, which can be used to influence the generated text based on visual content.
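For engines with multimodal support, images are typically serialized into the request payload. A minimal sketch, assuming an Ollama-style backend that accepts base64-encoded images in chat messages (the file name is hypothetical):

```python
import base64

def encode_image(path):
    """Base64-encode an image for a multimodal request; Ollama-style
    APIs accept base64 image payloads in chat messages."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# "reference.png" is a hypothetical file used for illustration.
message = {
    "role": "user",
    "content": "Describe this image.",
    "images": [encode_image("reference.png")],
}
```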
`max_tokens`: This integer parameter defines the maximum number of tokens (words or subwords) that the generated response can contain. The default value is 2048, with a minimum of 1 and a maximum of 8192. Adjusting this parameter controls the length of the generated text.
`temperature`: This float parameter controls the randomness of the generated text. A lower value (closer to 0.0) makes the output more deterministic, while a higher value (up to 2.0) increases randomness. The default value is 0.7, with a step size of 0.1.
`top_k`: This integer parameter limits sampling to the highest-probability vocabulary tokens during text generation. The default value is 40, with a range from 0 to 100. It helps control the diversity of the generated text.
`top_p`: This float parameter, also known as nucleus sampling, restricts sampling to the smallest set of tokens whose cumulative probability reaches the threshold. The default value is 0.2, with a range from 0.0 to 1.0. It helps balance diversity and coherence in the generated text.
`repeat_penalty`: This float parameter penalizes repeated tokens in the generated text to avoid redundancy. The default value is 1.1, with a range from 0.0 to 10.0 and a step size of 0.1.
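Taken together, the sampling controls above compose in a standard way: penalize repeats, apply temperature, then filter with top-k and top-p before drawing a token. The NumPy sketch below illustrates that general pipeline; it is not the node's actual implementation:

```python
import numpy as np

def sample_next_token(logits, prev_tokens, temperature=0.7,
                      top_k=40, top_p=0.2, repeat_penalty=1.1):
    """Illustrative sampler combining the controls described above."""
    logits = logits.astype(np.float64).copy()

    # Repeat penalty: make tokens that already appeared less likely.
    for t in set(prev_tokens):
        logits[t] = (logits[t] / repeat_penalty if logits[t] > 0
                     else logits[t] * repeat_penalty)

    # Temperature: values < 1 sharpen the distribution, > 1 flatten it.
    logits /= max(temperature, 1e-8)

    # Top-k: discard everything below the k-th highest logit.
    if top_k > 0:
        kth_value = np.sort(logits)[-top_k]
        logits[logits < kth_value] = -np.inf

    # Top-p (nucleus): keep the smallest set of tokens whose
    # cumulative probability reaches the threshold.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    probs[order[cutoff:]] = 0.0
    probs /= probs.sum()

    return int(np.random.choice(len(probs), p=probs))
```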
`stop`: This string parameter defines the stopping criteria for text generation. The default value is `<|end_of_text|>`, and it can be customized to include specific stop sequences.
`seed`: This integer parameter sets the seed for random number generation, ensuring reproducibility of the generated text. The default value is 94687328150, with a range from 0 to 0xffffffffffffffff.
`random`: This boolean parameter toggles between seed-driven and temperature-driven generation. When set to `True`, the fixed seed is used; when set to `False`, the temperature is used. The default value is `False`.
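One plausible way these generation settings translate into a request is a per-call options dictionary. The key names below follow Ollama's API and are an assumption about the selected backend:

```python
def build_options(max_tokens, temperature, top_k, top_p,
                  repeat_penalty, stop, seed, random_mode):
    """Assemble per-request generation options. Key names follow
    Ollama's API and are an assumption about the selected backend."""
    options = {
        "num_predict": max_tokens,
        "top_k": top_k,
        "top_p": top_p,
        "repeat_penalty": repeat_penalty,
        "stop": [stop],
    }
    if random_mode:
        # Fixed seed so repeated runs reproduce the same output.
        options["seed"] = seed
    else:
        # Temperature-driven sampling instead of a pinned seed.
        options["temperature"] = temperature
    return options
```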
`embellish_prompt`: This parameter allows you to select from a list of predefined embellishment prompts to enhance the generated text.
`style_prompt`: This parameter allows you to select from a list of predefined style prompts to influence the tone and style of the generated text.
`neg_prompt`: This parameter allows you to select from a list of predefined negative prompts to steer the generated text away from certain types of content.
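How these selections combine with the main prompt is internal to the node; the sketch below shows one plausible assembly step, with the exact template being an assumption:

```python
def apply_prompt_selections(user_prompt, style_text, embellish_text, neg_text):
    """Hypothetical combination step: the style and embellishment texts
    wrap the user prompt, while the negative prompt is carried
    separately (compare the Negative output below)."""
    parts = [style_text, user_prompt, embellish_text]
    positive = " ".join(p.strip() for p in parts if p and p.strip())
    return positive, neg_text
```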
`clear_history`: This boolean parameter controls whether the conversation history is cleared after each interaction. The default value is `True`, with options to clear or keep the history.
`history_steps`: This integer parameter defines the number of previous conversation steps to retain in the history. The default value is 10, with a range from 0 to 0xffffffffffffffff.
`keep_alive`: This boolean parameter determines whether the model stays loaded in memory between interactions. The default value is `False`, with options to keep or unload the model.
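The history behavior described by `clear_history` and `history_steps` amounts to a bounded, clearable log of exchanges. A minimal sketch of that bookkeeping, mirroring the described behavior rather than the node's actual code:

```python
from collections import deque

class ChatHistory:
    """Bounded, clearable conversation log (illustrative sketch)."""

    def __init__(self, history_steps=10):
        # Keep only the most recent N exchanges (history_steps).
        self.turns = deque(maxlen=history_steps)

    def add_turn(self, question, response):
        self.turns.append({"user": question, "assistant": response})

    def after_interaction(self, clear_history=True):
        if clear_history:
            self.turns.clear()  # start fresh on the next interaction

    def as_context(self):
        return "\n".join(f"User: {t['user']}\nAssistant: {t['assistant']}"
                         for t in self.turns)
```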
`prompt`: This required string parameter is the main input for the conversation. It supports multiline input and is essential for generating the initial response.
`base_ip`: This required string parameter specifies the base IP address of the server hosting the AI model. The default value is the node's configured base IP.
`port`: This required string parameter specifies the port of the server hosting the AI model. The default value is the node's configured port.
`engine`: This required parameter selects the AI engine used for text generation. Options include `ollama`, `kobold`, `lms`, `textgen`, `groq`, `openai`, and `anthropic`.
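Because the local engines are reached via `base_ip` and `port`, the node presumably assembles an endpoint URL per engine. A hypothetical routing helper; the paths follow each project's documented API and are assumptions about how the node builds its URLs:

```python
def api_url(engine, base_ip, port):
    """Hypothetical endpoint routing. Hosted providers (groq, openai,
    anthropic) use fixed endpoints of their own and are omitted."""
    local_paths = {
        "ollama": "/api/chat",
        "kobold": "/api/v1/generate",
        "lms": "/v1/chat/completions",      # LM Studio, OpenAI-compatible
        "textgen": "/v1/chat/completions",  # text-generation-webui
    }
    if engine in local_paths:
        return f"http://{base_ip}:{port}{local_paths[engine]}"
    raise ValueError(f"{engine} is a hosted service with a fixed endpoint")
```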
`selected_model`: This required parameter selects the specific model to use within the chosen engine. The options are dynamically populated based on the selected engine.
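For example, dynamic population for the `ollama` engine could query the server's model list. The sketch below uses Ollama's documented `/api/tags` route; how the node actually discovers models is an assumption here:

```python
import requests

def list_models(base_ip, port):
    """Sketch of dynamic model discovery for the ollama engine;
    /api/tags is Ollama's model-listing route. Other engines would
    query their own endpoints or fall back to static lists."""
    resp = requests.get(f"http://{base_ip}:{port}/api/tags", timeout=10)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]
```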
`assistant`: This required parameter selects from a list of predefined assistants that guide the conversation's style and content.
`Question`: This output parameter returns the original input prompt, allowing you to track the question or statement that initiated the conversation.
`Response`: This output parameter provides the generated text response from the AI model, based on the input prompt and contextual information.
`Negative`: This output parameter returns any negative prompts that were applied to filter out unwanted content from the generated text.
`Context`: This output parameter provides the updated context after the interaction, which can be fed into subsequent conversation steps to maintain coherence and relevance.
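In a multi-turn workflow, the Context output is typically wired back into the `context` input of the next interaction. The loop below sketches that pattern in plain Python, with `chat` as a stand-in for the node:

```python
def chat(prompt, context, clear_history):
    """Stand-in for one IF_ChatPrompt invocation, returning the node's
    four outputs (Question, Response, Negative, Context)."""
    response = f"[model reply to: {prompt}]"  # placeholder text
    new_context = f"{context}\nUser: {prompt}\nAssistant: {response}"
    return prompt, response, "", new_context

context = ""
for user_prompt in ["Describe a misty harbor at dawn.",
                    "Now add a lighthouse to the scene."]:
    question, response, negative, context = chat(
        prompt=user_prompt,
        context=context,        # carry the running conversation state
        clear_history=False,    # keep history across turns
    )
```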
Usage tips:
- Experiment with the `temperature` and `top_p` parameters to find the right balance between creativity and coherence in the generated text.
- Use the `clear_history` parameter strategically to either maintain a continuous conversation or start fresh interactions as needed.
- Use the `embellish_prompt` and `style_prompt` parameters to customize the tone and style of the generated responses, making them more engaging and aligned with your project's requirements.
- Increase the `max_tokens` parameter to accommodate longer contexts.