
ComfyUI Node: IF Chat Prompt👨‍💻

Class Name

IF_ChatPrompt

Category
ImpactFrames💥🎞️
Author
if-ai (Account age: 2860 days)
Extension
ComfyUI-IF_AI_tools
Last Updated
2024-06-10
GitHub Stars
0.36K

How to Install ComfyUI-IF_AI_tools

Install this extension via the ComfyUI Manager by searching for ComfyUI-IF_AI_tools
  1. Click the Manager button in the main menu
  2. Select the Custom Nodes Manager button
  3. Enter ComfyUI-IF_AI_tools in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


IF Chat Prompt👨‍💻 Description

Facilitates AI-driven interactive conversations with dynamic text responses and customizable prompts.

IF Chat Prompt👨‍💻:

The IF_ChatPrompt node facilitates interactive, AI-driven conversations by generating text responses from user prompts and contextual information. It is particularly useful for AI artists who want to build dynamic, engaging dialogues, and it supports multiple engines and models, giving you flexibility in the kind of responses produced. The node also manages conversation history, offers customizable prompts, and exposes parameters for fine-tuning the generation process, so you can add sophisticated, context-aware text generation to your AI-driven projects.
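For orientation, the sketch below shows how a node with this interface could be declared in ComfyUI. It is an illustrative outline only, assembled from the parameters and outputs documented on this page; the actual IF_ChatPrompt implementation in ComfyUI-IF_AI_tools is more extensive, and the base_ip and port defaults shown here are assumptions.

    # Illustrative sketch only -- not the actual ComfyUI-IF_AI_tools source code.
    # Names, defaults, and ranges mirror the parameter list documented below;
    # the base_ip and port defaults are assumptions.
    class IF_ChatPrompt_Sketch:
        @classmethod
        def INPUT_TYPES(cls):
            return {
                "required": {
                    "prompt": ("STRING", {"multiline": True}),
                    "base_ip": ("STRING", {"default": "127.0.0.1"}),
                    "port": ("STRING", {"default": "11434"}),
                    "engine": (["ollama", "kobold", "lms", "textgen", "groq", "openai", "anthropic"],),
                },
                "optional": {
                    "context": ("STRING", {"forceInput": True}),
                    "max_tokens": ("INT", {"default": 2048, "min": 1, "max": 8192}),
                    "temperature": ("FLOAT", {"default": 0.7, "min": 0.0, "max": 2.0, "step": 0.1}),
                    "top_k": ("INT", {"default": 40, "min": 0, "max": 100}),
                    "top_p": ("FLOAT", {"default": 0.2, "min": 0.0, "max": 1.0}),
                },
            }

        RETURN_TYPES = ("STRING", "STRING", "STRING", "STRING")
        RETURN_NAMES = ("Question", "Response", "Negative", "Context")
        FUNCTION = "chat"
        CATEGORY = "ImpactFrames💥🎞️"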

IF Chat Prompt👨‍💻 Input Parameters:

context

This parameter accepts a string that provides contextual information for the conversation, ensuring the generated responses stay relevant to the ongoing dialogue. It can be supplied as a forced input, i.e., connected from another node rather than typed into a widget.

images

This parameter accepts image inputs, which can be used to influence the generated text based on visual content.

max_tokens

This integer parameter defines the maximum number of tokens (words or subwords) that the generated response can contain. The default value is 2048, with a minimum of 1 and a maximum of 8192. Adjusting this parameter can control the length of the generated text.

temperature

This float parameter controls the randomness of the generated text. A lower value (closer to 0.0) makes the output more deterministic, while a higher value (up to 2.0) increases randomness. The default value is 0.7, with a step size of 0.1.

top_k

This integer parameter limits the number of highest probability vocabulary tokens to consider during text generation. The default value is 40, with a range from 0 to 100. It helps in controlling the diversity of the generated text.

top_p

This float parameter controls nucleus sampling: at each step, only the smallest set of tokens whose cumulative probability reaches top_p is considered. The default value is 0.2, with a range from 0.0 to 1.0. It helps balance diversity and coherence in the generated text.

repeat_penalty

This float parameter penalizes repeated tokens in the generated text to avoid redundancy. The default value is 1.1, with a range from 0.0 to 10.0 and a step size of 0.1.

stop

This string parameter defines the stopping criteria for the text generation. The default value is <|end_of_text|>, and it can be customized to include specific stop sequences.

seed

This integer parameter sets the seed for random number generation, ensuring reproducibility of the generated text. The default value is 94687328150, with a range from 0 to 0xffffffffffffffff.

random

This boolean parameter toggles between seed-driven and temperature-driven generation. When set to True, the fixed seed is used; when set to False, the temperature setting governs randomness. The default value is False.
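To make the sampling parameters above concrete, here is a hedged sketch of how they might be forwarded to a local Ollama server when the engine is set to ollama. The model name and endpoint are placeholders and the node's own request code may differ; the option names follow Ollama's API.

    import requests  # assumes the requests package is available

    # Hypothetical mapping of the node's sampling parameters onto an Ollama request.
    payload = {
        "model": "llama3",                     # stands in for selected_model
        "prompt": "Describe a rainy cyberpunk street.",
        "stream": False,
        "options": {
            "num_predict": 2048,               # max_tokens
            "temperature": 0.7,
            "top_k": 40,
            "top_p": 0.2,
            "repeat_penalty": 1.1,
            "stop": ["<|end_of_text|>"],
            "seed": 94687328150,               # only applied when a fixed seed is requested
        },
    }
    reply = requests.post("http://127.0.0.1:11434/api/generate", json=payload, timeout=120)
    print(reply.json().get("response", ""))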

embellish_prompt

This parameter allows you to select from a list of predefined embellishment prompts to enhance the generated text.

style_prompt

This parameter allows you to select from a list of predefined style prompts to influence the tone and style of the generated text.

neg_prompt

This parameter allows you to select from a list of predefined negative prompts to avoid certain types of content in the generated text.

clear_history

This boolean parameter controls whether the conversation history should be cleared after each interaction. The default value is True, with options to clear or keep the history.

history_steps

This integer parameter defines the number of previous conversation steps to retain in the history. The default value is 10, with a range from 0 to 0xffffffffffffffff.
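As a rough illustration of what such a history window implies (not the node's actual bookkeeping), trimming to the last N steps can be as simple as:

    # Hypothetical sketch: keep only the most recent `history_steps` exchanges.
    def trim_history(history, history_steps):
        if history_steps <= 0:
            return []
        return history[-history_steps:]

    history = [("q1", "r1"), ("q2", "r2"), ("q3", "r3")]
    print(trim_history(history, 2))  # [('q2', 'r2'), ('q3', 'r3')]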

keep_alive

This boolean parameter determines whether the model should be kept loaded in memory between interactions. The default value is False, with options to keep or unload the model.

prompt

This required string parameter is the main input prompt for the conversation. It supports multiline input and is essential for generating the initial response.

base_ip

This required string parameter specifies the base IP address for the server hosting the AI model. The default value is set to the node's base IP.

port

This required string parameter specifies the port number for the server hosting the AI model. The default value is set to the node's port.
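Together, base_ip and port identify the server the node talks to. The sketch below shows one plausible way an endpoint could be assembled from them; the route paths are assumptions and vary with the engine selected below.

    # Hypothetical endpoint construction from base_ip and port; actual paths differ per engine.
    def build_endpoint(base_ip: str, port: str, engine: str) -> str:
        paths = {
            "ollama": "/api/generate",      # assumed Ollama route
            "kobold": "/api/v1/generate",   # assumed KoboldCpp route
        }
        return f"http://{base_ip}:{port}{paths.get(engine, '/')}"

    print(build_endpoint("127.0.0.1", "11434", "ollama"))
    # -> http://127.0.0.1:11434/api/generate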

engine

This required parameter allows you to select the AI engine to use for text generation. Options include ollama, kobold, lms, textgen, groq, openai, and anthropic.

selected_model

This required parameter allows you to select the specific model to use within the chosen engine. The options are dynamically populated based on the selected engine.

assistant

This required parameter allows you to select from a list of predefined assistants to guide the conversation style and content.

IF Chat Prompt👨‍💻 Output Parameters:

Question

This output parameter returns the original input prompt, allowing you to track the initial question or statement that initiated the conversation.

Response

This output parameter provides the generated text response from the AI model, based on the input prompt and contextual information.

Negative

This output parameter returns any negative prompts that were applied to filter out unwanted content from the generated text.

Context

This output parameter provides the updated context after the interaction, which can be used for subsequent conversation steps to maintain coherence and relevance.
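To show why chaining the Context output matters, here is a hedged, self-contained sketch of a multi-turn loop in which each returned context feeds the next call. The chat function is a stand-in for the node, not part of its API.

    # Hypothetical multi-turn loop; `chat` is a placeholder, not part of IF_ChatPrompt's API.
    def chat(prompt: str, context: str) -> tuple:
        response = f"(model reply to: {prompt})"   # placeholder for a real LLM call
        new_context = f"{context}\nUser: {prompt}\nAssistant: {response}".strip()
        return response, new_context

    context = ""
    for turn in ["Sketch a castle at dawn", "Now add a dragon circling it"]:
        response, context = chat(turn, context)
        print(response)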

IF Chat Prompt👨‍💻 Usage Tips:

  • Experiment with the temperature and top_p parameters to find the right balance between creativity and coherence in the generated text.
  • Use the clear_history parameter strategically to either maintain a continuous conversation or start fresh interactions as needed.
  • Leverage the embellish_prompt and style_prompt parameters to customize the tone and style of the generated responses, making them more engaging and aligned with your project's requirements.

IF Chat Prompt👨‍💻 Common Errors and Solutions:

"Invalid IP address or port"

  • Explanation: The provided base IP address or port number is incorrect or unreachable.
  • Solution: Verify and correct the base IP address and port number to ensure they match the server hosting the AI model.

"Model not found"

  • Explanation: The selected model is not available in the chosen engine.
  • Solution: Ensure that the model name is correctly specified and that it is available in the selected engine. Check the server for available models.

"Context length exceeded"

  • Explanation: The provided context exceeds the maximum allowed length.
  • Solution: Reduce the length of the context or adjust the max_tokens parameter to accommodate longer contexts.

"Invalid parameter value"

  • Explanation: One or more input parameters have values outside the allowed range.
  • Solution: Check the parameter values and ensure they fall within the specified minimum and maximum limits. Adjust as necessary.

IF Chat Prompt👨‍💻 Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-IF_AI_tools