Facilitates interaction with the Groq LLM API for generating text completions, simplifying the request process for AI artists.
The ✨ Groq LLM API node is designed to facilitate seamless interaction with the Groq large language model (LLM) API, enabling you to generate sophisticated text completions based on your input prompts. This node is particularly useful for AI artists who want to leverage advanced language models to create compelling narratives, dialogues, or any text-based content. By providing a structured way to send requests to the Groq API, this node simplifies the process of obtaining high-quality text completions, making it accessible even to those without a technical background. The node handles the intricacies of API communication, including setting up the necessary headers, formatting the request data, and managing retries in case of failures, ensuring a smooth and efficient user experience.
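As a rough sketch of what the node does under the hood, a request to Groq's OpenAI-compatible chat-completions endpoint might be assembled like this. The helper name and exact field layout below are illustrative, not the node's actual source; the endpoint URL and field names follow Groq's published OpenAI-compatible API.

```python
import json

# Groq exposes an OpenAI-compatible chat-completions endpoint.
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_request(api_key, model, system_message, user_input,
                  temperature=0.85, max_tokens=1024, top_p=1.0, seed=42):
    """Assemble the headers and JSON body for a completion request.

    Illustrative only -- the node's real implementation may differ.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_input},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
        "seed": seed,
    }
    return headers, json.dumps(payload)

headers, body = build_request("sk-demo", "llama3-8b-8192",
                              "You are a poet.", "Write one line about rain.")
```

The assembled headers and body would then be POSTed to `GROQ_URL`; the node takes care of this step for you.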
This parameter specifies the language model to be used for generating text completions. Available options include mixtral-8x7b-32768, llama3-70b-8192, llama3-8b-8192, and gemma-7b-it. The choice of model can significantly impact the quality and style of the generated text, so select the one that best fits your needs.
The preset parameter allows you to choose a predefined prompt template. The default option is Use [system_message] and [user_input], but you can also select from other custom templates loaded from the configuration. This helps in standardizing the prompts and ensuring consistent results.
This is a string input where you can provide a system message that sets the context or instructions for the language model. It supports multiline text and defaults to an empty string. The system message helps guide the model's responses to be more aligned with your requirements.
This string input is where you provide the actual user query or prompt that you want the language model to respond to. It also supports multiline text and defaults to an empty string. The user input is the primary driver of the model's output.
This float parameter controls the randomness of the generated text. A lower value (closer to 0.1) makes the output more deterministic, while a higher value (up to 1.0) introduces more creativity and variability. The default value is 0.85, with a minimum of 0.1 and a maximum of 1.0.
This integer parameter sets the maximum number of tokens (words or word pieces) that the model can generate in the response. The default is 1024 tokens, with a minimum of 1 and a maximum of 4096. Adjusting this can help control the length of the generated text.
This float parameter is used for nucleus sampling, which limits the model's token selection to a subset of the most probable tokens. The default value is 1.0, meaning all tokens are considered, but you can set it as low as 0.1 to make the output more focused.
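The three sampling parameters above each have a documented range. A minimal sketch of how out-of-range values could be pulled back into those ranges (the `clamp` helper is hypothetical, not part of the node):

```python
def clamp(value, lo, hi):
    """Clamp a sampling parameter into its allowed range."""
    return max(lo, min(hi, value))

# Ranges documented above: temperature 0.1-1.0, max_tokens 1-4096, top_p 0.1-1.0.
temperature = clamp(1.7, 0.1, 1.0)   # out-of-range input is pulled back to 1.0
max_tokens = clamp(9000, 1, 4096)    # capped at the documented maximum
top_p = clamp(0.05, 0.1, 1.0)        # raised to the documented minimum
```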
This integer parameter sets the random seed for reproducibility. The default value is 42, and it can be any non-negative integer. Using the same seed ensures that you get the same output for the same input parameters.
This integer parameter specifies the number of times the node will retry the API request in case of failures. The default is 2 retries, with a minimum of 1 and a maximum of 10. This helps in handling transient issues with the API.
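The retry behaviour can be sketched as a small wrapper around the request call. This is an illustration of the pattern, assuming exponential backoff, not the node's actual code:

```python
import time

def request_with_retries(send, max_retries=2, base_delay=0.0):
    """Call send() up to max_retries times, backing off between attempts.

    `send` is any zero-argument callable that raises on transient failure.
    """
    last_error = None
    for attempt in range(max_retries):
        try:
            return send()
        except Exception as exc:
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise last_error

# A fake endpoint that fails once, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("transient failure")
    return "ok"

result = request_with_retries(flaky, max_retries=2)
```

With the default of 2 retries, a single transient failure is absorbed and the second attempt succeeds.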
This string parameter allows you to specify a stopping sequence for the generated text. If provided, the model will stop generating further tokens once this sequence is encountered. It defaults to an empty string, meaning no stopping sequence is used.
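The truncation is applied server-side by the API, but the effect of a stop sequence can be illustrated locally (the `apply_stop` helper is purely for demonstration):

```python
def apply_stop(text, stop):
    """Illustrate a stop sequence: generation halts just before it appears."""
    if stop and stop in text:
        return text[:text.index(stop)]
    return text

truncated = apply_stop("First line\n###\nSecond line", "###")  # "First line\n"
unchanged = apply_stop("No marker here", "")  # empty stop string: no truncation
```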
This boolean parameter determines whether the response should be formatted as a JSON object. The default value is False. When set to True, the response will include additional metadata in a structured format.
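In the OpenAI-compatible API that Groq implements, JSON output is requested via a `response_format` field, and the returned content is then a directly parseable JSON string. How the node wires this internally is an assumption; the field name comes from the API specification:

```python
import json

payload = {"model": "llama3-8b-8192", "messages": []}
json_mode = True
if json_mode:
    # OpenAI-compatible flag that forces a JSON object response
    payload["response_format"] = {"type": "json_object"}

# A response produced in JSON mode can be decoded directly:
simulated_content = '{"title": "Rain", "lines": 1}'
data = json.loads(simulated_content)
```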
This output parameter contains the text generated by the language model. It is a string that represents the model's completion based on the provided input parameters. This is the primary output you will use in your projects.
This boolean parameter indicates whether the API request was successful. A value of True means the request was processed without errors, while False indicates that there was an issue.
This string parameter provides the HTTP status code returned by the API. It helps in diagnosing issues by indicating whether the request was successful (e.g., 200 OK) or if there were errors (e.g., 404 Not Found).
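A minimal sketch of turning the HTTP status code into the success flag described above (the mapping function is illustrative, not the node's actual logic):

```python
def interpret_status(status_code):
    """Map an HTTP status code to (success, human-readable summary)."""
    success = 200 <= status_code < 300  # 2xx codes indicate success
    summaries = {
        200: "OK",
        404: "Not Found",
        429: "Rate limited",
        500: "Server error",
    }
    return success, summaries.get(status_code, "Unknown status")

ok, msg = interpret_status(200)
failed, why = interpret_status(404)
```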