
ComfyUI Node: ✨ Groq LLM API

Class Name: ✨ Groq LLM API
Category: ⚡ MNeMiC Nodes
Author: MNeMoNiCuZ (Account age: 1644 days)
Extension: ComfyUI-mnemic-nodes
Last Updated: 8/2/2024
GitHub Stars: 0.0K

How to Install ComfyUI-mnemic-nodes

Install this extension via the ComfyUI Manager by searching for ComfyUI-mnemic-nodes:
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter ComfyUI-mnemic-nodes in the search bar and install it from the results.
After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.


✨ Groq LLM API Description

Facilitates interaction with the Groq LLM API to generate text completions, simplifying the request process for AI artists.

✨ Groq LLM API:

The ✨ Groq LLM API node provides seamless access to the Groq large language model (LLM) API, letting you generate sophisticated text completions from your input prompts. It is particularly useful for AI artists who want to leverage advanced language models to create compelling narratives, dialogues, or other text-based content. By structuring requests to the Groq API for you, the node makes high-quality text completions accessible even without a technical background: it handles the details of API communication, including setting the necessary headers, formatting the request data, and retrying failed requests.
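For orientation, a request to Groq's OpenAI-compatible chat completions endpoint looks roughly like the sketch below. This is a minimal illustration rather than the node's actual code; the endpoint URL and payload shape follow Groq's public API, and the API key is assumed to live in a GROQ_API_KEY environment variable.

```python
import os
import requests

# Minimal sketch of a Groq chat completion request (not the node's exact code).
url = "https://api.groq.com/openai/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",  # assumed env var
    "Content-Type": "application/json",
}
payload = {
    "model": "llama3-8b-8192",
    "messages": [
        {"role": "system", "content": "You write vivid image prompts."},
        {"role": "user", "content": "A lighthouse at dusk."},
    ],
}
response = requests.post(url, headers=headers, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```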

✨ Groq LLM API Input Parameters:

model

This parameter specifies the language model to be used for generating text completions. Available options include mixtral-8x7b-32768, llama3-70b-8192, llama3-8b-8192, and gemma-7b-it. The choice of model can significantly impact the quality and style of the generated text, so select the one that best fits your needs.

preset

The preset parameter allows you to choose a predefined prompt template. The default option is Use [system_message] and [user_input], but you can also select from other custom templates loaded from the configuration. This helps in standardizing the prompts and ensuring consistent results.
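As an illustration only, placeholder substitution for a preset might look like the sketch below. The template text and the fill helper are hypothetical; the node loads its real templates from its configuration files, which are not reproduced here.

```python
# Hypothetical preset expansion; the node's real templates come from its config.
template = {
    "system": "[system_message]",
    "user": "[user_input]",
}
values = {
    "[system_message]": "You write cinematic image prompts.",
    "[user_input]": "a foggy harbor at dawn",
}

def fill(text: str) -> str:
    for placeholder, value in values.items():
        text = text.replace(placeholder, value)
    return text

messages = [
    {"role": "system", "content": fill(template["system"])},
    {"role": "user", "content": fill(template["user"])},
]
```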

system_message

This is a string input where you can provide a system message that sets the context or instructions for the language model. It supports multiline text and defaults to an empty string. The system message helps guide the model's responses to be more aligned with your requirements.

user_input

This string input is where you provide the actual user query or prompt that you want the language model to respond to. It also supports multiline text and defaults to an empty string. The user input is the primary driver of the model's output.

temperature

This float parameter controls the randomness of the generated text. A lower value (closer to 0.1) makes the output more deterministic, while a higher value (up to 1.0) introduces more creativity and variability. The default value is 0.85, with a minimum of 0.1 and a maximum of 1.0.

max_tokens

This integer parameter sets the maximum number of tokens (words or word pieces) that the model can generate in the response. The default is 1024 tokens, with a minimum of 1 and a maximum of 4096. Adjusting this can help control the length of the generated text.

top_p

This float parameter is used for nucleus sampling, which limits the model's token selection to a subset of the most probable tokens. The default value is 1.0, meaning all tokens are considered, but you can set it as low as 0.1 to make the output more focused.

seed

This integer parameter sets the random seed for reproducibility. The default value is 42, and it can be any non-negative integer. Using the same seed ensures that you get the same output for the same input parameters.

max_retries

This integer parameter specifies the number of times the node will retry the API request in case of failures. The default is 2 retries, with a minimum of 1 and a maximum of 10. This helps in handling transient issues with the API.
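A plausible retry loop is sketched below, reusing the url, headers, and payload variables from the request sketch above. The linear backoff between attempts is an assumption, not the node's documented behavior.

```python
import time
import requests

max_retries = 2  # the node's default
for attempt in range(1, max_retries + 1):
    try:
        response = requests.post(url, headers=headers, json=payload, timeout=60)
        if response.status_code == 200:
            break  # success; stop retrying
        print(f"Attempt {attempt} returned HTTP {response.status_code}")
    except requests.RequestException as exc:
        print(f"Attempt {attempt} failed: {exc}")
    time.sleep(2 * attempt)  # assumed linear backoff between attempts
else:
    print("Failed after all retries")
```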

stop

This string parameter allows you to specify a stopping sequence for the generated text. If provided, the model will stop generating further tokens once this sequence is encountered. It defaults to an empty string, meaning no stopping sequence is used.
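Taken together, the generation inputs above map onto the request body in a straightforward way. The sketch below shows one plausible payload using the node's documented defaults; the field names follow Groq's OpenAI-compatible schema, and messages stands for the list built from system_message and user_input.

```python
payload = {
    "model": "llama3-70b-8192",
    "messages": messages,   # built from system_message and user_input
    "temperature": 0.85,    # 0.1 = more deterministic, 1.0 = more varied
    "max_tokens": 1024,     # upper bound on the length of the completion
    "top_p": 1.0,           # nucleus sampling; lower values focus the output
    "seed": 42,             # fixed seed for reproducible sampling
    "stop": None,           # or a string such as "\n\n" to end generation early
}
```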

json_mode

This boolean parameter determines whether the model's response should be returned as a JSON object. The default value is False. When set to True, the node asks the model to format its completion as structured JSON rather than plain text.
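Assuming json_mode maps to Groq's response_format option (the standard way to request JSON output from OpenAI-compatible APIs), enabling it might look like the sketch below, which extends the payload from the earlier sketches. Such APIs generally expect the prompt itself to ask for JSON.

```python
import json

payload["response_format"] = {"type": "json_object"}  # assumed json_mode mapping
payload["messages"][0]["content"] += " Respond with a JSON object."

response = requests.post(url, headers=headers, json=payload, timeout=60)
data = json.loads(response.json()["choices"][0]["message"]["content"])
```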

✨ Groq LLM API Output Parameters:

api_response

This output parameter contains the text generated by the language model. It is a string that represents the model's completion based on the provided input parameters. This is the primary output you will use in your projects.

success

This boolean parameter indicates whether the API request was successful. A value of True means the request was processed without errors, while False indicates that there was an issue.

status_code

This string parameter provides the HTTP status code returned by the API. It helps in diagnosing issues by indicating whether the request was successful (e.g., 200 OK) or if there were errors (e.g., 404 Not Found).
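A hypothetical downstream check on these three outputs might look like the following; groq_node_result is a stand-in for however your workflow surfaces the node's output tuple.

```python
api_response, success, status_code = groq_node_result  # hypothetical stand-in

if success:
    print(api_response)                        # use the completion text
else:
    print(f"Request failed: {status_code}")    # e.g. "404 Not Found"
```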

✨ Groq LLM API Usage Tips:

  • Experiment with different models to find the one that best suits your creative needs. Each model has its own strengths and may produce different styles of text.
  • Use the temperature and top_p parameters to fine-tune the creativity and focus of the generated text. Lower values make the output more predictable, while higher values introduce more variability.
  • Leverage the system_message parameter to provide clear instructions or context to the model, which can help in generating more relevant and coherent responses.
  • Utilize the max_tokens parameter to control the length of the generated text, especially if you need concise outputs.
  • If you encounter issues with the API, increase the max_retries parameter to improve the chances of a successful request.

✨ Groq LLM API Common Errors and Solutions:

ERROR

  • Explanation: This error indicates that the API request failed due to an unspecified issue.
  • Solution: Check the status_code output parameter for more details on the error. Ensure that your API key is valid and that you have a stable internet connection. If the issue persists, try increasing the max_retries parameter.

200 OK but no content

  • Explanation: The API request was successful, but the response did not contain any valid content.
  • Solution: Verify that your input parameters, especially user_input and system_message, are correctly set. Adjust the prompt so it gives the model enough context to generate a meaningful response; a minimal content check is sketched below.
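Assuming the raw API result is available as a requests response object, the check could look like this:

```python
choices = response.json().get("choices", [])
content = choices[0]["message"]["content"] if choices else ""
if not content.strip():
    print("200 OK but no content")  # prompt likely needs more context
```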

Error parsing JSON response.

  • Explanation: The API response could not be parsed as JSON, possibly due to a malformed response.
  • Solution: Ensure that the json_mode parameter is set correctly. If the issue persists, contact Groq support for assistance. A defensive parsing pattern is sketched below.
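Assuming the completion text is held in api_response, the pattern might look like this:

```python
import json

try:
    parsed = json.loads(api_response)  # completion text from the node
except json.JSONDecodeError as exc:
    print(f"Error parsing JSON response: {exc}")
    parsed = None
```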

Failed after all retries

  • Explanation: The API request failed after the specified number of retries.
  • Solution: Increase the max_retries parameter and ensure that your network connection is stable. If the problem continues, check for any service outages or contact Groq support.

✨ Groq LLM API Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-mnemic-nodes