
ComfyUI Node: 🦙 Ollama Text Describer 🦙

Class Name

OllamaTextDescriber

Category
Ollama
Author
alisson-anjos (Account age: 616 days)
Extension
ComfyUI-Ollama-Describer
Last Updated
2024-06-29
GitHub Stars
0.04K

How to Install ComfyUI-Ollama-Describer

Install this extension via the ComfyUI Manager by searching for ComfyUI-Ollama-Describer
  • 1. Click the Manager button in the main menu
  • 2. Select Custom Nodes Manager button
  • 3. Enter ComfyUI-Ollama-Describer in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.

🦙 Ollama Text Describer 🦙 Description

Generates detailed text descriptions from a prompt using Ollama-hosted language models, helping AI artists enhance the narrative quality of their work.

🦙 Ollama Text Describer 🦙:

The OllamaTextDescriber node generates descriptive text from a given prompt using language models served by Ollama. It is particularly useful for AI artists who need detailed textual descriptions for storytelling, character development, or scene setting: you supply a prompt and receive a coherent, contextually relevant description that enhances the narrative quality of your work. The node forwards your prompt and generation parameters to the selected model, so the output is tailored to your specific needs.
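Under the hood, the node talks to an Ollama server over its REST API. As a minimal sketch (the exact fields this node sends are an assumption, based on Ollama's documented `/api/generate` endpoint), the inputs described below map onto a JSON request body like this:

```python
import json


def build_generate_payload(model, prompt, system=None, stream=False, **options):
    """Build a request body for Ollama's /api/generate endpoint.

    `options` collects sampling parameters such as temperature, top_k,
    top_p, repeat_penalty, and seed; `system` carries the system context.
    """
    payload = {"model": model, "prompt": prompt, "stream": stream}
    if system:
        payload["system"] = system
    if options:
        payload["options"] = options
    return payload


# Example: a request body for a scene description.
body = build_generate_payload(
    "llama3",
    "Describe a misty harbor at dawn.",
    temperature=0.7,
    seed=42,
)
print(json.dumps(body, indent=2))
```

With `stream=False`, the server returns a single JSON object whose `response` field holds the generated text, which is what this node exposes as its `result` output.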

🦙 Ollama Text Describer 🦙 Input Parameters:

model

This parameter specifies the AI model to be used for generating the text description. The choice of model can significantly impact the style and quality of the output. Ensure you select a model that aligns with your desired narrative tone and complexity.

custom_model

If you have a custom-trained model, you can specify it here. This allows for more personalized and context-specific descriptions, leveraging your unique dataset to produce tailored outputs.

api_host

The API host parameter defines the endpoint for the model's API. This is crucial for establishing a connection to the model server, ensuring that your requests are directed to the correct location.

timeout

This parameter sets the maximum time (in seconds) the node will wait for a response from the model. A higher timeout value can be useful for complex prompts that require more processing time, while a lower value can help in scenarios where quick responses are needed. Default value is typically around 30 seconds.

temperature

Temperature controls the randomness of the text generation. A lower temperature results in more deterministic and focused outputs, while a higher temperature allows for more creativity and variation. Values typically range from 0.1 to 1.0, with 0.7 being a common default.

top_k

This parameter limits the sampling pool to the top K most likely next words. A lower value makes the output more focused and deterministic, while a higher value introduces more diversity. Common values range from 1 to 50.

top_p

Top-p sampling, also known as nucleus sampling, considers the smallest set of words whose cumulative probability exceeds the specified threshold p. This helps in balancing between focus and creativity. Values range from 0.1 to 1.0, with 0.9 being a typical default.

repeat_penalty

This parameter penalizes the model for repeating the same words or phrases, encouraging more varied and interesting outputs. A value greater than 1.0 increases the penalty, with 1.2 being a common default.
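Taken together, the sampling parameters above can be sanity-checked before a request is sent. This validator is purely illustrative; the ranges are the ones quoted on this page, not hard limits enforced by Ollama:

```python
def validate_sampling(temperature=0.7, top_k=40, top_p=0.9, repeat_penalty=1.2):
    """Check sampling parameters against the ranges described above."""
    if not 0.1 <= temperature <= 1.0:
        raise ValueError("Invalid temperature value")
    if top_k < 1:
        raise ValueError("top_k must be a positive integer")
    if not 0.1 <= top_p <= 1.0:
        raise ValueError("top_p must lie between 0.1 and 1.0")
    if repeat_penalty < 1.0:
        raise ValueError("repeat_penalty below 1.0 encourages repetition")
    return {"temperature": temperature, "top_k": top_k,
            "top_p": top_p, "repeat_penalty": repeat_penalty}
```

A lower temperature with a small top_k gives tightly focused prose; raising both loosens the output toward more varied, creative phrasing.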

seed_number

The seed number ensures reproducibility of the text generation. By setting a specific seed, you can get the same output for the same input parameters, which is useful for debugging and consistency.
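The effect of a fixed seed can be illustrated locally with Python's PRNG; the model server applies the same principle to its token sampling:

```python
import random


def sample_words(seed, vocab, n=5):
    """Draw n words; the same seed always yields the same sequence."""
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(n)]


vocab = ["castle", "mist", "lantern", "harbor", "dawn"]
# Two runs with the same seed produce identical sequences.
assert sample_words(42, vocab) == sample_words(42, vocab)
```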

max_tokens

This parameter sets the maximum number of tokens (words or word pieces) in the generated text. It helps in controlling the length of the output, with typical values ranging from 50 to 200 tokens.

keep_model_alive

This boolean parameter determines whether the model should remain active between requests. Keeping the model alive can reduce latency for subsequent requests but may consume more resources.
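Ollama's API expresses this through a `keep_alive` field on each request. How this node translates its boolean is an assumption, but a plausible mapping looks like:

```python
def keep_alive_value(keep_model_alive):
    # 0 tells Ollama to unload the model immediately after the request;
    # a duration string such as "5m" keeps it resident in memory,
    # avoiding a reload on the next request.
    return "5m" if keep_model_alive else 0
```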

prompt

The prompt is the initial text input that guides the model in generating the description. It should be clear and specific to get the most relevant and coherent output.

system_context

This parameter provides additional context or background information to the model, helping it generate more accurate and contextually appropriate descriptions.
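In practice, the system context and the prompt travel together in the same request; the split keeps role instructions out of the user prompt. A hypothetical pairing (model name and wording are illustrative only):

```python
system_context = (
    "You are a scene writer for a fantasy illustration pipeline. "
    "Answer with a single vivid paragraph, no preamble."
)
prompt = "Describe a ruined watchtower overgrown with roses."

# The request body carries both fields side by side.
request_body = {
    "model": "llama3",
    "system": system_context,
    "prompt": prompt,
    "stream": False,
}
```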

🦙 Ollama Text Describer 🦙 Output Parameters:

result

The result parameter contains the generated text description. This output is a string that reflects the model's interpretation of the provided prompt and input parameters. It is the primary output of the node and can be used directly in your projects for various narrative and descriptive purposes.

🦙 Ollama Text Describer 🦙 Usage Tips:

  • Experiment with different temperature settings to find the right balance between creativity and coherence for your specific project.
  • Use the seed_number parameter to ensure consistency in outputs when fine-tuning your prompts.
  • Adjust the max_tokens parameter to control the length of the generated descriptions, especially if you need concise or detailed outputs.
  • Provide a clear and specific prompt to guide the model effectively and achieve the most relevant descriptions.

🦙 Ollama Text Describer 🦙 Common Errors and Solutions:

"TimeoutError: Request timed out"

  • Explanation: The request to the model server took longer than the specified timeout value.
  • Solution: Increase the timeout parameter value to allow more time for the model to process the request.

"ConnectionError: Unable to connect to API host"

  • Explanation: The node could not establish a connection to the specified API host.
  • Solution: Verify the api_host parameter and ensure that the server is running and accessible.

"ValueError: Invalid temperature value"

  • Explanation: The temperature parameter is set to a value outside the acceptable range.
  • Solution: Ensure the temperature is set between 0.1 and 1.0.

"ModelError: Custom model not found"

  • Explanation: The specified custom model could not be located.
  • Solution: Check the custom_model parameter for accuracy and ensure the model is correctly uploaded and accessible.

"InvalidPromptError: Prompt is empty or too vague"

  • Explanation: The provided prompt is either empty or lacks sufficient detail to guide the model.
  • Solution: Provide a clear and specific prompt to help the model generate a relevant description.
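Timeouts and connection failures are often transient, so a thin retry wrapper around the request is a common mitigation. This is a generic sketch, not part of the node; `send` stands in for whatever function performs the HTTP call:

```python
import time


def with_retries(send, attempts=3, backoff=2.0):
    """Call send(), retrying on TimeoutError/ConnectionError with backoff."""
    for attempt in range(attempts):
        try:
            return send()
        except (TimeoutError, ConnectionError):
            if attempt == attempts - 1:
                raise  # exhausted retries: surface the error
            time.sleep(backoff * (attempt + 1))  # wait 2s, 4s, ...
```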

🦙 Ollama Text Describer 🦙 Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-Ollama-Describer

© Copyright 2024 RunComfy. All Rights Reserved.
