ComfyUI Node: Ollama API

Class Name: OllamaAPI
Category: AI API/Ollama
Author: al-swaiti (account age: 1187 days)
Extension: GeminiOllama ComfyUI Extension
Last Updated: 2025-03-06
GitHub Stars: 0.04K

How to Install GeminiOllama ComfyUI Extension

Install this extension via the ComfyUI Manager by searching for GeminiOllama ComfyUI Extension:
  1. Click the Manager button in the main menu.
  2. Click the Custom Nodes Manager button.
  3. Enter GeminiOllama ComfyUI Extension in the search bar.
After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.

Ollama API Description

Generates text with Ollama AI models for creative projects, optionally taking an image input to steer the generated content.

Ollama API:

The OllamaAPI node connects to the Ollama API to generate text from user-defined prompts. You select a model and supply a prompt, and the model returns a textual response. The node can also accept an optional image, which is encoded and sent along with the prompt to influence the generated content. This flexibility makes it a useful tool for AI artists and creators who want contextually relevant, AI-generated text in their workflows.

Ollama API Input Parameters:

prompt

The prompt parameter is a string input that serves as the primary text input for the Ollama model. It is the question or statement you want the model to respond to. The default value is "What is the meaning of life?", and it supports multiline text, allowing for complex and detailed prompts. This parameter is crucial as it directly influences the content and relevance of the generated output.

ollama_model

The ollama_model parameter allows you to select from a list of available Ollama models. This selection determines which model will process the input prompt. The models are fetched dynamically from the Ollama API, and if the fetch fails, it defaults to using the "llama2" model. Choosing the right model can significantly impact the style and accuracy of the generated content.
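The dynamic model lookup with a "llama2" fallback can be sketched as follows. This is a hedged sketch, not the node's actual code: the endpoint and response shape follow Ollama's public HTTP API, and the default URL is an assumption.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint (assumption)

def get_ollama_models(base_url=OLLAMA_URL, fallback=("llama2",)):
    """List locally available model names, falling back to a default on failure."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        # /api/tags responds with {"models": [{"name": "llama2:latest", ...}, ...]}
        names = [m["name"] for m in data.get("models", [])]
        return names or list(fallback)
    except Exception:
        # Any network or parsing failure falls back to the default model.
        return list(fallback)
```

The fallback ensures the node's model dropdown is never empty, even when the Ollama server is unreachable.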

keep_alive

The keep_alive parameter is an integer specifying, in minutes, how long the model should stay loaded in memory after a request (Ollama's keep_alive option). It ranges from 0 to 60, with a default value of 0. Keeping the model loaded is useful when multiple requests are made in quick succession, since it avoids reloading the model each time.

image

The image parameter is an optional input that allows you to include an image as part of the request. If provided, the image is converted to a base64-encoded string and sent along with the prompt. This can be used to provide additional context or influence the text generation process based on visual input.
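Putting the four inputs together, a request in the spirit of the description above might be assembled like this. The field names follow Ollama's /api/generate HTTP API; the helper names, default URL, and the minutes-to-duration-string conversion are illustrative assumptions, not the node's exact implementation.

```python
import base64
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint (assumption)

def build_generate_payload(prompt, model, keep_alive_minutes=0, image_bytes=None):
    """Assemble a /api/generate request body from the node's four inputs."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete response instead of a stream
        # Ollama accepts keep_alive as a duration string, e.g. "10m".
        "keep_alive": f"{keep_alive_minutes}m",
    }
    if image_bytes is not None:
        # The optional image is sent base64-encoded alongside the prompt.
        payload["images"] = [base64.b64encode(image_bytes).decode("utf-8")]
    return payload

def generate_text(prompt, model="llama2", **kwargs):
    """Send the request and return the generated text (needs a running server)."""
    body = json.dumps(build_generate_payload(prompt, model, **kwargs)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

Calling `generate_text("What is the meaning of life?")` against a local Ollama server would return the model's textual response, mirroring the node's text output.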

Ollama API Output Parameters:

text

The text output parameter is a string that contains the generated response from the Ollama model. This output is the result of processing the input prompt (and optionally, the image) through the selected model. The generated text can be used in various creative applications, providing insights, narratives, or other forms of textual content that align with the input parameters.

Ollama API Usage Tips:

  • Ensure that your prompt is clear and specific to get the most relevant and accurate responses from the model.
  • Experiment with different Ollama models to find the one that best suits your creative needs, as each model may have unique strengths and styles.
  • Use the keep_alive parameter to keep the model loaded between requests if you plan to make several in a short period, which can improve performance by avoiding repeated model loads.

Ollama API Common Errors and Solutions:

Failed to fetch Ollama models. Status code: <status_code>

  • Explanation: This error occurs when the request to fetch available Ollama models from the API fails, possibly due to network issues or incorrect API URL.
  • Solution: Check your network connection and ensure that the Ollama URL in the configuration is correct. If the problem persists, consider using the default "llama2" model as a fallback.

Error fetching Ollama models: <error_message>

  • Explanation: This error indicates an exception occurred while trying to retrieve the list of models, which could be due to a misconfiguration or an unexpected server response.
  • Solution: Verify the configuration file for correct API settings and ensure the server is accessible. If necessary, consult the server logs for more detailed error information.
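The two failure modes above can be told apart in code roughly like this. This is a hedged sketch, not the node's actual implementation: with Python's urllib, a non-200 status surfaces as urllib.error.HTTPError, while connection and parsing problems surface as other exceptions.

```python
import json
import urllib.error
import urllib.request

def fetch_models_or_report(base_url):
    """Fetch Ollama model names, reporting each failure mode separately."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            return [m["name"] for m in json.load(resp).get("models", [])]
    except urllib.error.HTTPError as e:
        # Server reachable but returned a non-200 status code.
        print(f"Failed to fetch Ollama models. Status code: {e.code}")
    except Exception as e:
        # Wrong URL, server down, or an unexpected/invalid response.
        print(f"Error fetching Ollama models: {e}")
    return ["llama2"]  # fall back to the default model
```

Either branch logs the problem and falls back to "llama2", matching the solutions suggested above.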

Ollama API Related Nodes

Go back to the extension to check out more related nodes.
GeminiOllama ComfyUI Extension