
ComfyUI Extension: ComfyUi-Ollama-YN

  • Repo Name: ComfyUi-Ollama-YN
  • Author: wujm424606 (Account age: 2302 days)
  • Nodes: 4
  • Last Updated: 2024-07-12
  • GitHub Stars: 0.04K

How to Install ComfyUi-Ollama-YN

Install this extension via the ComfyUI Manager by searching for ComfyUi-Ollama-YN:
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter ComfyUi-Ollama-YN in the search bar.
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


ComfyUi-Ollama-YN Description

ComfyUi-Ollama-YN integrates custom ComfyUI nodes with the Ollama Python client, letting workflows query locally hosted Ollama models and produce improved prompt descriptors for Stable Diffusion.

ComfyUi-Ollama-YN Introduction

ComfyUi-Ollama-YN is an integrated extension designed to enhance the prompt-generation process for AI artists using ComfyUI. Building on the capabilities of the ComfyUI-Prompt-MZ and comfyui-ollama projects, it produces prompts that are more consistent with what Stable Diffusion expects. This is particularly useful for AI artists who need high-quality prompts for their creative projects, making the process more streamlined and efficient.

How ComfyUi-Ollama-YN Works

ComfyUi-Ollama-YN works by integrating several models and functions for generating and refining prompt words. It uses pre-trained models served by Ollama to deduce prompt words from images, answer questions, and embellish text prompts, and it lets users switch between models to get the best results for a given task. Models are loaded and run with simple Ollama commands (for example, ollama run llama3), which keeps the extension adaptable to different creative workflows.
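The nodes themselves are not reproduced here, but the call pattern they wrap is the standard Ollama Python client API. Below is a minimal sketch of that pattern, assuming the ollama package is installed (pip install ollama) and an Ollama server is running locally; the ask_model helper is a name made up for illustration, not part of the extension.

```python
# Minimal sketch of the call pattern these nodes wrap (not the extension's own code).
# Assumes `pip install ollama` and a local Ollama server at the default address.
import ollama

def ask_model(model_name: str, prompt: str) -> str:
    """Send one prompt to the chosen Ollama model and return its reply.

    Switching models is just a matter of passing a different model name.
    """
    response = ollama.chat(
        model=model_name,  # e.g. "llama3", "phi3", "llava"
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    print(ask_model("llama3", "Suggest a short prompt for a watercolor landscape."))
```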

ComfyUi-Ollama-YN Features

1. Image Prompt Deduction

This feature allows you to generate prompt words based on an input image. The default model used for this function is the llava model. If you have a better model, you can change it by updating the model name in the settings.

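At the client level, image prompt deduction amounts to sending the image alongside the request to a vision model such as llava. A rough sketch, assuming the ollama Python package and a locally pulled llava model (the image path is a placeholder):

```python
# Sketch: ask a vision model to describe an image so the description can be
# reused as a prompt. The image path below is a placeholder.
import ollama

response = ollama.chat(
    model="llava",  # default vision model; swap in a better one if you have it
    messages=[{
        "role": "user",
        "content": "Describe this image as a detailed Stable Diffusion prompt.",
        "images": ["./input/example.png"],  # file path (raw bytes are also accepted)
    }],
)
print(response["message"]["content"])
```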

2. Simple Q&A

This feature enables a simple question and answer functionality. The default model for this function is the llama3 model. You can switch to a different model by changing the model name in the settings.

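A single-turn question maps onto one generate (or chat) call against the selected model. A minimal sketch, assuming the default llama3 model is pulled:

```python
# Sketch: one-shot question and answer with no conversation history.
import ollama

answer = ollama.generate(
    model="llama3",
    prompt="In one sentence, what is a negative prompt in Stable Diffusion?",
)
print(answer["response"])
```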

3. Prompt Embellishment

This feature allows you to refine and embellish your text prompts, making them more suitable for your creative needs.

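Embellishment boils down to asking the model to rewrite a short prompt into a richer one. The sketch below uses an illustrative system instruction, not the extension's built-in wording:

```python
# Sketch: a system message steers the model toward rewriting the prompt
# instead of answering it. The instruction text is illustrative only.
import ollama

rough_prompt = "a cat in a garden"

response = ollama.chat(
    model="llama3",
    messages=[
        {"role": "system",
         "content": "Rewrite the user's prompt into a richer, more descriptive image prompt."},
        {"role": "user", "content": rough_prompt},
    ],
)
print(response["message"]["content"])
```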

4. Contextual Q&A

This feature enables the model to answer questions based on the context provided by previous text inputs.

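Context is carried by replaying the earlier turns in the messages list, so the model can resolve references to previous inputs. A minimal sketch:

```python
# Sketch: earlier turns are passed back in so the model can resolve "it".
import ollama

history = [
    {"role": "user", "content": "I want an image of a lighthouse at dusk."},
    {"role": "assistant", "content": "Understood: a lighthouse at dusk."},
    {"role": "user", "content": "Make it stormy and add seagulls."},
]
response = ollama.chat(model="llama3", messages=history)
print(response["message"]["content"])
```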

5. Stable Diffusion Prompt Generation

This feature generates prompt words that closely follow the Stable Diffusion prompt style, making them more effective for use with Stable Diffusion models.

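The usual approach is to constrain the model's output format so it reads like a Stable Diffusion prompt (short, comma-separated descriptors). The instruction text below is illustrative, not the extension's exact template:

```python
# Sketch: force comma-separated, tag-style output in the Stable Diffusion style.
import ollama

subject = "an ancient library lit by candles"

response = ollama.chat(
    model="llama3",
    messages=[
        {"role": "system",
         "content": ("Output only a Stable Diffusion prompt: comma-separated descriptors "
                     "covering subject, style, lighting, and quality tags. No full sentences.")},
        {"role": "user", "content": subject},
    ],
)
print(response["message"]["content"])
```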

ComfyUi-Ollama-YN Models

ComfyUi-Ollama-YN supports various models that can be used for different functionalities. Here are some of the models you can use:

  1. llama3:8b-instruct-q4_K_M - Ideal for instructional prompts.
  2. llama3 - General-purpose model for various tasks.
  3. phi3 - Another versatile model for different applications.
  4. phi3:3.8b-mini-instruct-4k-q4_K_M - A smaller, more efficient quantized model for instructional tasks.
  5. phi3:3.8b-mini-instruct-4k-fp16 - An fp16 (non-quantized) variant of the phi3 mini model.
  6. llava - Default model for image prompt deduction.

You can switch between these models by updating the model name in the settings; a sketch of pulling and listing models from Python follows below.
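Models are normally fetched with the Ollama command line (for example, ollama run llama3), but the same can be done from Python. A minimal sketch, assuming the ollama package and a running Ollama server:

```python
# Sketch: pull a model once (same effect as `ollama pull ...` on the command line),
# then list what is available locally.
import ollama

ollama.pull("phi3:3.8b-mini-instruct-4k-q4_K_M")
print(ollama.list())  # shows the locally available models
```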

What's New with ComfyUi-Ollama-YN

Updates

  • 5/15/2024: Added keep_alive support. This lets a model remain in video memory for a specified duration, improving performance for repeated tasks (see the usage sketch after this list).
  • 7/15/2024: Added support for extra model names, allowing users to add and switch between custom models easily.

These updates enhance the flexibility and performance of the extension, making it more user-friendly and efficient for AI artists.
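The keep_alive value is passed straight through to Ollama. A minimal usage sketch, assuming a client version that accepts the keep_alive parameter:

```python
# Sketch: keep the model resident in VRAM for 10 minutes after this call,
# so repeated generations skip the reload cost.
import ollama

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Give me three lighting keywords."}],
    keep_alive="10m",  # duration string; "0" unloads immediately, -1 keeps it loaded
)
print(response["message"]["content"])
```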

Troubleshooting ComfyUi-Ollama-YN

Common Issues and Solutions

  1. Error: No model found
  • Solution: Ensure that you have installed the required models as described in the installation section. Use commands like ollama run llama3:8b-instruct-q4_K_M to load the models.
  2. Model not loading
  • Solution: Check if the model name is correctly specified in the settings. Ensure that the model files are in the correct directory.
  3. Performance issues
  • Solution: Use the keep_alive feature to keep models in video memory for a specified duration, reducing load times for repeated tasks.

Frequently Asked Questions

  • How do I switch models?
    Update the model name in the settings to switch to a different model.
  • Can I use custom models?
    Yes, you can add custom models by specifying their names in the settings.

Learn More about ComfyUi-Ollama-YN

For more information, tutorials, and community support, visit the project's GitHub repository (https://github.com/wujm424606/ComfyUi-Ollama-YN) and the ComfyUI community channels.

