Install this extension via the ComfyUI Manager by searching for InstanceDiffusion Nodes:
1. Click the Manager button in the main menu.
2. Select the Custom Nodes Manager button.
3. Enter InstanceDiffusion Nodes in the search bar.
After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.
InstanceDiffusion Nodes enable multi-object prompting within ComfyUI, allowing users to manage and manipulate multiple objects in a single diffusion process efficiently.
InstanceDiffusion Nodes Introduction
ComfyUI-InstanceDiffusion is an extension designed to enhance the capabilities of ComfyUI by integrating InstanceDiffusion. This extension allows AI artists to have precise control over individual elements within an image, enabling the creation of complex and detailed compositions. With ComfyUI-InstanceDiffusion, you can specify the location and attributes of different instances in your image using various methods such as points, scribbles, bounding boxes, and segmentation masks. This level of control can significantly improve the quality and specificity of generated images, making it a powerful tool for artists looking to push the boundaries of AI-generated art.
How InstanceDiffusion Nodes Work
At its core, ComfyUI-InstanceDiffusion works by adding instance-level control to text-to-image diffusion models. Imagine you are an artist who wants to create a scene with multiple objects, each with specific attributes and positions. Instead of relying solely on a global text prompt, you can now provide detailed instructions for each object. For example, you can specify that a cat should be sitting on the left side of the image, a dog should be on the right, and a tree should be in the background. ComfyUI-InstanceDiffusion takes these instructions and integrates them into the image generation process, ensuring that each object is placed and rendered according to your specifications.
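The per-object instructions described above can be pictured as a list of instance specifications, each pairing its own text prompt with a normalized bounding box. The following is a minimal sketch of that idea, not the extension's actual node API; the `instances` structure and `validate` helper are illustrative assumptions.

```python
# Hypothetical sketch of per-instance conditioning; the field names and
# layout are illustrative, not ComfyUI-InstanceDiffusion's real interface.
# Boxes are (x1, y1, x2, y2) in normalized [0, 1] image coordinates.
instances = [
    {"prompt": "a cat sitting",  "bbox": (0.00, 0.40, 0.45, 1.00)},  # left side
    {"prompt": "a dog standing", "bbox": (0.55, 0.40, 1.00, 1.00)},  # right side
    {"prompt": "a tall tree",    "bbox": (0.30, 0.00, 0.70, 0.60)},  # background
]

def validate(instances):
    """Check that every bbox is a well-formed normalized (x1, y1, x2, y2)."""
    for inst in instances:
        x1, y1, x2, y2 = inst["bbox"]
        assert 0.0 <= x1 < x2 <= 1.0 and 0.0 <= y1 < y2 <= 1.0, inst
    return True
```

Alongside these per-instance entries, the global text prompt still describes the scene as a whole; the instance list only constrains where and what each object is.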
InstanceDiffusion Nodes Features
Instance-Level Control
Points: Specify the exact location of an instance using a single point.
Scribbles: Draw free-form lines to indicate the shape and position of an instance.
Bounding Boxes: Define the area where an instance should appear using rectangular boxes.
Segmentation Masks: Use intricate masks to specify the exact shape and position of an instance.
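The four condition types above differ in precision, but boxes and masks are closely related: a bounding box is just a rectangular mask. As a rough illustration (an assumption for exposition, not code from the extension), a normalized box can be rasterized into a binary mask grid like this:

```python
def box_to_mask(bbox, width, height):
    """Rasterize a normalized (x1, y1, x2, y2) box into a binary mask grid.

    A pixel is inside the mask when its center falls within the box, so a
    bounding box is simply the rectangular special case of a segmentation mask.
    """
    x1, y1, x2, y2 = bbox
    return [
        [1 if x1 <= (c + 0.5) / width <= x2 and y1 <= (r + 0.5) / height <= y2 else 0
         for c in range(width)]
        for r in range(height)
    ]
```

Scribbles and points fit the same picture: a point marks a single location, a scribble a free-form set of pixels, and a segmentation mask an arbitrary region.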
Customization Options
Guidance Scale: Adjust the influence of the text prompt on the generated image.
Alpha: Control the fraction of denoising timesteps during which instance-level conditions are applied.
Multi-Instance Sampler (MIS): Reduce information leakage between instances to improve image quality.
Cascade Strength: Enhance image quality using the SDXL refiner.
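To make the alpha parameter concrete: if alpha is 0.8 on a 20-step run, the first 16 denoising steps receive instance-level conditions and the last 4 run on the global prompt alone. This small sketch (an illustration of the described behavior, not the extension's code) shows the arithmetic:

```python
def steps_with_instance_conditioning(total_steps, alpha):
    """Split a sampling schedule according to alpha.

    Alpha is the fraction of denoising timesteps that receive instance-level
    conditions; the remaining steps use only the global text prompt.
    Returns (conditioned_steps, unconditioned_steps).
    """
    conditioned = round(total_steps * alpha)
    return conditioned, total_steps - conditioned
```

Higher alpha enforces instance placement more strictly; lowering it gives the global prompt more influence over the final rendering.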
Integration with ComfyUI
Nodes: Use specialized nodes within ComfyUI to apply instance-level controls.
Example Workflows: Access example workflows to see how different settings and features can be used to achieve various effects.
InstanceDiffusion Nodes Models
ComfyUI-InstanceDiffusion supports multiple models, each designed for specific tasks and conditions. Here are the available models and their use cases:
fusers.ckpt: Used for general instance-level control.
positionnet.ckpt: Focuses on accurately positioning instances within the image.
scaleu.ckpt: Improves adherence to instance conditioning by rescaling feature maps.
These models can be downloaded from Hugging Face and placed in the appropriate directories within ComfyUI.
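After downloading, a quick way to confirm the files landed where the extension expects them is a small check script. This is a hypothetical helper, assuming the `ComfyUI/models/instance_models/` directory named in the troubleshooting section below:

```python
from pathlib import Path

# Model filenames from the list above; the directory is the one the
# troubleshooting section says the extension loads from (an assumption here).
EXPECTED = ["fusers.ckpt", "positionnet.ckpt", "scaleu.ckpt"]

def missing_models(root="ComfyUI/models/instance_models"):
    """Return the expected model files that are not present under root."""
    base = Path(root)
    return [name for name in EXPECTED if not (base / name).exists()]
```

If `missing_models()` returns a non-empty list, re-download the named checkpoints and verify they are complete before restarting ComfyUI.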
What's New with InstanceDiffusion Nodes
Recent Updates
02/25/2024: InstanceDiffusion is now ported into ComfyUI. Check out some cool video demos!
02/21/2024: Support for flash attention, reducing memory usage by more than half.
02/19/2024: Added PiM evaluation for scribble-/point-based image generation.
02/10/2024: Added model evaluation on attribute binding.
02/09/2024: Added model evaluation using the MSCOCO dataset.
02/05/2024: Initial commit.
These updates bring significant improvements in performance, usability, and functionality, making it easier for AI artists to create high-quality, detailed images.
Troubleshooting InstanceDiffusion Nodes
Common Issues and Solutions
Models Not Loading:
Ensure that the models are placed in the correct directories: ComfyUI/models/instance_models/.
Verify that the model files are not corrupted and are fully downloaded.
Unexpected Image Outputs:
Check the guidance scale and alpha settings. Adjusting these parameters can significantly affect the final image.
Ensure that the instance-level conditions (points, scribbles, boxes, masks) are correctly specified.
High Memory Usage:
Enable flash attention to reduce memory usage.
Lower the resolution of the input images or reduce the number of instances.
Frequently Asked Questions
Q: Can I use multiple instance types in one image?
A: Yes, you can combine points, scribbles, bounding boxes, and masks in a single image.
Q: How do I improve the quality of the generated images?
A: Adjust the guidance scale, alpha, and cascade strength settings. Using the SDXL refiner can also enhance image quality.
Learn More about InstanceDiffusion Nodes
For additional resources, tutorials, and community support, check out the following links:
Video Helper Suite for video-related workflows.
These resources provide valuable information and tools to help you get the most out of ComfyUI-InstanceDiffusion.