ComfyUI_IPAdapter_plus integrates IPAdapter models into ComfyUI, adapting code from the original IPAdapter repository and laksjdjf's implementation to align with ComfyUI's design principles.
Deforum Nodes offer an official animation pipeline for creating frame-by-frame generative motion art, enabling unique and dynamic visual content creation.
mape's helpers enhance ComfyUI with multi-monitor image preview, variable assignment/wireless nodes, prompt tweaking, command palette, pinned favorite nodes, node navigation, fuzzy search, node time tracking, and error management.
Primere nodes for ComfyUI offer diverse utility nodes, including Inputs (prompt, styles, dynamic, merger), Outputs (style pile), Dashboard (selectors, loader, switch), Networks (LORA, Embedding, Hypernetwork), and Visuals (visual selectors).
ComfyUI-IDM-VTON [WIP] is an adaptation of IDM-VTON for virtual try-on, enabling users to visualize clothing on different body types using the ComfyUI interface.
ComfyUI-Prompt-MZ provides nodes that leverage llama.cpp for prompt-related text processing within the ComfyUI framework, improving efficiency and flexibility in prompt management.
ComfyUI_StoryDiffusion brings StoryDiffusion into ComfyUI, enabling the generation of image sequences with consistent characters for story-based content and enhancing narrative workflows within the framework.
Comfyui-MusePose is an image-to-video generation framework that creates virtual human animations based on control signals like pose. Users must manually download the necessary weights from Hugging Face for optimal functionality.
comfyui_LLM_party is a set of block-based LLM agent node libraries for ComfyUI, enabling users to efficiently construct and integrate LLM workflows into existing SD workflows.
ComfyUI-Hallo is a custom node for ComfyUI, designed to integrate with the Hallo project from Fudan Generative Vision. It brings Hallo's audio-driven portrait animation capabilities into ComfyUI.
ComfyUI-IC-Light-Native is a native ComfyUI implementation of the IC-Light framework, enabling image relighting directly within ComfyUI workflows.
ComfyUI-Inpaint-CropAndStitch features two nodes: 'Inpaint Crop' crops the image around the masked region before sampling, with options to expand the context by a pixel amount or a scale factor, and 'Inpaint Stitch' seamlessly stitches the inpainted result back into the original image without altering unmasked areas.
ComfyUI-mxToolkit enhances ComfyUI with nodes for seed randomization, pausing generation, and saving values. It includes slider nodes for input control and an alternative Reroute node, streamlining the user experience.
ComfyUI_DanTagGen is a ComfyUI node for Kohaku's DanTagGen Demo, designed to generate tags efficiently. It integrates seamlessly with ComfyUI, enhancing tag generation capabilities for various applications.
Prompt Injection Node for ComfyUI enables precise control over image generation by injecting specific prompts into Stable Diffusion UNet blocks, particularly MID0 and MID1. It offers three node variations for flexible integration, customizable learning rates for targeted adjustments, and potential for a 'Mix of Experts' approach by dynamically swapping blocks based on prompt content.
ComfyUI-BiRefNet integrates the Bilateral Reference Network, which excels at high-resolution salient object segmentation, into ComfyUI nodes, simplifying the use of this state-of-the-art model for users.
Crystools enhances ComfyUI with features like resource monitoring, progress tracking, and metadata viewing. It allows image and JSON comparisons, displays values, and offers improved nodes for loading/saving images and previews, revealing hidden data seamlessly.
ComfyUI-Florence2 integrates Microsoft's Florence2 vision model into ComfyUI, enabling functionalities like captioning, object detection, and segmentation.
ComfyUI-J offers a distinct set of nodes based on Diffusers, enhancing model import, weighted prompts, inpainting, reference-only modes, and controlnet functionalities, differing from Comfy's KSampler series.
ComfyUI-LuminaWrapper integrates Lumina models into ComfyUI, providing specialized wrapper nodes to enhance functionality and streamline model usage within the ComfyUI framework.
ComfyUI-MagickWand integrates ImageMagick's powerful image editing and manipulation capabilities into ComfyUI via the Wand Python bindings, enhancing digital image processing. Note: ImageMagick must be installed manually.
ComfyUI-Tara-LLM-Integration is a robust node for ComfyUI that incorporates Large Language Models (LLMs) to streamline and automate workflows, enabling the creation of intelligent processes for content generation, API key management, and seamless LLM integration.
ComfyUI-UVR5 is a ComfyUI custom node wrapping Ultimate Vocal Remover (UVR5), efficiently separating vocals from background music to enhance audio processing within ComfyUI workflows.
ComfyUI-layerdiffuse (layerdiffusion) integrates LayerDiffusion into ComfyUI, enhancing image processing capabilities by enabling advanced diffusion techniques for improved visual outputs.
ComfyUI_MagicClothing integrates garment and prompt functionalities into ComfyUI, enhancing the user interface with advanced clothing design and customization features.
ComfyUI ProPainter Nodes is a custom node implementation of the ProPainter framework designed for video inpainting, enhancing video editing capabilities within the ComfyUI environment.
BrushNet for ComfyUI enables image inpainting through custom nodes, utilizing a decomposed dual-branch diffusion model for seamless plug-and-play functionality.
ComfyUI-DepthAnythingV2 integrates DepthAnythingV2 models into ComfyUI, enabling automatic model downloads to enhance depth-related functionalities within the ComfyUI framework.
ComfyUI-IC-Light provides native nodes for the ComfyUI interface, enhancing its functionality and integration capabilities.
ComfyUI Impact Pack enhances facial details with detector and detailer nodes, and includes an iterative upscaler for improved image quality.
Phi-3-mini in ComfyUI integrates nodes like Phi3mini_4k_ModelLoader_Zho, Phi3mini_4k_Zho, and Phi3mini_4k_Chat_Zho, enhancing model loading, processing, and chat functionalities within the ComfyUI framework.
ComfyUI-ToonCrafter integrates ToonCrafter with ComfyUI, enabling generative keyframe animation. It supports animation rendering and prediction in Blender, enhancing animation workflows.
ComfyUI Ollama integrates custom nodes into ComfyUI workflows, leveraging the Ollama Python client to seamlessly incorporate the capabilities of large language models (LLMs).
ComfyUI's ControlNet Auxiliary Preprocessors reworks the original preprocessors using ControlNet auxiliary models, replacing most v1 preprocessors with v1.1 versions. It maintains compatibility with old workflows but advises against installing it alongside the older controlnet preprocessor pack to avoid conflicts.
AnimateDiff Evolved enhances ComfyUI by integrating improved motion models from sd-webui-animatediff. Users can download and use original or finetuned models, placing them in the specified directory for seamless workflow sharing.
V-Express: Conditional Dropout for Progressive Training of Portrait Video Generation balances control signals like pose, image, and audio in portrait video generation. It uses progressive dropout to enhance weak signals, ensuring effective convergence and controlled generation.
ComfyUI-Flowty-TripoSR integrates the TripoSR model into ComfyUI, enabling fast feedforward 3D reconstruction from a single image. Developed by Tripo AI and Stability AI, it simplifies creating 3D models directly within ComfyUI.
Perturbed-Attention Guidance for ComfyUI enhances image generation by manipulating attention maps, allowing for refined control over visual outputs. This extension adjusts attention mechanisms to improve detail and coherence in generated images.
ComfyUI-AnimateAnyone-Evolved enhances AnimateAnyone by using image sequences and reference images to create stylized videos. It aims for pose-to-video results at 1+ FPS on GPUs like RTX 3080 or better.
ComfyUI-DragAnything integrates DragAnything into ComfyUI, enabling trajectory-based motion control in video generation: users draw drag paths for any object or entity to direct its movement in the generated video.
ComfyUI-FLATTEN provides nodes for integrating optical flow-guided attention, enabling consistent text-to-video editing within the ComfyUI framework.
ComfyUI-Flowty-CRM is a custom node for ComfyUI that integrates Convolutional Reconstruction Models, enabling high-fidelity feed-forward single image-to-3D generation directly within the interface.
ComfyUI-Flowty-LDSR is a custom node for ComfyUI that integrates Latent Diffusion Super Resolution (LDSR) models, enhancing image resolution capabilities within the interface.
ComfyUI-Long-CLIP enhances ComfyUI by supporting a drop-in replacement of CLIP-L, specifically for SD1.5. It uses the SeaArtLongClip module to expand the token limit from 77 to 248, improving long-prompt handling.
ComfyUI-MotionCtrl-SVD enhances ComfyUI by integrating MotionCtrl-SVD weights, enabling advanced motion control capabilities. Users need to download the weights and place them in the ComfyUI models checkpoints directory.
ComfyUI-MotionCtrl enables motion control in ComfyUI by integrating specific weights. Users must download the MotionCtrl weights and place them in the ComfyUI models checkpoints directory to activate this functionality.
ComfyUI OOTDiffusion is a custom node that integrates OOTDiffusion's outfit try-on capabilities, enabling virtual clothing try-on directly within ComfyUI.
ComfyUI-Qwen-2 integrates Qwen-2 models into ComfyUI, enabling advanced AI functionalities. It supports seamless model loading, configuration, and execution within the ComfyUI interface, enhancing user experience and efficiency.
ComfyUI-fastblend enhances ComfyUI with nodes for video-to-video processing, including image rebatching and openpose integration, optimizing video editing workflows.
Face Analysis for ComfyUI leverages DLib to compute Euclidean and Cosine distances between faces, requiring the installation of the Shape Predictor and Face Recognition model from the Install models menu.
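The two distance metrics the node reports can be sketched independently of DLib; the helper names and the 128-dimension assumption (DLib's face descriptor size) are illustrative, not the extension's actual API:

```python
import numpy as np

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """L2 distance between two face embeddings (smaller = more similar)."""
    return float(np.linalg.norm(a - b))

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cosine similarity (0 when the embeddings point the same way)."""
    sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(1.0 - sim)

# Hypothetical 128-d embeddings standing in for DLib face descriptors.
rng = np.random.default_rng(0)
e1 = rng.normal(size=128)
e2 = e1 + rng.normal(scale=0.05, size=128)  # a near-duplicate face
print(euclidean_distance(e1, e2), cosine_distance(e1, e2))
```

Euclidean distance is sensitive to embedding magnitude, while cosine distance compares direction only, which is why face pipelines often report both.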
ComfyUI InstantID (Native Support) integrates InstantID directly into ComfyUI without using diffusers, offering a unique native implementation. Currently in beta, it seeks user feedback for further refinement.
ComfyUI_omost integrates the Omost framework into ComfyUI, enabling advanced regional prompt functionalities. Note: ComfyUI_densediffusion installation is required for this node to operate.
PuLID_ComfyUI is a native ComfyUI implementation of PuLID (Pure and Lightning ID customization), enabling fast, identity-preserving image generation from a reference face without relying on diffusers.
ComfyUI AnyNode: Any Node you ask for enables the auto-generation of functional nodes using LLMs, allowing for the creation of complex workflows. It supports API compatibility with OpenAI, LocalLLMs, and Gemini.
ComfyUI Iterative Mixing Nodes enhance image generation by iteratively refining outputs. Key nodes include Iterative Mixing KSampler, Batch Unsampler, and Iterative Mixing KSampler Advanced, optimizing the sampling process for improved results.
ComfyUI-JDCN offers custom nodes for efficient file and data management, including saving and importing latents, converting lists to strings, moving files, and batch loading images, enhancing ComfyUI's functionality.
ComfyUI DenseDiffusion is a custom node implementing DenseDiffusion for ComfyUI, enabling training-free regional and layout control of image generation by modulating attention according to user-specified regions.
ComfyUI-IF_AI_tools offers various AI tools for ComfyUI, starting with visual language (VL) and prompt creation tools using Ollama as the backend, with plans for further development as time permits.
ComfyUI-DynamiCrafterWrapper integrates DynamiCrafter's image2video and frame interpolation models into ComfyUI, also supporting ToonCrafter for enhanced animation and video creation capabilities.
Lora-Training-in-Comfy simplifies the creation of LoRA models within ComfyUI. It ensures users have access to the latest nodes for efficient model training, enhancing the overall user experience.
ComfyUI-TCD implements TCD (Trajectory Consistency Distillation) sampling in ComfyUI, enabling high-quality image generation in very few sampling steps.
Vector_Sculptor_ComfyUI enhances compositions by adjusting conditioning towards similar concepts for enrichment or precision. It uses cosine similarity scores to gather pre-conditioning vectors, stopping when scores increase, thus setting a relative direction for similar concepts.
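The cosine-similarity gathering idea can be illustrated with a toy sketch; the function names, the mean-direction blend, and the simple threshold stopping rule are assumptions for illustration, not the extension's exact algorithm:

```python
import numpy as np

def cosine_scores(target: np.ndarray, vocab: np.ndarray) -> np.ndarray:
    """Cosine similarity of one embedding vector against a bank of vectors."""
    t = target / np.linalg.norm(target)
    v = vocab / np.linalg.norm(vocab, axis=1, keepdims=True)
    return v @ t

def sculpt(target: np.ndarray, vocab: np.ndarray,
           strength: float = 0.3, threshold: float = 0.5) -> np.ndarray:
    """Nudge `target` toward the mean of sufficiently similar bank vectors."""
    scores = cosine_scores(target, vocab)
    picked = vocab[scores >= threshold]
    if len(picked) == 0:
        return target  # nothing similar enough: leave conditioning untouched
    direction = picked.mean(axis=0)
    return (1.0 - strength) * target + strength * direction

rng = np.random.default_rng(1)
bank = rng.normal(size=(16, 8))  # stand-in for token embedding vectors
vec = bank[0] + 0.01 * rng.normal(size=8)
print(sculpt(vec, bank).shape)   # (8,)
```

Raising `strength` pulls the conditioning harder toward neighboring concepts (enrichment); lowering it keeps the prompt's original direction (precision).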
VLM_nodes offers custom nodes for Vision Language Models (VLM) and Large Language Models (LLM), enabling image captioning, automatic prompt generation, creative and consistent prompt suggestions, and keyword extraction.
ComfyUI-SuperBeasts offers custom HDR effect nodes for ComfyUI, developed by SuperBeasts.AI. These nodes enhance the user interface with advanced visual effects, tailored for creative projects.
ComfyUI Whisper enables audio transcription and video subtitling within ComfyUI, streamlining the process of converting spoken content into text and adding accurate subtitles to video files.
ComfyUI-Mana-Nodes enhances ComfyUI with features like font animation, speech recognition, caption generation, and text-to-speech (TTS), providing advanced multimedia capabilities for dynamic user interfaces.
Geowizard depth and normal estimation in ComfyUI is a diffusers (0.27.2) wrapper node that automatically downloads the Geowizard model from Hugging Face to ComfyUI/models/diffusers/geowizard, enabling depth and normal estimation functionalities.
ComfyUI_WordCloud generates visual word clouds from text data using nodes like Word Cloud and Load Text File, enabling users to create graphical representations of word frequency and importance within a given text.
ResAdapter for ComfyUI integrates the ResAdapter node into ComfyUI, enabling users to utilize the ResAdapter functionality seamlessly within the ComfyUI environment.
Tripo for ComfyUI integrates custom nodes into ComfyUI, enabling users to generate 3D models from text and image prompts using Tripo's advanced 3D creation capabilities.
ComfyUI-ZeroShot-MTrans is an unofficial ComfyUI custom node enabling Zero-Shot Material Transfer from a single image. It transfers material properties (e.g., gold) from an exemplar image to an input image (e.g., an apple) while maintaining accurate lighting and consistency.
ComfyUI_StreamDiffusion is a straightforward implementation of StreamDiffusion, designed to enable real-time interactive generation within the ComfyUI framework.
ComfyUI_VisualStylePrompting integrates Visual Style Prompting with Swapping Self-Attention into ComfyUI, enhancing image generation by allowing dynamic style changes through self-attention mechanisms.
Comfyui-SAL-VTON enables virtual try-on for models in ComfyUI by linking garments with persons using semantically associated landmarks, based on the 2023 paper by Keyu Y. and Tingwei G.
SuperPrompter Node for ComfyUI leverages the SuperPrompt-v1 model from Hugging Face to generate text from user prompts, offering multiple parameters to fine-tune the text generation process.
comfyUI_TJ_NormalLighting is a custom node for comfyUI that utilizes normal maps to apply virtual lighting effects to images, enhancing visual depth and realism.
Cozy Human Parser is a ComfyUI node designed to automatically extract masks for body regions and clothing/fashion items, developed by the CozyMantis squad.
APISR IN COMFYUI is an unofficial implementation designed to enhance both image and video processing within the ComfyUI framework.
ComfyUI-AniPortrait integrates AniPortrait into ComfyUI, enabling audio-driven synthesis of animated portraits from a reference image and an audio track.
ComfyUI-BiRefNet-ZHO enhances the BiRefNet integration within ComfyUI, supporting both image and video processing for improved performance and functionality.
ComfyUI-CCSR is an upscaler node for ComfyUI, designed to enhance image resolution. It integrates seamlessly into the ComfyUI framework, providing users with advanced upscaling capabilities for improved image quality.
DepthFM IN COMFYUI is an unofficial implementation of DepthFM, a fast flow-matching-based monocular depth estimation model, bringing its depth maps into the ComfyUI framework.
ComfyUI-Diffusers integrates the diffuser pipeline into ComfyUI, enhancing its functionality by allowing users to utilize advanced diffusion techniques within the ComfyUI framework.
ComfyUI-Gemini integrates Gemini-pro and Gemini-pro-vision into ComfyUI, enhancing its functionality with advanced features and improved user experience.
ComfyUI HiDiffusion offers simple custom nodes for testing and utilizing HiDiffusion technology, enhancing the functionality and experimentation capabilities within the ComfyUI framework.
ComfyUI-IPAnimate generates high-definition, controllable videos frame by frame using IPAdapter+ControlNet, avoiding the blurriness associated with AnimateDiff.
ComfyUI-InstantID is an unofficial implementation of InstantID for ComfyUI, designed to integrate InstantID's functionalities into the ComfyUI framework, enhancing its capabilities.
ComfyUI-MuseV integrates MuseV, a diffusion-based framework for generating long virtual human videos, into ComfyUI.
ComfyUI PhotoMaker (ZHO) is an unofficial implementation of TencentARC's PhotoMaker for ComfyUI, designed to enhance photo editing capabilities within the ComfyUI framework.
ComfyUI-Qwen-VL-API integrates QWen-VL-Plus and QWen-VL-Max into ComfyUI, enhancing its visual language processing capabilities. This extension optimizes image and text analysis within the ComfyUI framework.
ComfyUI-RAVE is an unofficial implementation of RAVE within the ComfyUI framework, designed to integrate RAVE's video processing capabilities into ComfyUI, enhancing its functionality for video-related tasks.
ComfyUI-depth-fm provides fast and accurate monocular depth estimation, enhancing image analysis by determining depth from single images efficiently.
ComfyUI-moondream is an image-to-text query node with batch processing capabilities, enabling efficient conversion of multiple images to text within the ComfyUI framework.
Comfyui_image2prompt is an extension for ComfyUI that converts images to text using nodes like Image to Text and Loader Image to Text Model. It facilitates seamless image-to-text transformation within the ComfyUI framework.
ComfyUI InstantID Faceswapper integrates InstantID's faceswap technology into ComfyUI, enabling efficient face-swapping using LCM Lora for high-quality results in minimal steps. It is compatible exclusively with SDXL checkpoints.
Comfyui-prompt-composer is a tool suite for managing prompts, allowing users to sequence and group strings using nodes. These nodes can be chained in any order, facilitating flexible and logical prompt creation.
ComfyUI-TCD-scheduler integrates custom sampler nodes implementing Zheng et al.'s Trajectory Consistency Distillation, enhancing ComfyUI with advanced sampling techniques for improved performance.
ComfyUI jank HiDiffusion is an experimental implementation of HiDiffusion for ComfyUI, aiming to integrate advanced diffusion techniques into the user interface.
komojini-comfyui-nodes provides custom nodes for ComfyUI, specifically designed for video generation, including a YouTube Video Loader node.
ComfyUI-APISR integrates APISR upscale models into ComfyUI, enhancing image resolution capabilities. Note: The repository name has been updated to ComfyUI-APISR-KJ.
ComfyUI-DDColor integrates the DDColor tool into ComfyUI, enabling users to utilize advanced color manipulation features within the ComfyUI environment.
DiffusionLight implementation for ComfyUI simplifies the creation of light probes using the DiffusionLight method. It requires placing the included LoRA, converted from the original diffusers, into the ComfyUI/loras folder.
ComfyUI-Dream-Interpreter interprets your dreams and immerses you within them, providing a unique, interactive experience based on your dream narratives.
ComfyUI-SUPIR integrates SUPIR upscaling into ComfyUI via wrapper nodes, enhancing image resolution within the interface.
Tiled Diffusion & VAE for ComfyUI allows large image drawing and upscaling with limited VRAM using advanced diffusion tiling algorithms, Mixture of Diffusers and MultiDiffusion, along with pkuliyi2015's Tiled VAE algorithm.
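The tiling idea behind these algorithms can be sketched as computing overlapping tile spans; the helper names below are hypothetical, and the real nodes operate on latents and blend overlaps with weighted masks:

```python
def tile_coords(size: int, tile: int, overlap: int) -> list[tuple[int, int]]:
    """1-D start/end positions for overlapping tiles covering `size` pixels."""
    if tile >= size:
        return [(0, size)]
    stride = tile - overlap
    # Regular strides, plus one final tile flush against the far edge.
    starts = list(range(0, size - tile, stride)) + [size - tile]
    return [(s, s + tile) for s in starts]

def tile_grid(w: int, h: int, tile: int = 512, overlap: int = 64):
    """2-D tiling as the Cartesian product of the per-axis spans."""
    return [(x0, y0, x1, y1)
            for y0, y1 in tile_coords(h, tile, overlap)
            for x0, x1 in tile_coords(w, tile, overlap)]

print(tile_coords(1024, 512, 64))  # -> [(0, 512), (448, 960), (512, 1024)]
```

Each tile is denoised (or VAE-decoded) independently at a VRAM-friendly size, and the overlap regions are averaged so seams do not show in the final image.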
ComfyUI-ELLA integrates ELLA (Efficient Large Language Model Adapter), developed by TencentQQGYLab, which equips diffusion models with LLM-based text encoding for improved comprehension of long, dense prompts.
ComfyUI PhotoMaker Plus integrates PhotoMaker models into ComfyUI, enhancing photo editing capabilities. Users must delete the old custom_nodes/ComfyUI-PhotoMaker directory and reinstall due to a repository name change.
© Copyright 2024 RunComfy. All Rights Reserved.