Create consistent characters and ensure they look uniform using your images.
Create stunningly lifelike images with Flux UltraRealistic LoRA V2.
Turn simple footage into epic film scenes with CogVideoX, ControlNet, and Live Portrait.
Advanced audio-driven lip sync technology.
Trellis is an advanced Image-to-3D model for high-quality 3D asset generation.
Professional face swapping toolkit for ComfyUI that enables natural face replacement and enhancement.
Generates videos from text prompts.
Transform dance videos with scene editing, face-swapping, and motion preservation.
Official Flux Tools - Flux Fill for Inpainting and Outpainting
Take your face swapping projects to new heights with Flux PuLID.
Create consistent and realistic characters with precise control over facial features, poses, and compositions.
Official Flux Tools - Flux Depth and Canny ControlNet Model
Generates videos from image+text prompts.
Combine a text prompt with a source video to generate a new video.
CatVTON for easy and accurate virtual try-on.
Blend the FLUX.1 model with FLUX-RealismLoRA for photorealistic AI images.
Generate high-quality human motion videos with MimicMotion, using a reference image and motion sequence.
Flux Upscaler – Achieve 4K, 8K, 16K, and Ultimate 32K Resolution!
Transfer facial expressions and movements from a driving video onto a source video.
OmniGen: Modify Images Based on Reference Images and Prompts
Effortlessly restyle videos, converting realistic characters into anime while keeping the original backgrounds intact.
Official Flux Tools - Flux Redux for Image Variation and Restyling
A new image generation model developed by Black Forest Labs
Guides you through the entire process of training FLUX LoRA models on your custom datasets.
Upscale images to 8K with SUPIR and 4x Foolhardy Remacri model.
Create consistent characters and ensure they look uniform by inputting text.
Effortlessly fill, remove, and refine images, seamlessly integrating new content.
MMAudio: Advanced video-to-audio model for high-quality audio generation.
Create stunning 3D content with Stable Fast 3D and ComfyUI 3D Pack.
Transfers the motion and style from a source video onto a target image or object.
Virtual try-on that creates realistic results by capturing garment details and style.
CogVideoX Fun: Advanced video-to-video model for high-quality video generation.
Merge visuals and prompts for stunning, enhanced results.
Generate 360-degree views of anything from a single image or description.
Achieve better control with FLUX-ControlNet-Depth & FLUX-ControlNet-Canny for FLUX.1 [dev].
With ComfyUI ReActor, you can easily swap the faces of one or more characters in images or videos.
Create multi-view RGB images first, then transform them into 3D assets.
Effortlessly elevate your product photography with a top alternative to Magnific.AI Relight.
Subject Trajectory Video Demo for CogVideoX
Stable Diffusion 3.5 (SD3.5) for high-quality, diverse image generation.
Relight your videos with light maps and prompts.
Edit backgrounds, enhance lighting, and regenerate new scenes easily.
Animate portraits with facial expressions and motion using a single image and reference video.
Use IPAdapter Plus and ControlNet for precise style transfer with a single reference image.
Use customizable parameters to control every feature, from eye blinks to head movements, for natural results.
Fluxtapoz Nodes for RF Inversion and Stylization - Unsampling and Sampling
Generate realistic talking heads and body gestures synced with the provided audio.
Mochi Edit: Modify Videos Using Text-Based Prompts and Unsampling.
Generate 3D content, from multi-view images to detailed meshes.
Explore the XLabs FLUX IPAdapter V2 model and how it compares to V1 for your creative goals.
Discover Flux and 10 versatile In-Context LoRA models for image generation.
Includes both text-to-video and image-to-video modes.
Audio-driven lip-sync for portrait animation in 4K.
Enhance realism by using ControlNet to guide FLUX.1-dev.
Generate multi-view normal maps and color images for 3D assets.
Use SDXL and FLUX to expand and refine images seamlessly.
Controlling latent noise with Unsampling helps dramatically increase consistency in video style transfer.
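As background for the Unsampling technique referenced here (and in the clay- and parchment-style entries below): instead of starting each frame from fresh random noise, the sampler is run in reverse to recover a structured latent from the source frame, so re-sampling begins from noise that is correlated across frames. The sketch below is a toy, self-contained illustration of one deterministic DDIM-style inversion step in NumPy; the random "noise prediction" is a stand-in for a real diffusion model and is not part of any ComfyUI node.

```python
import numpy as np

def ddim_invert_step(x_t, eps, alpha_bar_t, alpha_bar_next):
    """One deterministic DDIM inversion step: push the latent x_t from noise
    level alpha_bar_t up to the higher noise level alpha_bar_next, reusing the
    model's noise prediction eps (toy stand-in for the real sampler)."""
    # Estimate the clean latent implied by the current noisy latent.
    x0_pred = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps) / np.sqrt(alpha_bar_t)
    # Re-noise that estimate up to the next noise level.
    return np.sqrt(alpha_bar_next) * x0_pred + np.sqrt(1.0 - alpha_bar_next) * eps

rng = np.random.default_rng(0)
latent = rng.standard_normal((4, 64, 64)).astype(np.float32)  # stand-in for a frame's VAE latent
alpha_bars = np.linspace(0.999, 0.05, 10)                     # toy noise schedule, clean -> noisy

for t in range(len(alpha_bars) - 1):
    eps = rng.standard_normal(latent.shape).astype(np.float32)  # stand-in for a UNet prediction
    latent = ddim_invert_step(latent, eps, alpha_bars[t], alpha_bars[t + 1])

# `latent` now plays the role of the structured noise that style-transfer
# sampling would start from, instead of unrelated random noise per frame.
```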
Transform your subject with an audio-reactive background made of intricate geometries.
Text to Video Demo Using the Genmo Mochi 1 Model
CogVideoX-5B: Advanced text-to-video model for high-quality video generation.
Object segmentation of videos with unrivaled accuracy.
Input a video and light masks to generate a relit video.
Morphing animation with AnimateDiff LCM, IPAdapter, QRCode ControlNet, and Custom Mask modules.
Use IPAdapter Plus, ControlNet QRCode, and AnimateLCM to create morphing videos quickly.
Create stunning Houdini-like animations with Z-Depth Maps using only a 2D image.
ToonCrafter can generate cartoon interpolations between two cartoon images.
Use Blender to set up 3D scenes and generate image sequences, then use ComfyUI for AI rendering.
Render visuals in ComfyUI and sync audio in TouchDesigner for dynamic audio-reactive videos.
Convert images to animations with ComfyUI IPAdapter Plus and ControlNet QRCode.
Leverage the IPAdapter Plus Attention Mask for precise control of the image generation process.
Tested for looping video and frame interpolation; better than closed-source video generation in certain scenarios.
Accelerate your text-to-video animation using the ComfyUI AnimateLCM Workflow.
With IPAdapter, you can efficiently control the generation of animations using reference images.
Batch Prompt Schedule with AnimateDiff offers precise control over narrative and visuals in animation creation.
Explore AnimateDiff V3, AnimateDiff SDXL, and AnimateDiff V2, and use Upscale for high-resolution results.
Utilize IPAdapters for static image generation and Stable Video Diffusion for dynamic video generation.
Incorporate FreeU with SVD to improve image-to-video conversion quality without additional costs.
Integrate Stable Diffusion and Stable Video Diffusion to convert text directly into video.
Set ControlNet Timestep KeyFrames, such as the first and last frames, to create morphing animations.
Utilize Dynamic Prompts (Wildcards), AnimateDiff, and IPAdapter to generate dynamic animations or GIFs.
Utilize Prompts Travel with AnimateDiff for precise control over specific frames within the animation.
Convert your video into clay style using the Unsampling method.
Convert your video into parchment-style animations using the Unsampling method.
Transform your subjects and have them travel through different scenes seamlessly.
Transform your subjects and give them pulsating, music-driven auras that dance to the rhythm.
Achieve motion graphics animation effects starting from a pre-existing video input.
Enhance Vid2Vid creativity by focusing on the composition and masking of your original video.
ComfyUI Vid2Vid offers two distinct workflows for creating high-quality, professional animations: Vid2Vid Part 1, which enhances your creativity by focusing on the composition and masking of your original video, and Vid2Vid Part 2, which uses SDXL Style Transfer to restyle your video to match your desired aesthetic. This page covers Vid2Vid Part 1.
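For readers unsure what "composition and masking" means in practice: the stylized output is kept only where a subject mask is white, while the original footage shows through everywhere else, so the background stays untouched. Below is a minimal, hypothetical per-frame compositing sketch in plain Python (NumPy + Pillow); it illustrates the idea only, is not the actual ComfyUI Vid2Vid nodes, and the file paths are made up.

```python
import numpy as np
from PIL import Image

def composite_with_mask(original_path, stylized_path, mask_path, out_path):
    """Keep the stylized pixels where the mask is white and the original
    pixels where it is black (toy illustration, not a ComfyUI node)."""
    original = np.asarray(Image.open(original_path).convert("RGB"), dtype=np.float32)
    stylized = np.asarray(Image.open(stylized_path).convert("RGB"), dtype=np.float32)
    # Grayscale mask: 255 = subject (take stylized), 0 = background (keep original).
    mask = np.asarray(Image.open(mask_path).convert("L"), dtype=np.float32)[..., None] / 255.0
    blended = stylized * mask + original * (1.0 - mask)
    Image.fromarray(blended.astype(np.uint8)).save(out_path)

# Hypothetical frame sequence: apply the same blend to every frame.
# for i in range(120):
#     composite_with_mask(f"orig/{i:04d}.png", f"styled/{i:04d}.png",
#                         f"mask/{i:04d}.png", f"out/{i:04d}.png")
```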
Elevate your videos by transforming them into distinctive ceramic art, infusing them with creativity.
Transform your videos into timeless marble sculptures, capturing the essence of classic art.
Convert the original video into the desired animation by using only a few images to define the preferred style.
Give your videos a playful twist by transforming them into lively cartoons.
Transform your videos into mesmerizing Japanese anime.
Effortlessly give your videos a unique anime makeover, capturing the vibrant flat style.
Transform your videos into the style of adventure games, bringing the thrill of gaming to life!
Create captivating visual effects with AnimateDiff and ControlNet (featuring QRCode Monster and Lineart).
Enhance VFX with AnimateDiff, AutoMask, and ControlNet for precise, controlled outcomes.
Discover the innovative use of IPAdapter to create stunning motion art.
Compare Stable Diffusion 3.5 and FLUX.1 in one ComfyUI workflow.
Adapt pre-trained models to specific image styles for stunning 512x512 and 1024x1024 visuals.
Faster image generation and better resource management.
Create realistic personalized photos from text prompts while preserving identity.
Utilize LoRA models, ControlNet, and InstantID for advanced face-to-many transformations.
Combine IPAdapter and ControlNet for efficient texture application and enhanced visuals.
Integrate Stable Diffusion 3 Medium into your workflow to produce exceptional AI art.
MistoLine adapts to various line art inputs, effortlessly generating high-quality images from sketches.
Omost uses LLM coding to generate precise, high-quality images.
Integrate face identities and control styles seamlessly with PuLID and IPAdapter Plus.
Use IPAdapter Plus for fashion model creation, easily changing outfits and styles.
IPAdapter Plus enables effective style & composition transfer, functioning like a 1-image LoRA.
Use various merging methods with IPAdapter Plus for precise, efficient image blending control.
Use LayerDiffuse to generate transparent images or blend backgrounds and foregrounds with one another.
Utilize InstantID and IPAdapter to create customizable, amazing face stickers.
InstantID accurately enhances and transforms portraits with style and aesthetic appeal.
Stable Cascade, a text-to-image model excelling in prompt alignment and aesthetics.
Leverage the IPAdapter FaceID Plus V2 model to create consistent characters.
Use the Portrait Master for greater control over portrait creations without relying on complex prompts.
Experience fast text-to-image synthesis with SDXL Turbo.
Efficiently remove backgrounds, comparing BRIA AI's RMBG 1.4 with Segment Anything.
Easily extend images using the outpainting node and the ControlNet inpainting model.
SUPIR enables photo-realistic image restoration, works with the SDXL model, and supports text-prompt enhancement.
The CCSR model enhances image and video upscaling by focusing more on content consistency.
The APISR model enhances and restores anime images and videos, making your visuals more vibrant and clearer.
Revive faded photos into vibrant memories, preserving every detail for cherished reminiscence.
Use Face Detailer first for facial restoration, followed by the 4x UltraSharp Model for superior upscaling.
Mesh Graphormer ControlNet corrects malformed hands in images while preserving the rest.
Use ControlNet Tile, 4xUltraSharp, and frame interpolation for a high-resolution outcome.
Use LayerDiffuse for image transparency and TripoSR for quick 3D object creation.