Compare Stable Diffusion 3.5 and FLUX.1 in one ComfyUI workflow.
Stable Diffusion 3.5 (SD3.5) for high-quality, diverse image generation.
Blend the FLUX.1 model with the FLUX-Realism LoRA for photorealistic AI images.
A text-to-video demo using the Genmo Mochi 1 model.
Transfer facial expressions and movements from a driving video onto a source video.
Streamline AI character creation and ensure uniform appearances.
Convert your video into clay style using the Unsampling method.
Create consistent and realistic characters with precise control over facial features, poses, and compositions.
Virtual try-on that creates realistic results by capturing garment details and style.
Transfer the motion and style from a source video onto a target image or object.
Use customizable parameters to control every feature, from eye blinks to head movements, for natural results.
Generate high-quality human motion videos with MimicMotion, using a reference image and motion sequence.
Convert your video into parchment-style animations using the Unsampling method.
Effortlessly restyle videos, converting realistic characters into anime while keeping the original backgrounds intact.
Achieve better control with FLUX-ControlNet-Depth & FLUX-ControlNet-Canny for FLUX.1 [dev].
Effortlessly fill, remove, and refine images, seamlessly integrating new content.
Walk through the entire process of training FLUX LoRA models using your custom datasets.
Adapt pre-trained models to specific image styles for stunning 512x512 and 1024x1024 visuals.
A new image generation model developed by Black Forest Labs.
Animate portraits with facial expressions and motion using a single image and reference video.
With ComfyUI ReActor, you can easily swap the faces of one or more characters in images or videos.
Controlling latent noise with Unsampling helps dramatically increase consistency in video style transfer.
Transform your subjects and have them travel through different scenes seamlessly.
Transform your subjects and give them pulsating, music-driven auras that dance to the rhythm.
CogVideoX-5B: Advanced text-to-video model for high-quality video generation.
Elevate your product photography effortlessly, a top alternative to Magnific.AI Relight.
Relight your videos with light maps and prompts.
Edit backgrounds, enhance lighting, and regenerate new scenes easily.
Upscale images to 8K with SUPIR and 4x Foolhardy Remacri model.
Morphing animation with AnimateDiff LCM, IPAdapter, QRCode ControlNet, and Custom Mask modules.
Use IPAdapter Plus and ControlNet for precise style transfer with a single reference image.
Use IPAdapter Plus, ControlNet QRCode, and AnimateLCM to create morphing videos quickly.
Achieve motion graphics animation effects starting from a pre-existing video input.
Create stunning Houdini-like animations with Z-Depth Maps using only a 2D image.
ToonCrafter can generate cartoon interpolations between two cartoon images.
Object segmentation of videos with unrivaled accuracy.
Input a video and light masks to generate a relit video.
Use Blender to set up 3D scenes and generate image sequences, then use ComfyUI for AI rendering.
Render visuals in ComfyUI and sync audio in TouchDesigner for dynamic audio-reactive videos.
Leverage the IPAdapter Plus Attention Mask for precise control of the image generation process.
Convert images to animations with ComfyUI IPAdapter Plus and ControlNet QRCode.
Tested for looping video and frame interpolation; better than closed-source video generation in certain scenarios.
Accelerate your text-to-video animation using the ComfyUI AnimateLCM Workflow.
With IPAdapter, you can efficiently control the generation of animations using reference images.
Batch Prompt schedule with AnimateDiff offers precise control over narrative and visuals in animation creation.
Explore AnimateDiff V3, AnimateDiff SDXL and AnimateDiff V2, and use Upscale for high-resolution results.
Utilize IPAdapters for static image generation and Stable Video Diffusion for dynamic video generation.
Incorporate FreeU with SVD to improve image-to-video conversion quality without additional costs.
Integrate Stable Diffusion and Stable Video Diffusion to convert text directly into video.
Set ControlNet Timestep KeyFrames, such as the first and last frames, to create morphing animations.
Utilize Dynamic Prompts (Wildcards), Animatediff, and IPAdapter to generate dynamic animations or GIFs.
Utilize Prompts Travel with Animatediff for precise control over specific frames within the animation.
Enhance Vid2Vid creativity by focusing on the composition and masking of your original video.
ComfyUI Vid2Vid offers two distinct workflows for creating high-quality, professional animations. Vid2Vid Part 1 enhances your creativity by focusing on the composition and masking of your original video, while Vid2Vid Part 2 uses SDXL Style Transfer to match your video's style to your desired aesthetic. This page specifically covers Vid2Vid Part 1.
Elevate your videos with a transformation into distinctive ceramic art, infusing them with creativity.
Transform your videos into timeless marble sculptures, capturing the essence of classic art.
Convert the original video into the desired animation by using only a few images to define the preferred style.
Give your videos a playful twist by transforming them into lively cartoons.
Transform your videos into mesmerizing Japanese anime.
Give your videos a unique anime makeover effortlessly, capturing the vibrant flat style.
Transform your videos into the style of adventure games, bringing the thrill of gaming to life!
Create captivating visual effects with AnimateDiff and ControlNet (featuring QRCode Monster and Lineart).
Enhance VFX with AnimateDiff, AutoMask, and ControlNet for precise, controlled outcomes.
Discover the innovative use of IPAdapter to create stunning motion art.
Faster image generation and better resource management.
Merge visuals and prompts for stunning, enhanced results.
Create realistic personalized photos from text prompts while preserving identity.
Utilize LoRA models, ControlNet, and InstantID for advanced face-to-many transformations.
Combine IPAdapter and ControlNet for efficient texture application and enhanced visuals.
Integrate Stable Diffusion 3 medium into your workflow to produce exceptional AI art.
MistoLine adapts to various line art inputs, effortlessly generating high-quality images from sketches.
Omost uses LLM coding to generate precise, high-quality images.
Integrate face identities and control styles seamlessly with PuLID and IPAdapter Plus.
Use IPAdapter Plus for your fashion model creation, easily changing outfits and styles.
IPAdapter Plus enables effective style & composition transfer, functioning like a 1-image LoRA.
Use various merging methods with IPAdapter Plus for precise, efficient image blending control.
Use LayerDiffuse to generate transparent images or blend backgrounds and foregrounds with one another.
Utilize Instant ID and IPAdapter to create customizable, amazing face stickers.
InstantID accurately enhances and transforms portraits with style and aesthetic appeal.
Stable Cascade, a text-to-image model excelling in prompt alignment and aesthetics.
Leverage IPAdapter FaceID Plus V2 model to create consistent characters.
Use the Portrait Master for greater control over portrait creations without relying on complex prompts.
Experience fast text-to-image synthesis with SDXL Turbo.
Efficiently remove backgrounds, comparing BRIA AI's RMBG 1.4 with Segment Anything.
Easily extend images using outpainting node and ControlNet inpainting model.
SUPIR enables photo-realistic image restoration, works with SDXL model, and supports text-prompt enhancement.
The CCSR model enhances image and video upscaling by focusing more on content consistency.
The APISR model enhances and restores anime images and videos, making your visuals more vibrant and clearer.
Revive faded photos into vibrant memories, preserving every detail for cherished reminiscence.
Use Face Detailer first for facial restoration, followed by the 4x UltraSharp Model for superior upscaling.
Mesh Graphormer ControlNet corrects malformed hands in images while preserving the rest.
Use ControlNet Tile, 4xUltraSharp, and frame interpolation for a high-resolution outcome.
Use LayerDiffuse for image transparency and TripoSR for quick 3D object creation.
© Copyright 2024 RunComfy. All Rights Reserved.