
LivePortrait | Animate Portraits | Vid2Vid

Workflow Name: RunComfy/LivePortrait-Vid2Vid
Workflow ID: 0000...1108
The ComfyUI LivePortrait Vid2Vid workflow transfers facial expressions and movements from a driving video onto a source video. By analyzing the driving performance and applying it to the source frames, it produces realistic, dynamic output, enabling advanced manipulation and animation of facial performances.

Thanks to kijai's ComfyUI-LivePortraitKJ node and workflow, creating realistic LivePortrait animations in ComfyUI is now easier. The following is a breakdown of the key components and parameters of his workflow.

Please read the LivePortrait Img2Vid description first to understand the workflow steps. Once you are familiar with the LivePortrait Img2Vid process, you will notice a few small differences between the Vid2Vid and Img2Vid workflows.

The Difference Between ComfyUI LivePortrait Vid2Vid and Img2Vid

1. Load videos using "VHS_LoadVideo" instead of images

  • In the LivePortrait Img2Vid workflow, you load a static image as the source using the "LoadImage" node. In the Vid2Vid workflow, you load a video as the source instead, adjusting "frame_load_cap" to control how many frames are loaded.
  • Resize the source video to a higher resolution, such as 1024x1024, for better quality. After loading the source video with "VHS_LoadVideo", use the "ImageResizeKJ" node to upscale the frames (see the sketch after this list). While 512x512 is often sufficient for the static source image in the Img2Vid workflow, videos benefit from the extra resolution, which helps preserve sharpness and detail in the final output.
  • The driving video frames can still be resized to a lower resolution like 480x480 to save processing time, as they only provide motion information.
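
As a rough illustration, the loading and upscaling step might look like the following fragment of ComfyUI's API-format workflow JSON, written here as a Python dict. The node ids, file name, and frame cap are placeholders, and exact input names can vary between VideoHelperSuite and KJNodes versions, so treat this as a sketch rather than a drop-in graph.

  # Sketch of the source-video loading and upscaling nodes in API-format JSON.
  # Node ids, the file name, and the frame cap are placeholders.
  source_nodes = {
      "1": {  # load the source video instead of a static image
          "class_type": "VHS_LoadVideo",
          "inputs": {
              "video": "source.mp4",      # placeholder path
              "frame_load_cap": 120,      # limit how many frames are loaded
              "skip_first_frames": 0,
              "select_every_nth": 1,
          },
      },
      "2": {  # upscale the source frames for a sharper final result
          "class_type": "ImageResizeKJ",
          "inputs": {
              "image": ["1", 0],          # IMAGE output of VHS_LoadVideo
              "width": 1024,
              "height": 1024,
              "upscale_method": "lanczos",
          },
      },
  }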

2. Use "source_video_smoothed" relative motion mode for smoother LivePortrait Vid2Vid results

  • The "LivePortraitProcess" node has a "relative_motion_mode" parameter that controls how motion is transferred from the driving video to the source. For Vid2Vid, it's recommended to use the "source_video_smoothed" mode.
  • In this mode, the LivePortrait motion is smoothed over time based on the input video, which helps create more temporally coherent and stable results. This is especially important for videos, where sudden jumps or jitter in motion can be more noticeable than in single images.
  • Other motion modes like "relative" or "single_frame" may work better for Img2Vid, but "source_video_smoothed" is typically the best choice for Vid2Vid (see the sketch after this list).
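
For reference, the relevant node entry might look like the sketch below. Only the parameter discussed here is shown: "LivePortraitProcess" takes several other inputs (pipeline, cropper, retargeting settings) whose names vary between versions of kijai's node, and the upstream node ids and the "source_image"/"driving_images" input names are assumptions.

  # Sketch of the LivePortraitProcess entry with the Vid2Vid motion mode.
  # Upstream node ids ("2" for source frames, "3" for driving frames) and
  # input names are hypothetical; other required inputs are omitted.
  liveportrait_node = {
      "7": {
          "class_type": "LivePortraitProcess",
          "inputs": {
              "source_image": ["2", 0],     # upscaled source-video frames
              "driving_images": ["3", 0],   # lower-resolution driving frames
              "relative_motion_mode": "source_video_smoothed",  # smooths motion over time
              # ... pipeline, crop, and retargeting inputs omitted ...
          },
      },
  }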

3. Connect source video FPS and audio to "VHS_VideoCombine" to maintain audio sync for LivePortrait Vid2Vid

  • When creating the final output video with the "VHS_VideoCombine" node, it's important to maintain audio synchronization with the video frames. This involves two key connections:
  • First, connect the source video's audio to the "audio" input of "VHS_VideoCombine" using a "Reroute" node, so the original audio is carried into the output video.
  • Second, connect the source video's frame rate (FPS) to the "frame_rate" input of "VHS_VideoCombine". You can get the FPS from the "VHS_VideoInfo" node, which extracts metadata from the source video, so the output video matches the timing of the source.
  • By handling the audio and frame rate carefully, you can create a LivePortrait Vid2Vid output with proper synchronization and timing, which is crucial for a realistic, watchable result. The sketch after this list shows both connections.
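
Here is a minimal sketch of those two connections in the same API-format style. Node "1" is assumed to be the "VHS_LoadVideo" source loader from step 1 and "7" the LivePortrait output; the output slot indices follow recent VideoHelperSuite builds and may differ in yours.

  # Sketch of wiring the source audio and FPS into the output node.
  # VHS_LoadVideo outputs are assumed to be 0=images, 2=audio, 3=video_info.
  output_nodes = {
      "8": {  # extract metadata (including FPS) from the source video
          "class_type": "VHS_VideoInfo",
          "inputs": {"video_info": ["1", 3]},
      },
      "9": {  # combine animated frames with the original audio and timing
          "class_type": "VHS_VideoCombine",
          "inputs": {
              "images": ["7", 0],       # frames from LivePortraitProcess
              "audio": ["1", 2],        # original audio, rerouted from the loader
              "frame_rate": ["8", 0],   # source_fps (assumed slot 0) from VHS_VideoInfo
              "format": "video/h264-mp4",
              "filename_prefix": "LivePortrait_Vid2Vid",
          },
      },
  }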

Want More ComfyUI Workflows?

Advanced Live Portrait | Parameter Control

Use customizable parameters to control every feature, from eye blinks to head movements, for natural results.

LivePortrait | Animate Portraits | Img2Vid

Animate portraits with facial expressions and motion using a single image and reference video.

MimicMotion | Human Motion Video Generation

Generate high-quality human motion videos with MimicMotion, using a reference image and motion sequence.

Stable Diffusion 3.5

Stable Diffusion 3.5 (SD3.5) for high-quality, diverse image generation.

IPAdapter Plus (V2) Attention Mask | Image to Video

Leverage the IPAdapter Plus Attention Mask for precise control of the image generation process.

Face to Many | 3D, Emoji, Pixel, Clay, Toy, Video game

Utilizes LoRA models, ControlNet, and InstantID for advanced face-to-many transformations.

ComfyUI Img2Vid | Morphing Animation

Morphing animation with AnimateDiff LCM, IPAdapter, QRCode ControlNet, and Custom Mask modules.

Anyline + MistoLine | High-Quality Sketch to Image

MistoLine adapts to various line art inputs, effortlessly generating high-quality images from sketches.
