
ComfyUI Node: AudioTempoMatch

Class Name

AudioTempoMatch

Category
audio
Author
christian-byrne (Account age: 1364 days)
Extension
audio-separation-nodes-comfyui
Last Updated
7/9/2024
GitHub Stars
0.0K

How to Install audio-separation-nodes-comfyui

Install this extension via the ComfyUI Manager by searching for  audio-separation-nodes-comfyui
  1. Click the Manager button in the main menu
  2. Select the Custom Nodes Manager button
  3. Enter audio-separation-nodes-comfyui in the search bar
After installation, click the  Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


AudioTempoMatch Description

Synchronize audio track tempos for seamless mixing and layering.

AudioTempoMatch synchronizes the tempo of two audio tracks, which is useful when you need to match the beats of different audio sources. The node estimates the tempo of each input track using signal-processing techniques, then adjusts their playback rates toward a common tempo so the tracks can be mixed or layered without rhythmic discrepancies. The result is a pair of rhythmically aligned tracks, making it easier to work with multiple audio sources in a project.
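As a sketch of the core idea, assuming the node averages the two estimated tempos (which matches the output description in this page): each track gets a playback-rate factor that moves it to the shared target tempo. `tempo_match_rates` is a hypothetical helper, not part of the node's API.

```python
def tempo_match_rates(tempo_1_bpm: float, tempo_2_bpm: float) -> tuple[float, float]:
    """Playback-rate factors that bring both tracks to the average tempo.

    A rate above 1.0 speeds a track up; below 1.0 slows it down.
    """
    target_bpm = (tempo_1_bpm + tempo_2_bpm) / 2.0
    return target_bpm / tempo_1_bpm, target_bpm / tempo_2_bpm

# A 120 BPM track and a 100 BPM track both land on 110 BPM:
rate_1, rate_2 = tempo_match_rates(120.0, 100.0)
```

The faster track is slowed down (rate below 1.0) while the slower track is sped up (rate above 1.0), so both meet in the middle.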

AudioTempoMatch Input Parameters:

audio_1

This parameter represents the first audio input that you want to synchronize. It should be provided in the form of an AUDIO object, which includes the waveform and sample rate of the audio. The waveform is a tensor that contains the audio signal data, while the sample rate indicates the number of samples per second. The quality and characteristics of this audio input will directly impact the tempo estimation and synchronization process.

audio_2

This parameter represents the second audio input that you want to synchronize with the first audio. Similar to audio_1, it should be provided as an AUDIO object, containing both the waveform and sample rate. The node will analyze this audio input to estimate its tempo and adjust it accordingly to match the tempo of audio_1. Ensuring that both audio inputs are of good quality will enhance the accuracy of the tempo matching process.
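In ComfyUI, an AUDIO value is a dictionary carrying the waveform tensor and its sample rate. A minimal sketch of building one by hand, using the [channels, frames] layout this node's error messages reference (note that some ComfyUI versions add a leading batch dimension, so check your setup):

```python
import torch

sample_rate = 44100
# 1 second of stereo silence, shaped [channels, frames]
waveform = torch.zeros(2, sample_rate)

audio_1 = {"waveform": waveform, "sample_rate": sample_rate}
```

In a real workflow both inputs typically come from upstream audio-loading or separation nodes rather than being constructed manually.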

AudioTempoMatch Output Parameters:

AUDIO

The first output is an AUDIO object that contains the synchronized version of the first input audio. This output includes the adjusted waveform and retains the original sample rate. The waveform is modified to match the average tempo calculated from both input audios, ensuring that it aligns rhythmically with the second audio.

AUDIO

The second output is an AUDIO object that contains the synchronized version of the second input audio. Similar to the first output, this includes the adjusted waveform and the original sample rate. The waveform is modified to match the average tempo, ensuring that it aligns rhythmically with the first audio. Both outputs are designed to be used together for seamless audio mixing or layering.

AudioTempoMatch Usage Tips:

  • Ensure that both input audio tracks are of good quality and have clear rhythmic elements to improve the accuracy of tempo estimation.
  • Use this node when you need to mix or layer multiple audio tracks in a project, as it helps maintain a consistent tempo across all tracks.
  • Experiment with different audio inputs to understand how the node adjusts the tempo and to achieve the desired rhythmic alignment.

AudioTempoMatch Common Errors and Solutions:

Expected waveform to be [channels, frames], got {waveform.shape}

  • Explanation: This error occurs when the input waveform does not have the expected shape of [channels, frames].
  • Solution: Ensure that the input audio waveform is correctly formatted as a tensor with the shape [channels, frames]. You may need to preprocess your audio data to match this format.
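One way to normalize common input shapes before they reach the node is a small preprocessing helper like the hypothetical `to_channels_frames` below (assuming torch tensors, which the error message's placeholder suggests):

```python
import torch

def to_channels_frames(waveform: torch.Tensor) -> torch.Tensor:
    """Coerce a waveform tensor to the [channels, frames] layout."""
    if waveform.dim() == 3 and waveform.shape[0] == 1:
        waveform = waveform.squeeze(0)    # drop a batch dimension of size 1
    if waveform.dim() == 1:
        waveform = waveform.unsqueeze(0)  # promote mono 1-D audio to [1, frames]
    if waveform.dim() != 2:
        raise ValueError(
            f"Expected waveform to be [channels, frames], got {tuple(waveform.shape)}"
        )
    return waveform
```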

Expected complex STFT, got dtype {stft.dtype}

  • Explanation: This error indicates that the Short-Time Fourier Transform (STFT) did not produce a complex-valued spectrogram as expected.
  • Solution: Verify that the STFT parameters are correctly set and that the input waveform is properly prepared for the STFT process. Ensure that the return_complex parameter is set to True.
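For example, with `torch.stft` (the placeholders in the error message suggest a torch-based pipeline), passing `return_complex=True` yields the complex-valued spectrogram the node expects:

```python
import torch

waveform = torch.randn(1, 4096)
n_fft = 1024
stft = torch.stft(
    waveform,
    n_fft=n_fft,
    hop_length=n_fft // 4,
    window=torch.hann_window(n_fft),
    return_complex=True,  # without this, older torch versions return a real [..., 2] tensor
)
```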

Expected Time: {expected_time}, Stretched Time: {stretched_stft.shape[2]}

  • Explanation: This error occurs when the time dimension of the stretched STFT does not match the expected time after phase vocoding.
  • Solution: Check the rate parameter and ensure that it is correctly calculated. Verify that the STFT and inverse STFT processes are correctly implemented and that the input parameters are consistent.

AudioTempoMatch Related Nodes

Go back to the extension to check out more related nodes.
audio-separation-nodes-comfyui

© Copyright 2024 RunComfy. All Rights Reserved.
