IC-Light is an AI-based image editing tool that integrates with Stable Diffusion models to perform localized edits on generated images. It works by encoding the image into a latent space representation, applying edits to specific regions, and then decoding the modified latent representation back into an image. This approach allows for precise control over the editing process while preserving the overall style and coherence of the original image.
Two models have been released so far: a text-conditioned relighting model and a background-conditioned model. Both take foreground images as input.
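For reference, these two modes correspond to separate checkpoints in the upstream IC-Light release (lllyasviel's repository on Hugging Face). The filenames below reflect that release and may change, so verify them against the current repo before downloading:

```python
# Checkpoint names from the upstream IC-Light release (lllyasviel/ic-light
# on Hugging Face); confirm against the current repository before use.
IC_LIGHT_MODELS = {
    # Relights the foreground according to a text prompt.
    "text_conditioned": "iclight_sd15_fc.safetensors",
    # Relights the foreground to match a provided background image.
    "background_conditioned": "iclight_sd15_fbc.safetensors",
}
```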
Under the hood, IC-Light relies on Stable Diffusion components to encode and decode images. The process can be broken down into the following steps:
2.1. Encoding: The input image is passed through the Stable Diffusion VAE (Variational Autoencoder) to obtain a compressed latent-space representation.

2.2. Editing: The desired edits are applied to specific regions of the latent representation. This is typically done by concatenating the original latent with a mask indicating the areas to be modified, along with the corresponding edit prompts.

2.3. Decoding: The modified latent representation is passed through the Stable Diffusion decoder to reconstruct the edited image.

By operating in the latent space, IC-Light can make localized edits while maintaining the overall coherence and style of the image.
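To make the encode → edit → decode loop concrete, here is a minimal sketch using the diffusers `AutoencoderKL` as a stand-in for the Stable Diffusion VAE. The model ID and the "edit" itself (a simple masked brightening) are placeholders, not IC-Light internals; in the real pipeline, the edit step is a full diffusion pass conditioned on the concatenated latents:

```python
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL

# Stand-in VAE; IC-Light uses the VAE of whatever SD checkpoint you load.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.eval()

@torch.no_grad()
def edit_in_latent_space(image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """image: (1, 3, H, W) in [-1, 1]; mask: (1, 1, H, W) with 1 = edit region."""
    # 1. Encoding: compress the image into the VAE latent space (H/8 x W/8).
    latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor

    # 2. Editing: modify only where the downsampled mask is active.
    #    (Placeholder edit -- a real pipeline runs a diffusion model
    #    conditioned on the concatenated latents here.)
    latent_mask = F.interpolate(mask, size=latents.shape[-2:], mode="nearest")
    latents = latents + 0.5 * latent_mask

    # 3. Decoding: reconstruct the edited image from the modified latents.
    decoded = vae.decode(latents / vae.config.scaling_factor).sample
    return decoded.clamp(-1, 1)
```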
The main node you'll be working with is the "IC-Light Apply" node, which handles the entire process of encoding, editing, and decoding your image.
The "IC-Light Apply" node requires three main inputs:
To create the c_concat input:
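The concrete steps are not reproduced on this page, but based on the description in step 2.2 above (a latent concatenated channel-wise with an edit mask), a c_concat tensor could plausibly be assembled as follows. The shapes and channel layout here are assumptions for illustration, not the node's exact specification:

```python
import torch
import torch.nn.functional as F

# Hypothetical helper; the actual node may expect a different layout.
def make_c_concat(fg_latent: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """fg_latent: (1, 4, h, w) VAE latent of the foreground image.
    mask: (1, 1, H, W) pixel-space mask (1 = region to modify)."""
    # Downsample the mask to the latent grid before concatenating.
    latent_mask = F.interpolate(mask, size=fg_latent.shape[-2:], mode="nearest")
    # Channel-wise concatenation: 4 latent channels + 1 mask channel.
    return torch.cat([fg_latent, latent_mask], dim=1)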
After processing your inputs, the "IC-Light Apply" node produces a single output: the patched model.
To generate your final edited image, simply connect the output model to the appropriate nodes in your ComfyUI workflow, such as the KSampler and VAEDecode nodes.