Generate depth maps from images using the LeReS model for AI artists, with an optional boost mode for more detailed depth estimation.
The LeReS-DepthMapPreprocessor node generates depth maps from input images using the LeReS model. It is particularly useful for AI artists who want to add depth information to their images for applications such as 3D rendering, augmented reality, and more. The LeReS model is known for its accuracy and its ability to handle complex scenes, making it a valuable tool for enhancing the visual depth of your projects. The node also offers an optional boost mode, which trades additional computation for more detailed depth estimation.
The rm_nearest parameter controls the removal of the nearest objects in the depth map. It is a float value ranging from 0.0 to 100.0, with a default of 0.0. Increasing this value removes objects that are closer to the camera, which can be useful for focusing on background elements or reducing clutter in the depth map.
The rm_background parameter controls the removal of background objects in the depth map. It is a float value ranging from 0.0 to 100.0, with a default of 0.0. Increasing this value removes objects that are farther from the camera, which can help isolate foreground elements or reduce background noise in the depth map.
The boost parameter enables or disables boost mode, which performs a more detailed depth estimation. It can be set to either "enable" or "disable", with "disable" as the default. Enabling boost mode can produce more accurate and detailed depth maps, but may require more computational resources.
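As an illustration only (not the node's actual implementation), the effect of rm_nearest and rm_background can be thought of as masking out the nearest and farthest percentages of the depth range. The NumPy sketch below is a hypothetical approximation of that behavior; the function name and the masking strategy are assumptions for demonstration.

```python
import numpy as np

def remove_near_far(depth, rm_nearest=0.0, rm_background=0.0):
    # Hypothetical sketch: treat rm_nearest / rm_background (0.0-100.0)
    # as percentages of the depth range to mask out. `depth` is a float
    # array where larger values mean closer to the camera.
    d_min, d_max = float(depth.min()), float(depth.max())
    span = max(d_max - d_min, 1e-8)
    near_cutoff = d_max - span * (rm_nearest / 100.0)    # values above this count as "too near"
    far_cutoff = d_min + span * (rm_background / 100.0)  # values below this count as "too far"
    out = depth.copy()
    out[depth > near_cutoff] = d_min  # removed near objects fall back to the far value
    out[depth < far_cutoff] = d_min   # removed background is flattened to the far value
    return out

# Example: drop the closest 10% and the farthest 20% of the depth range.
depth = np.random.rand(512, 512).astype(np.float32)
filtered = remove_near_far(depth, rm_nearest=10.0, rm_background=20.0)
```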
The output of this node is an image that represents the depth map of the input image. The depth map is a single-channel image where the intensity of each pixel corresponds to the distance of that point from the camera. Brighter pixels represent points that are closer to the camera, while darker pixels represent points that are farther away. This depth information can be used in various applications such as 3D modeling, augmented reality, and more.
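For reference, here is a minimal sketch of converting a raw depth array into the brighter-is-closer grayscale image described above. It assumes a NumPy array whose values already increase toward the camera; the helper name is hypothetical and not part of the node.

```python
import numpy as np
from PIL import Image

def depth_to_image(depth: np.ndarray) -> Image.Image:
    # Normalize a float depth array to an 8-bit grayscale image where
    # brighter pixels are closer to the camera (assumes the raw values
    # already increase toward the camera).
    d = depth.astype(np.float32)
    d = (d - d.min()) / max(float(d.max() - d.min()), 1e-8)
    return Image.fromarray((d * 255.0).astype(np.uint8), mode="L")

# Example usage with a placeholder array standing in for a real prediction.
depth = np.random.rand(384, 384).astype(np.float32)
depth_to_image(depth).save("depth_map.png")
```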
- Use the rm_nearest parameter to remove closer objects from the depth map.
- Use the rm_background parameter to remove farther objects from the depth map.
- Enable the boost parameter for more detailed and accurate depth maps, especially in complex scenes, but be aware that this may require more computational resources.
- Values for rm_nearest and rm_background should be between 0.0 and 100.0; see the sketch after this list for a simple input check.
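The small helper below is a hypothetical sketch (not part of the node) that checks the documented constraints before values are wired into a workflow: rm_nearest and rm_background in [0.0, 100.0], and boost set to either "enable" or "disable".

```python
def validate_leres_inputs(rm_nearest: float, rm_background: float, boost: str) -> dict:
    # Hypothetical helper that enforces the documented input ranges.
    for name, value in (("rm_nearest", rm_nearest), ("rm_background", rm_background)):
        if not 0.0 <= value <= 100.0:
            raise ValueError(f"{name} must be between 0.0 and 100.0, got {value}")
    if boost not in ("enable", "disable"):
        raise ValueError(f'boost must be "enable" or "disable", got {boost!r}')
    return {"rm_nearest": rm_nearest, "rm_background": rm_background, "boost": boost}

# Example: detailed depth map with the closest 5% of the depth range removed.
settings = validate_leres_inputs(rm_nearest=5.0, rm_background=0.0, boost="enable")
print(settings)
```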