Estimate depth from images using the MiDaS model for 3D reconstruction, AR, and image editing, with CPU/GPU support.
The MiDaS Depth Approximation node estimates depth information from a given image using the MiDaS monocular depth estimation model. It leverages deep learning to generate a depth map, which represents the distance of objects from the camera. Depth maps can be used in applications such as 3D reconstruction, augmented reality, and image editing; by converting 2D images into depth maps, you can add a new dimension to your creative projects, enabling more realistic and immersive experiences. The node supports several MiDaS models and can run on either the CPU or the GPU, providing flexibility and efficiency based on your hardware capabilities.
This parameter takes the input image for which the depth approximation is to be performed. The image should be in a tensor format compatible with the node's processing pipeline.
The use_cpu parameter determines whether computation runs on the CPU or the GPU: set it to 'true' to use the CPU and 'false' to use the GPU. Using the GPU can significantly speed up processing when a compatible GPU is available.
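The fallback logic implied here — prefer the GPU unless the CPU is explicitly requested or no CUDA device exists — can be sketched as a small helper. This is an illustrative function, not the node's actual implementation; the name `pick_device` and its arguments are assumptions.

```python
def pick_device(use_cpu: bool, cuda_available: bool) -> str:
    """Resolve a use_cpu flag to a device string.

    Falls back to 'cpu' when the CPU is explicitly requested or when no
    compatible CUDA device is available (hypothetical helper).
    """
    if use_cpu or not cuda_available:
        return "cpu"
    return "cuda"
```

In a real PyTorch setting, `cuda_available` would come from `torch.cuda.is_available()`.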
The midas_type parameter specifies which MiDaS model to use for depth approximation. Options include 'DPT_Large', 'DPT_Hybrid', and other supported MiDaS models; the choice of model affects both the accuracy and the speed of the depth estimation.
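Each model type also determines which preprocessing transform should be applied to the input image. In the upstream intel-isl/MiDaS torch.hub repository, DPT variants pair with `dpt_transform` and the smaller models with `small_transform`; the helper below is a hypothetical sketch of that pairing, not the node's code.

```python
def transform_name_for(midas_type: str) -> str:
    """Map a MiDaS model type to its matching preprocessing transform name.

    DPT variants use the DPT transform; other models (e.g. 'MiDaS_small')
    use the small transform. Illustrative helper based on the upstream
    intel-isl/MiDaS hub conventions.
    """
    if midas_type in ("DPT_Large", "DPT_Hybrid"):
        return "dpt_transform"
    return "small_transform"
```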
The invert_depth parameter indicates whether the depth map should be inverted. Set it to 'true' to invert the depth values, making closer objects appear darker and farther objects lighter, which can be useful for specific visual effects or further processing.
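Inversion amounts to normalizing the depth values and flipping them. A minimal sketch, operating on a flat list of values rather than the tensors the node actually processes (the polarity of "near" vs "far" depends on the model's output convention):

```python
def invert_depth_values(depth):
    """Normalize depth values to [0, 1], then invert them.

    After inversion, the smallest input value maps to 1.0 and the largest
    to 0.0. Illustrative sketch; the real node works on image tensors.
    """
    lo, hi = min(depth), max(depth)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat depth map
    return [1.0 - (d - lo) / span for d in depth]
```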
This optional parameter allows you to provide a pre-loaded MiDaS model and its corresponding transform. If not provided, the node will download and load the specified MiDaS model automatically.
This output parameter provides the resulting depth map(s) as a tensor. The depth map represents the estimated distance of objects in the input image from the camera, with pixel values indicating relative depth.
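The overall flow — load a MiDaS model, preprocess the image, run inference, and resize the prediction back to the input resolution — can be sketched with the public intel-isl/MiDaS torch.hub entry points. This is an illustrative standalone sketch, not the node's actual code; `estimate_depth` is a hypothetical helper, and the first call will download model weights.

```python
def estimate_depth(image_rgb, midas_type="DPT_Large", use_cpu=False):
    """Sketch: return a depth map of shape (H, W) for an RGB numpy image."""
    import torch  # imported here so the sketch reads without torch installed

    device = torch.device(
        "cpu" if use_cpu or not torch.cuda.is_available() else "cuda"
    )

    # Public intel-isl/MiDaS hub entry points (weights download on first use).
    midas = torch.hub.load("intel-isl/MiDaS", midas_type).to(device).eval()
    transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
    transform = (
        transforms.dpt_transform if "DPT" in midas_type
        else transforms.small_transform
    )

    with torch.no_grad():
        batch = transform(image_rgb).to(device)
        prediction = midas(batch)
        # Resize the prediction back to the original image resolution.
        prediction = torch.nn.functional.interpolate(
            prediction.unsqueeze(1),
            size=image_rgb.shape[:2],
            mode="bicubic",
            align_corners=False,
        ).squeeze()

    return prediction.cpu().numpy()
```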
Set the use_cpu parameter to 'false' if a compatible GPU is available, as GPU processing is significantly faster.
Experiment with different MiDaS models (midas_type) to find the one that best suits your specific application and provides the desired balance between accuracy and performance.
Use the invert_depth parameter to adjust the visual representation of the depth map according to your needs, especially if you plan to use the depth map for further image processing or effects.
Ensure the midas_type parameter is set to a valid model name, and that your internet connection is stable so the model can be downloaded if it is not already available locally.
If GPU processing is requested (use_cpu set to 'false') but no compatible CUDA device is found, set the use_cpu parameter to 'true' to use the CPU for processing.
© Copyright 2024 RunComfy. All Rights Reserved.