Identify and isolate objects in images based on labels using YOLO object detection models.
The DetectByLabel node is designed to identify and isolate specific objects within an image based on their labels using YOLO (You Only Look Once) object detection models. This node allows you to specify target labels, and it will filter the detected objects to include only those that match the specified labels. This is particularly useful for tasks where you need to focus on certain objects within an image, such as detecting specific animals, vehicles, or other predefined categories. By leveraging the power of YOLO models, DetectByLabel provides a robust and efficient way to perform object detection with high accuracy and speed, making it an essential tool for AI artists working on projects that require precise object identification and segmentation.
This parameter represents the input image on which object detection will be performed. The image should be in a format compatible with the YOLO model, typically a tensor representation of the image.
This parameter sets the confidence threshold for object detection. Only objects detected with a confidence score equal to or higher than this threshold will be considered. The value ranges from 0.0 to 1.0, with a default value of 0.1. Adjusting this value can help filter out less certain detections, improving the accuracy of the results.
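The thresholding step can be sketched as follows; this is a minimal illustration of how a confidence cutoff filters detections, not the node's actual implementation, and the detection list and tuple layout are hypothetical:

```python
# Hypothetical detections: (label, confidence, (x, y, width, height)) tuples.
detections = [
    ("dog", 0.92, (34, 50, 120, 80)),
    ("cat", 0.08, (200, 40, 60, 60)),   # below threshold, dropped
    ("car", 0.55, (10, 10, 300, 150)),
]

confidence = 0.1  # the node's default threshold

# Keep only detections at or above the confidence threshold.
kept = [d for d in detections if d[1] >= confidence]
print([d[0] for d in kept])  # ['dog', 'car']
```

Raising the threshold trades recall for precision: a higher value drops uncertain detections but may also discard true objects the model was unsure about.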
This parameter specifies the YOLO model to be used for object detection. The model should be a .pt file located in the specified directory. The choice of model can significantly impact the detection performance and accuracy, so selecting an appropriate model for your task is crucial.
This parameter determines the type of YOLO model to be used. The available options are "YOLO-World" and "YOLOv8". Each type corresponds to different versions or configurations of the YOLO model, which may vary in terms of performance and capabilities.
This optional parameter allows you to specify the labels of the objects you want to detect. It should be a comma-separated string of labels. Only objects matching these labels will be included in the output. This is useful for focusing on specific categories of objects within an image.
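The label filtering described above can be sketched like this; the helper function and the detection dictionaries are hypothetical and only illustrate parsing a comma-separated target_label string:

```python
def filter_by_labels(detections, target_label):
    """Keep only detections whose label appears in the comma-separated
    target_label string (illustrative helper, not the node's API)."""
    targets = {t.strip() for t in target_label.split(",") if t.strip()}
    return [d for d in detections if d["label"] in targets]

detections = [
    {"label": "dog", "conf": 0.9},
    {"label": "car", "conf": 0.7},
    {"label": "cat", "conf": 0.8},
]

# Only the dog and cat detections survive the filter.
print(filter_by_labels(detections, "cat, dog"))
```

Note that stray whitespace around each label is stripped, so "cat, dog" and "cat,dog" behave the same in this sketch.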
This optional parameter enables or disables debug mode. When set to "on", additional debug information will be printed, which can be helpful for troubleshooting and understanding the detection process. The available options are "on" and "off".
This output parameter provides the masks for the detected objects. Each mask is a binary image where the detected object is highlighted. These masks can be used for further image processing tasks such as segmentation or overlaying on the original image.
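Using a binary mask to isolate an object can be shown with a tiny grayscale example; the node operates on image tensors, but the element-wise idea is the same (nested lists are used here purely for illustration):

```python
# A 2x3 grayscale "image" and a binary mask of the same shape.
image = [
    [10, 20, 30],
    [40, 50, 60],
]
mask = [
    [1, 0, 1],
    [0, 1, 0],
]

# Multiply pixel-wise: masked-out pixels become 0, object pixels survive.
isolated = [
    [px * m for px, m in zip(img_row, mask_row)]
    for img_row, mask_row in zip(image, mask)
]
print(isolated)  # [[10, 0, 30], [0, 50, 0]]
```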
This output parameter returns the labels of the detected objects. These labels correspond to the categories specified in the target_label parameter and provide a textual representation of the detected objects.
This output parameter contains the bounding boxes for the detected objects. Each bounding box is represented by a tuple of coordinates (x, y, width, height) that define the location and size of the detected object within the image.
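Downstream tools often expect corner coordinates rather than (x, y, width, height), so a common first step is a conversion like the following; the helper name is hypothetical:

```python
def xywh_to_xyxy(box):
    """Convert an (x, y, width, height) box to corner coordinates
    (x1, y1, x2, y2) -- a typical step before cropping or drawing."""
    x, y, w, h = box
    return (x, y, x + w, y + h)

print(xywh_to_xyxy((34, 50, 120, 80)))  # (34, 50, 154, 130)
```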
This output parameter returns the original input image with the detected objects highlighted. This can be useful for visualizing the results of the object detection process and verifying the accuracy of the detections.
- Adjust the confidence parameter to filter out less certain detections, which can help improve the accuracy of the results.
- Use the target_label parameter to focus on specific objects within the image, which can be particularly useful for tasks that require detecting only certain categories of objects.
- Enable debug mode if you encounter issues or need to understand the detection process better, as it provides additional information that can be helpful for troubleshooting.
- Set the debug parameter to "on" to enable debug mode and obtain additional information for troubleshooting.

© Copyright 2024 RunComfy. All Rights Reserved.