Detect and extract face bounding boxes in images using face analysis models for accurate face location and extraction automation.
The FaceBoundingBox node is designed to detect and extract the bounding boxes of faces within an image. This node leverages face analysis models to identify the coordinates of faces and provides the cropped face images along with their respective bounding box dimensions. The primary benefit of this node is its ability to accurately locate faces in an image, which is particularly useful for tasks such as face recognition, facial feature analysis, and image editing. By using this node, you can automate the process of face detection and extraction, saving time and ensuring consistency in your workflow.
The analysis models input supplies the face analysis models used to detect faces in the image. These models are responsible for identifying the coordinates of the faces and extracting the bounding boxes, so the accuracy and performance of face detection depend on their quality and configuration.
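As a concrete illustration of what such a model typically returns, the sketch below uses the widely available insightface FaceAnalysis interface; the models actually wired into this input may use a different backend, and the file name is only a placeholder.

```python
# Illustrative only: an insightface-style detector returning face bounding boxes.
# The backend used by this node's analysis models input may differ.
import cv2
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l")        # load a detection/analysis model pack
app.prepare(ctx_id=0, det_size=(640, 640))  # ctx_id=0 selects the first GPU, -1 for CPU

img = cv2.imread("portrait.jpg")            # placeholder path; BGR numpy array
faces = app.get(img)                        # one entry per detected face
for face in faces:
    x1, y1, x2, y2 = face.bbox.astype(int)  # bounding box corners
    print(f"face at x={x1}, y={y1}, w={x2 - x1}, h={y2 - y1}")
```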
The image parameter is the input image or a list of images in which faces need to be detected. The images should be in a format that can be processed by the face analysis models, typically RGB. The quality and resolution of the input images can impact the accuracy of face detection.
The padding parameter specifies the number of pixels to add around the detected face bounding box. This can be useful to include some context around the face or to ensure that the entire face is captured. The default value is 0, and it can be adjusted based on the desired output.
The padding_percent parameter defines the percentage of the face bounding box dimensions to add as padding. This allows for a proportional increase in the bounding box size, ensuring that the padding scales with the size of the detected face. The default value is 0%, and it can be adjusted to include more or less of the surrounding area.
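The two options can be thought of as a fixed pixel margin plus a margin proportional to the box size. Below is a minimal sketch of that idea; the node's exact formula, rounding, and clamping behavior may differ, and the function name is hypothetical.

```python
def pad_bbox(x, y, w, h, img_w, img_h, padding=0, padding_percent=0):
    """Expand a face bounding box by a fixed pixel margin plus a percentage
    of its own width/height, clamped to the image borders."""
    pad_x = padding + int(w * padding_percent / 100)
    pad_y = padding + int(h * padding_percent / 100)
    x0 = max(0, x - pad_x)
    y0 = max(0, y - pad_y)
    x1 = min(img_w, x + w + pad_x)
    y1 = min(img_h, y + h + pad_y)
    return x0, y0, x1 - x0, y1 - y0

# Example: a 100x120 face box padded by 10 px plus 25% of its own size
print(pad_bbox(200, 150, 100, 120, 1024, 768, padding=10, padding_percent=25))
```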
The index parameter allows you to specify which detected face to return when multiple faces are detected in an image. By default, it is set to -1, which means all detected faces will be returned. If set to a specific index, only the face at that index will be returned. This can be useful when you are interested in a particular face in the image.
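In other words, -1 behaves like "keep everything," while any non-negative value picks a single detection. A small sketch of that selection logic (illustrative only, not the node's internal code):

```python
def select_faces(faces, index=-1):
    """Return all detected faces for index == -1, otherwise just the one at `index`."""
    if index == -1:
        return faces
    return [faces[index]]  # raises IndexError if index >= len(faces)

boxes = [(30, 40, 80, 80), (200, 60, 90, 95)]  # two detected faces as (x, y, w, h)
print(select_faces(boxes))            # both faces
print(select_faces(boxes, index=1))   # only the second face
```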
The out_img parameter is a list of cropped face images extracted from the input image. Each image corresponds to a detected face and includes any specified padding. These images can be used for further analysis or processing.
The out_x parameter is a list of the x-coordinates of the top-left corner of each detected face bounding box. These coordinates indicate the horizontal position of the faces within the input image.
The out_y parameter is a list of the y-coordinates of the top-left corner of each detected face bounding box. These coordinates indicate the vertical position of the faces within the input image.
The out_w parameter is a list of the widths of each detected face bounding box. These values represent the horizontal dimensions of the bounding boxes.
The out_h parameter is a list of the heights of each detected face bounding box. These values represent the vertical dimensions of the bounding boxes.
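The four coordinate lists line up element-wise with out_img, so each cropped face can be reproduced by slicing the original image. A sketch of that relationship, assuming images are stored as height-by-width arrays (as numpy and ComfyUI tensors are); the sample values are hypothetical:

```python
import numpy as np

image = np.zeros((768, 1024, 3), dtype=np.uint8)  # placeholder input image (H, W, C)
out_x, out_y, out_w, out_h = [120, 540], [80, 200], [96, 110], [104, 118]

# Each crop corresponds to one detected face; this mirrors what out_img contains.
out_img = [image[y:y + h, x:x + w] for x, y, w, h in zip(out_x, out_y, out_w, out_h)]
print([crop.shape for crop in out_img])            # [(104, 96, 3), (118, 110, 3)]
```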
Use the padding and padding_percent parameters to include more context around the detected faces if needed.

Use the index parameter to focus on a specific face when multiple faces are detected in an image.

An error can occur if the index parameter is greater than the number of detected faces. Ensure the index parameter is within the valid range; if you want to select a specific face, make sure the index corresponds to an existing face.
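One way to avoid the out-of-range situation is to check the requested index against the number of detections before using it. A tiny hypothetical guard, falling back to -1 (all faces) when the index does not exist:

```python
def safe_index(num_faces, index):
    """Return a usable index, or -1 (all faces) when the requested one does not exist."""
    if index < 0 or index >= num_faces:
        return -1
    return index

print(safe_index(num_faces=2, index=5))  # falls back to -1 (all faces)
print(safe_index(num_faces=2, index=1))  # 1 is valid, kept as-is
```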