Automated human part segmentation for AI artists using DeepLabV3+ model with ResNet50 backbone.
The LayerMask: HumanPartsUltra node is designed to generate a mask of human parts within an image, providing a powerful tool for AI artists who need to isolate or manipulate specific human features in their digital artwork. Utilizing the DeepLabV3+ model with a ResNet50 backbone, this node leverages advanced machine learning techniques to accurately identify and segment human parts. The model, originally trained by Keras-io and converted to ONNX format, ensures high precision and efficiency in processing images. This node is particularly beneficial for tasks that require detailed human part segmentation, such as creating composite images, applying effects to specific body parts, or enhancing certain features while leaving others untouched. By automating the segmentation process, it saves time and effort, allowing artists to focus on the creative aspects of their work.
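For readers curious what the underlying inference step looks like, below is a minimal sketch of running a converted DeepLabV3+ segmentation model with onnxruntime outside of ComfyUI. The file name, input resolution, tensor layout, and normalization here are illustrative assumptions, not the node's actual internals.

```python
import numpy as np
import onnxruntime as ort
from PIL import Image

# Load the converted model (the file name is an assumption for illustration).
session = ort.InferenceSession("deeplabv3plus_resnet50_human_parts.onnx")
input_name = session.get_inputs()[0].name

# Resize and normalize the input; the 512x512 size, NHWC layout, and 0-1 range are assumed.
img = Image.open("portrait.png").convert("RGB").resize((512, 512))
x = np.asarray(img, dtype=np.float32)[None, ...] / 255.0

# The model emits per-pixel class scores; argmax gives one human-part label per pixel.
logits = session.run(None, {input_name: x})[0]
part_map = np.argmax(logits, axis=-1)
```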
The original image parameter is the input image that you want to process to extract human parts. This image serves as the base for generating the mask, and its quality and resolution can impact the accuracy of the segmentation. Ensure that the image is clear and well-lit for optimal results.
This parameter specifies the model used for segmentation, which in this case is the DeepLabV3+ with a ResNet50 backbone. It is pre-configured and does not require user modification, but understanding its role can help in appreciating the node's capabilities.
The rotation parameter allows you to specify any rotation applied to the image before processing. This is useful if the image is not oriented correctly and needs adjustment to ensure accurate segmentation. The value is typically in degrees, with 0 being the default for no rotation.
This boolean parameter determines whether the background should be included in the mask. Setting it to False focuses the mask solely on human parts, which is often desired for isolating subjects from their backgrounds.
These parameters are boolean flags that enable or disable the detection of specific human parts, letting you customize which parts of the body are included in the mask. For example, enabling face and hair ensures these features are part of the mask, while disabling glasses excludes them.
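Putting these inputs together, here is a hedged sketch of how the node might be configured in a ComfyUI API-format workflow, written as a Python dict. The exact widget names (background, face, hair, glasses, and so on) are assumptions drawn from the descriptions above; check the node's actual input names in your ComfyUI installation before relying on them.

```python
# Hypothetical API-format entry for the node; input keys are assumed, not verified.
human_parts_node = {
    "class_type": "LayerMask: HumanPartsUltra",
    "inputs": {
        "image": ["10", 0],   # link to an upstream Load Image node's output
        "rotation": 0,        # degrees of rotation applied before segmentation
        "background": False,  # keep the background out of the mask
        "face": True,         # include the face in the mask
        "hair": True,         # include hair in the mask
        "glasses": False,     # exclude glasses from the mask
    },
}
```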
The image output is the processed version of the original input image, potentially with the human parts highlighted or otherwise modified according to the mask. This output is useful for visual verification of the segmentation results.
The mask output is a binary image where the detected human parts are highlighted. This mask can be used for further image processing tasks, such as applying effects only to the masked areas or compositing the masked parts onto a different background. The mask is returned as a tensor, which can be easily integrated into various image processing workflows.
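As an illustration of how the mask output might be consumed downstream, the snippet below composites the masked subject onto a new background. It assumes the usual ComfyUI tensor conventions of images as (B, H, W, C) and masks as (B, H, W) floats in the 0-1 range; this is an assumption for the sketch rather than something stated by the node itself.

```python
import torch

def composite_with_mask(foreground: torch.Tensor,
                        background: torch.Tensor,
                        mask: torch.Tensor) -> torch.Tensor:
    """Blend foreground over background wherever the mask is set (assumed 0-1 float mask)."""
    alpha = mask.unsqueeze(-1)  # (B, H, W) -> (B, H, W, 1) so it broadcasts over the channels
    return foreground * alpha + background * (1.0 - alpha)

# Example usage with tensors produced elsewhere in the workflow:
# result = composite_with_mask(image, new_background, mask)
```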