Converts pose detection data into a list of key points for AI artists using the OpenPose model.
The NegiTools_OpenPoseToPointList node is designed to convert pose detection data into a list of key points, making it easier for AI artists to work with pose information in their projects. This node leverages the OpenPose model to detect various body parts and then processes this data to output a structured list of points. The primary benefit of this node is its ability to simplify the complex data generated by pose detection into a more manageable format, which can be used for further processing or visualization. Whether you are focusing on facial features, hand positions, or the entire body, this node provides a flexible and efficient way to extract and utilize pose data.
This parameter expects an image input that will be used for pose detection. The image should be supplied in a format the node can process, typically a tensor representation.
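If you prepare inputs in Python rather than through the graph editor, the sketch below shows one common way to turn an image file into the float tensor layout ComfyUI nodes generally expect. The function name is illustrative and not part of this node's API.

```python
# Minimal sketch, assuming ComfyUI's usual IMAGE layout:
# float32 values in [0, 1] with shape [batch, height, width, channels].
import numpy as np
import torch
from PIL import Image

def load_image_as_tensor(path: str) -> torch.Tensor:
    img = Image.open(path).convert("RGB")
    arr = np.asarray(img).astype(np.float32) / 255.0  # H x W x C in [0, 1]
    return torch.from_numpy(arr).unsqueeze(0)          # add batch dim -> 1 x H x W x C
```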
This integer parameter sets the resolution at which the pose detection will be performed. Higher resolutions can provide more detailed pose information but may require more computational resources. The default value is 512, with a minimum of 64 and a maximum of 2048, adjustable in steps of 64. This parameter is presented as a slider for ease of use.
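When setting the resolution programmatically instead of via the slider, a small helper like the one below can snap an arbitrary value to the documented range and step; the helper name is hypothetical.

```python
# Minimal sketch: clamp a requested resolution to the slider's constraints
# (minimum 64, maximum 2048, step 64, default 512).
def snap_resolution(value: int = 512, lo: int = 64, hi: int = 2048, step: int = 64) -> int:
    snapped = round(value / step) * step
    return max(lo, min(hi, snapped))

print(snap_resolution(500))     # 512
print(snap_resolution(10_000))  # 2048
```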
This parameter determines the specific type of pose data to extract. It accepts three options: "face", "hand", and "all". Choosing "face" will focus on key points related to facial features, "hand" will extract wrist positions, and "all" will provide a comprehensive list of all detected key points. This allows you to tailor the output to your specific needs.
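If you build workflows from scripts or an API, it can help to validate this option before queueing a run; the check below is a sketch based only on the three documented values.

```python
# Minimal sketch: reject unsupported extraction modes early.
VALID_METHODS = {"face", "hand", "all"}

def check_method(method: str) -> str:
    if method not in VALID_METHODS:
        raise ValueError(f"method must be one of {sorted(VALID_METHODS)}, got {method!r}")
    return method

check_method("face")  # OK
```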
This output is a JSON-formatted string that contains the list of key points detected in the image. The structure of this list varies depending on the selected method, providing either facial key points, hand positions, or a full set of body key points. This output is essential for further processing or visualization tasks.
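Because the point list arrives as a JSON string, downstream Python code can decode it with the standard library. The per-point key names used below ("x" and "y") are assumptions; inspect the string your workflow actually produces before relying on them.

```python
# Minimal sketch: decode the point-list string and walk the detected points.
# The per-point schema ("x"/"y" keys) is an assumption, not confirmed by this page.
import json

def iter_points(point_list_json: str):
    for i, point in enumerate(json.loads(point_list_json)):
        yield i, point.get("x"), point.get("y")

for idx, x, y in iter_points('[{"x": 0.42, "y": 0.17}]'):
    print(f"point {idx}: x={x}, y={y}")
```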
This output is the processed image with the detected poses drawn on it. It serves as a visual confirmation of the detected poses and can be used for debugging or presentation purposes.