Evaluate image similarity using YOLO object detection for detailed comparison in AI applications.
The YOLOSimilarityCompare node is designed to evaluate the similarity between two images by analyzing the objects detected within them. It leverages the YOLO (You Only Look Once) object detection framework to identify and compare various attributes of the detected objects, such as class overlap, spatial arrangement, confidence levels, size, and relational distances. This node is particularly beneficial for tasks that require a nuanced understanding of how similar two images are in terms of their content and structure. By providing a comprehensive similarity score, it helps in applications like image retrieval, content-based image comparison, and automated image analysis. The node's ability to break down the similarity into different weighted components allows for a customizable and detailed comparison, making it a powerful tool for AI artists and developers who need to assess image similarity beyond mere pixel comparison.
This parameter represents the detection results from the first image, obtained using the YOLO object detection framework. It includes information about the detected objects, such as their classes, confidence scores, and bounding box coordinates. This input is crucial as it forms the basis for comparison with the second image.
Similar to ULTRALYTICS_RESULTS1, this parameter contains the detection results from the second image. It is used in conjunction with the first image's results to compute the similarity score, allowing for a detailed comparison of the objects detected in both images.
This parameter determines the importance of class similarity in the overall similarity score. It ranges from 0.0 to 1.0, with a default value of 0.3. A higher value places more emphasis on the types of objects detected in both images, making it crucial for tasks where object type matching is significant.
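To make the class-overlap idea concrete, here is a minimal sketch (not the node's actual code) that scores class similarity as the Jaccard overlap of the detected class-name sets:

```python
def class_similarity(classes1, classes2):
    """Jaccard overlap between two sets of detected class names (illustrative)."""
    s1, s2 = set(classes1), set(classes2)
    if not s1 and not s2:
        return 1.0  # no detections in either image: treat as a match
    if not s1 or not s2:
        return 0.0
    return len(s1 & s2) / len(s1 | s2)

# One shared class ("dog") out of two distinct classes overall -> 0.5
score = class_similarity(["dog", "person"], ["dog"])
```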
This parameter controls the weight of spatial similarity, which assesses how similarly objects are positioned in both images. It ranges from 0.0 to 1.0, with a default value of 0.2. Adjusting this weight is important for applications where the relative positioning of objects is a key factor.
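One plausible way to score positional similarity (an assumption for illustration; the node may compute it differently) is to compare normalized bounding-box centers:

```python
import math

def spatial_similarity(box1, box2, img_w, img_h):
    """1.0 when the two boxes share a center, falling toward 0.0 at opposite corners.

    Boxes are [x1, y1, x2, y2] in pixels; centers are normalized to [0, 1].
    """
    cx1, cy1 = (box1[0] + box1[2]) / (2 * img_w), (box1[1] + box1[3]) / (2 * img_h)
    cx2, cy2 = (box2[0] + box2[2]) / (2 * img_w), (box2[1] + box2[3]) / (2 * img_h)
    dist = math.hypot(cx1 - cx2, cy1 - cy2)  # maximum possible distance is sqrt(2)
    return 1.0 - dist / math.sqrt(2)
```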
This parameter influences the weight of confidence similarity, which compares the confidence levels of object detections between the two images. It ranges from 0.0 to 1.0, with a default value of 0.2. This is useful for scenarios where the reliability of object detection is a priority.
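Confidence similarity can be illustrated as closeness of the mean detection confidences (one possible formulation, not necessarily the node's):

```python
def confidence_similarity(confs1, confs2):
    """1.0 when mean detection confidences match, falling off linearly with the gap."""
    if not confs1 or not confs2:
        return 0.0
    mean1 = sum(confs1) / len(confs1)
    mean2 = sum(confs2) / len(confs2)
    return 1.0 - abs(mean1 - mean2)
```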
This parameter sets the weight for size similarity, which evaluates the size of detected objects in both images. It ranges from 0.0 to 1.0, with a default value of 0.15. This is particularly relevant for tasks where the scale of objects is an important consideration.
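Size similarity could be sketched as the ratio of bounding-box areas (again an illustration of the idea, not the node's exact formula):

```python
def size_similarity(box1, box2):
    """Ratio of the smaller box area to the larger (1.0 = identical size)."""
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    if max(area1, area2) == 0:
        return 1.0  # both boxes degenerate
    return min(area1, area2) / max(area1, area2)
```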
This parameter determines the weight of relationship similarity, which compares the distances between objects in both images. It ranges from 0.0 to 1.0, with a default value of 0.15. This is essential for applications where the spatial relationships between objects are critical.
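As a hedged sketch of what comparing inter-object distances might look like, one could compare the mean pairwise distance between object centers in each image (the actual metric used by the node is not specified here):

```python
import math

def relationship_similarity(centers1, centers2):
    """Compare mean pairwise center-to-center distances between the two images."""
    def mean_pairwise(centers):
        pairs = [(a, b) for i, a in enumerate(centers) for b in centers[i + 1:]]
        if not pairs:
            return 0.0
        return sum(math.hypot(a[0] - b[0], a[1] - b[1]) for a, b in pairs) / len(pairs)

    d1, d2 = mean_pairwise(centers1), mean_pairwise(centers2)
    if max(d1, d2) == 0:
        return 1.0  # fewer than two objects in both images
    return min(d1, d2) / max(d1, d2)
```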
This parameter sets the threshold for deciding whether the similarity score counts as a match. It ranges from 0.0 to 1.0, with a default value of 0.5. A higher threshold means that only closely matching image pairs will pass, which is useful for filtering out less relevant comparisons.
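Taken together, the five weights behave roughly like the sketch below (whether the node normalizes by the weight sum is an assumption; the default values come from the parameter descriptions above):

```python
DEFAULT_WEIGHTS = {
    "class": 0.3, "spatial": 0.2, "confidence": 0.2,
    "size": 0.15, "relationship": 0.15,  # defaults sum to 1.0
}

def combined_similarity(components, weights=DEFAULT_WEIGHTS):
    """Weighted average of per-component similarity scores, each in [0.0, 1.0]."""
    total = sum(weights.values())
    if total == 0:
        return 0.0
    return sum(weights[k] * components.get(k, 0.0) for k in weights) / total
```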
This output provides a floating-point value representing the overall similarity score between the two images. It is a weighted combination of the different similarity components, offering a comprehensive measure of how alike the images are.
This boolean output indicates whether the computed similarity score meets or exceeds the specified threshold. It helps in quickly determining if the images are considered similar based on the defined criteria.
This string output offers a detailed explanation of the similarity score, including the individual contributions of each similarity component and the detected classes in both images. It provides valuable insights into the comparison process, making it easier to understand the factors influencing the similarity score.
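The three outputs fit together roughly as in this sketch (the function name, output names, and explanation format are illustrative, not taken from the node):

```python
def package_outputs(score, threshold=0.5, components=None):
    """Return (similarity score, above-threshold flag, human-readable explanation)."""
    is_similar = score >= threshold
    detail = ", ".join(f"{name}={value:.2f}" for name, value in (components or {}).items())
    explanation = f"similarity={score:.2f}, threshold={threshold:.2f}: {detail}"
    return score, is_similar, explanation
```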
Adjust class_weight to prioritize object type matching when comparing images with similar content but different object arrangements.
Increase spatial_weight for tasks where the relative positioning of objects is crucial, such as in layout analysis or scene understanding.
Increase confidence_weight when the reliability of object detection is important, ensuring that only high-confidence detections are emphasized in the similarity score.
Raise the threshold to filter out less relevant comparisons, especially in applications where only highly similar images are of interest.
An error occurs when ULTRALYTICS_RESULTS1 or ULTRALYTICS_RESULTS2 is not in the expected format.
An error also occurs when the threshold parameter is set outside the allowable range of 0.0 to 1.0. Adjust the threshold value to be within the specified range to ensure proper functionality.
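The threshold range error described above could be guarded against with a simple check (a sketch; the node's actual error message may differ):

```python
def validate_threshold(threshold):
    """Reject threshold values outside the allowed [0.0, 1.0] range."""
    if not 0.0 <= threshold <= 1.0:
        raise ValueError(f"threshold must be between 0.0 and 1.0, got {threshold}")
    return threshold
```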