
ComfyUI Node: Yolo Similarity Compare 🕒🅡🅣🅝

Class Name: YOLOSimilarityCompare
Category: ScavengerHunt
Author: RyanOnTheInside (account age: 3947 days)
Extension: Nodes for use with real-time applications of ComfyUI
Last Updated: 2025-03-04
GitHub Stars: 0.02K

How to Install Nodes for use with real-time applications of ComfyUI

Install this extension via the ComfyUI Manager by searching for "Nodes for use with real-time applications of ComfyUI":
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter "Nodes for use with real-time applications of ComfyUI" in the search bar.
After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.


Yolo Similarity Compare 🕒🅡🅣🅝 Description

Evaluate image similarity using YOLO object detection for detailed comparison in AI applications.

Yolo Similarity Compare 🕒🅡🅣🅝:

The YOLOSimilarityCompare node evaluates how similar two images are by analyzing the objects detected within them. It uses the YOLO (You Only Look Once) object detection framework to identify objects and then compares several of their attributes: class overlap, spatial arrangement, confidence levels, size, and the distances between objects. This makes the node well suited to tasks that require a structural understanding of similarity rather than a pixel-level comparison, such as image retrieval, content-based image comparison, and automated image analysis. Because the overall score is a weighted combination of these components, the comparison can be tuned to emphasize whichever aspects matter most for a given task.
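
The node's exact scoring code is not reproduced here, but the idea of comparing two detection sets component by component can be sketched in plain Python. The snippet below is a simplified illustration under the assumption that the inputs are Ultralytics Results objects exposing a boxes attribute; it is not the node's actual implementation.

```python
# Simplified sketch of per-component similarity between two sets of YOLO
# detections. NOT the node's actual implementation; it only assumes
# Ultralytics `Results` objects with a `.boxes` attribute.

def class_similarity(res1, res2) -> float:
    """Jaccard overlap of the class IDs detected in each image."""
    c1 = set(res1.boxes.cls.int().tolist())
    c2 = set(res2.boxes.cls.int().tolist())
    if not c1 and not c2:
        return 1.0
    return len(c1 & c2) / len(c1 | c2)


def confidence_similarity(res1, res2) -> float:
    """1 minus the absolute difference of the mean detection confidences."""
    m1 = float(res1.boxes.conf.mean()) if len(res1.boxes) else 0.0
    m2 = float(res2.boxes.conf.mean()) if len(res2.boxes) else 0.0
    return 1.0 - abs(m1 - m2)


def size_similarity(res1, res2) -> float:
    """Compare the average normalized bounding-box area in each image."""
    def mean_area(res):
        if not len(res.boxes):
            return 0.0
        wh = res.boxes.xywhn[:, 2:4]  # normalized width and height
        return float((wh[:, 0] * wh[:, 1]).mean())
    a1, a2 = mean_area(res1), mean_area(res2)
    return 1.0 - abs(a1 - a2)
```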

Yolo Similarity Compare 🕒🅡🅣🅝 Input Parameters:

ULTRALYTICS_RESULTS1

This parameter represents the detection results from the first image, obtained using the YOLO object detection framework. It includes information about the detected objects, such as their classes, confidence scores, and bounding box coordinates. This input is crucial as it forms the basis for comparison with the second image.

ULTRALYTICS_RESULTS2

Similar to ULTRALYTICS_RESULTS1, this parameter contains the detection results from the second image. It is used in conjunction with the first image's results to compute the similarity score, allowing for a detailed comparison of the objects detected in both images.
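
In a workflow these inputs normally come from an upstream YOLO detection node in this extension. Purely for illustration, equivalent results can be produced outside ComfyUI with the ultralytics package (the model file and image paths below are placeholders):

```python
# Producing Ultralytics detection results outside ComfyUI, only to show the
# kind of data the ULTRALYTICS_RESULTS inputs carry. Inside a workflow these
# come from an upstream YOLO node instead. Paths and model names are examples.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # any pretrained detection model
results1 = model("image_a.jpg")[0]    # Results for the first image
results2 = model("image_b.jpg")[0]    # Results for the second image

print(results1.boxes.cls)   # detected class IDs
print(results1.boxes.conf)  # per-detection confidence scores
print(results1.boxes.xyxy)  # bounding boxes in pixel coordinates
```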

class_weight

This parameter determines the importance of class similarity in the overall similarity score. It ranges from 0.0 to 1.0, with a default value of 0.3. A higher value places more emphasis on the types of objects detected in both images, making it crucial for tasks where object type matching is significant.

spatial_weight

This parameter controls the weight of spatial similarity, which assesses how similarly objects are positioned in both images. It ranges from 0.0 to 1.0, with a default value of 0.2. Adjusting this weight is important for applications where the relative positioning of objects is a key factor.

confidence_weight

This parameter influences the weight of confidence similarity, which compares the confidence levels of object detections between the two images. It ranges from 0.0 to 1.0, with a default value of 0.2. This is useful for scenarios where the reliability of object detection is a priority.

size_weight

This parameter sets the weight for size similarity, which evaluates the size of detected objects in both images. It ranges from 0.0 to 1.0, with a default value of 0.15. This is particularly relevant for tasks where the scale of objects is an important consideration.

relationship_weight

This parameter determines the weight of relationship similarity, which compares the distances between objects in both images. It ranges from 0.0 to 1.0, with a default value of 0.15. This is essential for applications where the spatial relationships between objects are critical.

threshold

This parameter sets the threshold for deciding whether the similarity score counts as significant. It ranges from 0.0 to 1.0, with a default value of 0.5. A higher threshold means only strongly similar image pairs are reported as above the threshold, which is useful for filtering out weaker matches.
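
The way the five weights and the threshold interact can be pictured as a weighted sum over component scores in the 0.0 to 1.0 range. The function below is a hypothetical illustration using the documented default weights and threshold; it is not the node's verified source.

```python
# Hypothetical illustration of how the weighted components and the threshold
# might combine into the node's outputs (not the node's verified source).
DEFAULT_WEIGHTS = {
    "class": 0.3, "spatial": 0.2, "confidence": 0.2,
    "size": 0.15, "relationship": 0.15,
}

def combine(components: dict, weights: dict = DEFAULT_WEIGHTS,
            threshold: float = 0.5):
    """Weighted sum of per-component similarities, each assumed in [0, 1]."""
    similarity_score = sum(weights[k] * components[k] for k in weights)
    return similarity_score, similarity_score >= threshold

# Example: identical classes and sizes, moderately different layout.
score, above = combine({"class": 1.0, "spatial": 0.6, "confidence": 0.9,
                        "size": 1.0, "relationship": 0.7})
print(round(score, 3), above)  # 0.855 True with the default weights
```

Note that the default weights sum to 1.0, which keeps the combined score in the same 0.0 to 1.0 range as the threshold.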

Yolo Similarity Compare 🕒🅡🅣🅝 Output Parameters:

similarity_score

This output provides a floating-point value representing the overall similarity score between the two images. It is a weighted combination of the different similarity components, offering a comprehensive measure of how alike the images are.

above_threshold

This boolean output indicates whether the computed similarity score meets or exceeds the specified threshold. It helps in quickly determining if the images are considered similar based on the defined criteria.

explanation

This string output offers a detailed explanation of the similarity score, including the individual contributions of each similarity component and the detected classes in both images. It provides valuable insights into the comparison process, making it easier to understand the factors influencing the similarity score.

Yolo Similarity Compare 🕒🅡🅣🅝 Usage Tips:

  • Adjust the class_weight to prioritize object type matching when comparing images with similar content but different object arrangements (example weight presets for these tips are sketched after this list).
  • Use a higher spatial_weight for tasks where the relative positioning of objects is crucial, such as in layout analysis or scene understanding.
  • Set a higher confidence_weight when the reliability of object detection is important, ensuring that only high-confidence detections are emphasized in the similarity score.
  • Experiment with the threshold to filter out less relevant comparisons, especially in applications where only highly similar images are of interest.
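
Example weight presets for the scenarios named in these tips (the parameter names come from the node; the specific values are purely illustrative):

```python
# Illustrative weight presets for the usage tips above. The parameter names
# match the node's inputs, but these specific values are only examples; each
# preset sums to 1.0 like the defaults.
PRESETS = {
    # Prioritize matching object types over arrangement.
    "object_type_matching": dict(class_weight=0.5, spatial_weight=0.15,
                                 confidence_weight=0.1, size_weight=0.1,
                                 relationship_weight=0.15),
    # Emphasize where objects sit in the frame (layout analysis, scene understanding).
    "layout_analysis": dict(class_weight=0.2, spatial_weight=0.4,
                            confidence_weight=0.1, size_weight=0.1,
                            relationship_weight=0.2),
    # Emphasize detection reliability.
    "high_confidence": dict(class_weight=0.25, spatial_weight=0.15,
                            confidence_weight=0.4, size_weight=0.1,
                            relationship_weight=0.1),
}
```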

Yolo Similarity Compare 🕒🅡🅣🅝 Common Errors and Solutions:

"No objects detected in one or both images"

  • Explanation: This error occurs when the YOLO detection framework fails to identify any objects in one or both of the input images.
  • Solution: Ensure that the images are clear and contain detectable objects. You may need to adjust the detection settings or use higher-quality images (see the quick check sketched below).
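
If you produce the detection results yourself, a quick guard like the following can catch empty detections before they reach the node (a sketch that reuses results1 and results2 from the earlier Ultralytics example):

```python
# Pre-check for empty detections (assumes Ultralytics Results objects, e.g.
# results1/results2 from the earlier example).
def has_detections(results) -> bool:
    return results.boxes is not None and len(results.boxes) > 0

for name, res in (("image A", results1), ("image B", results2)):
    if not has_detections(res):
        print(f"No objects detected in {name}; similarity cannot be computed.")
```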

"Invalid input type for ULTRALYTICS_RESULTS"

  • Explanation: This error indicates that the input provided for ULTRALYTICS_RESULTS1 or ULTRALYTICS_RESULTS2 is not in the expected format.
  • Solution: Verify that the inputs are correctly formatted YOLO detection results and that they contain the necessary object detection data.

"Threshold value out of range"

  • Explanation: This error occurs when the threshold parameter is set outside the allowable range of 0.0 to 1.0.
  • Solution: Adjust the threshold value to be within the specified range to ensure proper functionality.

Yolo Similarity Compare 🕒🅡🅣🅝 Related Nodes

Go back to the extension to check out more related nodes.
Nodes for use with real-time applications of ComfyUI