
Struggling to understand how matching takes place. (NaiveThresholdMatching) #104

Closed
aymuos15 opened this issue May 2, 2024 · 4 comments


aymuos15 commented May 2, 2024

[Screenshot: the example prediction and reference masks]

In this example (using UnmatchedInstancePair), I get the following result:

Panoptic: Start Evaluation
-- Got UnmatchedInstancePair, will match instances
-- Got MatchedInstancePair, will evaluate instances

+++ MATCHING +++
Number of instances in reference (num_ref_instances): 1
Number of instances in prediction (num_pred_instances): 1
True Positives (tp): 1
False Positives (fp): 0
False Negatives (fn): 0
Recognition Quality / F1-Score (rq): 1.0

+++ GLOBAL +++
Global Binary Dice (global_bin_dsc): 0.9325714285714286
Global Binary Centerline Dice (global_bin_cldsc): 0.4
Global Binary Average Symmetric Surface Distance (global_bin_assd): 0.1717868285884634

+++ INSTANCE +++
Segmentation Quality IoU (sq): 0.8736616702355461 +- 0.0
Panoptic Quality IoU (pq): 0.8736616702355461
Segmentation Quality Dsc (sq_dsc): 0.9325714285714286 +- 0.0
Panoptic Quality Dsc (pq_dsc): 0.9325714285714286
Segmentation Quality Assd (sq_assd): 0.1717868285884634 +- 0.0

Why is it showing only 1 instance in this case? In both pred and ref.

To reproduce: https://colab.research.google.com/drive/1oxmhQSQ8v4HKUID9ugee9vIrnguIRmqE?usp=sharing

import numpy as np

from panoptica import NaiveThresholdMatching
from panoptica import Panoptic_Evaluator, UnmatchedInstancePair

# gt and pred are the reference and prediction arrays loaded earlier in the notebook
gt = gt.astype(np.uint32)
pred = pred.astype(np.uint32)

sample = UnmatchedInstancePair(pred, gt)
evaluator = Panoptic_Evaluator(
    expected_input=UnmatchedInstancePair,
    # instance_matcher=MaximizeMergeMatching(),
    instance_matcher=NaiveThresholdMatching(),
)
result, _ = evaluator.evaluate(sample)
print(result)
Hendrik-code self-assigned this and added the question label on May 15, 2024
@Hendrik-code (Collaborator) commented

Hey @aymuos15,
that is a good point, we may need to clarify this in our explanations.

As soon as you tell panoptica your input is an instance mask, it treats all equal labels as the same instance. In your case, all three boxes in both the gt and the prediction have been assigned the label 1, and in an instance mask, if every box carries the same label 1, they count as one and the same instance.
Hence the evaluation only sees one instance in both prediction and gt, matches them, and yields the results you posted.

If that is not what you intended, you need to relabel your instances so that each one gets a distinct label (see the sketch below). Importantly, if those are unmatched instances, it doesn't even matter which labels they get in prediction or reference (as long as they are different for each instance), because the instance matching module will take care of that.
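For illustration only (not from the thread), here is a minimal sketch of that relabelling using scipy.ndimage.label to give each connected box its own integer id before building the UnmatchedInstancePair; the mask shapes and box positions are made up:

import numpy as np
from scipy import ndimage

from panoptica import NaiveThresholdMatching, Panoptic_Evaluator, UnmatchedInstancePair

# Hypothetical binary masks with three separate boxes, all carrying the label 1
gt_binary = np.zeros((64, 64), dtype=np.uint8)
pred_binary = np.zeros((64, 64), dtype=np.uint8)
gt_binary[5:15, 5:15] = gt_binary[25:35, 25:35] = gt_binary[45:55, 45:55] = 1
pred_binary[6:16, 6:16] = pred_binary[26:36, 26:36] = pred_binary[44:54, 44:54] = 1

# ndimage.label relabels each connected component as 1, 2, 3, ...
gt_instances, _ = ndimage.label(gt_binary)
pred_instances, _ = ndimage.label(pred_binary)

sample = UnmatchedInstancePair(pred_instances.astype(np.uint32), gt_instances.astype(np.uint32))
evaluator = Panoptic_Evaluator(
    expected_input=UnmatchedInstancePair,
    instance_matcher=NaiveThresholdMatching(),
)
result, _ = evaluator.evaluate(sample)
print(result)  # now reports 3 instances in both reference and prediction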

I hope this helped.
Hendrik

@aymuos15 (Author) commented

Thank you very much for the in-depth explanation.

So ideally running connected components (separately on the images and labels) would work?

@Hendrik-code (Collaborator) commented

Yeah, if you want them separated, it sounds to me like your inputs (both prediction and reference mask) are actually semantic masks, not instance masks. So if you pass them to panoptica as a SemanticPair, it will automatically run performance-optimized connected components on your input to derive the individual instances (see the sketch below).
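As a rough sketch of that workflow (not from the thread), assuming the SemanticPair pipeline and the ConnectedComponentsInstanceApproximator as documented for panoptica around this version; the masks here are made up:

import numpy as np

from panoptica import (
    ConnectedComponentsInstanceApproximator,
    NaiveThresholdMatching,
    Panoptic_Evaluator,
    SemanticPair,
)

# Hypothetical binary (semantic) masks; panoptica derives the instances itself
gt = np.zeros((64, 64), dtype=np.uint32)
pred = np.zeros((64, 64), dtype=np.uint32)
gt[5:15, 5:15] = gt[25:35, 25:35] = 1
pred[6:16, 6:16] = pred[26:36, 26:36] = 1

sample = SemanticPair(pred, gt)
evaluator = Panoptic_Evaluator(
    expected_input=SemanticPair,
    instance_approximator=ConnectedComponentsInstanceApproximator(),
    instance_matcher=NaiveThresholdMatching(),
)
result, _ = evaluator.evaluate(sample)
print(result)  # num_ref_instances / num_pred_instances now count each box separately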

@aymuos15 (Author) commented

Alright! Thank you very much!
