
Help in Understanding output. #111

Closed
aymuos15 opened this issue Jul 5, 2024 · 2 comments
Labels
question Further information is requested

Comments

aymuos15 commented Jul 5, 2024

[Screenshot from 2024-07-05 15-48-25]

In the image attached, could you please answer the following questions?

  1. Why is there no difference (in the results) between the two types of matching? (Also, why is the maximize matcher not in the init file of the main branch?)
  2. Why do the connected components sent into the unmatched module return null scores?

Link to full notebook: https://github.com/aymuos15/panoptica/blob/main/panoptica_issue.ipynb

Thank you very much

I am aware of your response in #104, but I am still finding it hard to understand the above.

Hendrik-code (Collaborator)

Let me try to break it down:

  • As soon as you input your data as an (Unmatched/Matched)-InstancePair, panoptica treats each label as an instance, not each connected component (CC). Hence "naive" has 1 reference instance and "naive_cc" has 2 (because you ran CC beforehand, which is exactly what panoptica would do for you if you input your data as a SemanticPair, by the way).
  1. Why there is no difference between the matchers:
    panoptica allows only many-to-one and one-to-one matching (prediction to reference). So no two or more reference instances can be paired with the same prediction instance (this would introduce a whole world of troubles, trust me).
    As you only have one prediction instance, you can only ever match 0 or 1 instances. The Naive Matcher matches your prediction to the best reference instance, which is what happens here. The Maximize Matcher would merge multiple prediction instances if and only if the combined matching score exceeds that of each single instance. Again, you have only one prediction, so the Maximize Matcher cannot merge anything. If you were to swap your prediction and reference, the Maximize Matcher might merge both (in this swapped case) prediction instances if the combined IoU were greater than that of either single instance.

  2. Can you calculate the IoU between your individual instances as a test? (You can use the panoptica implementation at panoptica.metrics.iou.)
    My theory is the following: the matchers have matching thresholds (default = 0.5 IoU). Without CC beforehand, the panoptica matcher only looks at one reference instance, remember? So it effectively merges the two blocks you have there, and the IoU probably exceeds the threshold. With CC, it compares against each box individually, and the threshold is probably not exceeded. For the Maximize Matcher, at least one instance alone must exceed the threshold for a successful match (a behavior that, now that I write this, should be made clearer, or rather should be determined by the user).
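The threshold theory in point 2 can be checked with plain NumPy on a hypothetical toy volume (a sketch only; the data shapes and values are made up, and IoU is computed directly rather than via panoptica.metrics.iou so the example stands alone):

```python
import numpy as np
from scipy import ndimage

def iou(a, b):
    """Intersection over union of two boolean masks."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# Hypothetical data: one reference label made of two separate 4x4 blocks,
# and one wide prediction blob spanning both blocks and the gap between them.
ref = np.zeros((10, 20), dtype=bool)
ref[3:7, 2:6] = True    # block A (16 voxels)
ref[3:7, 12:16] = True  # block B (16 voxels)
pred = np.zeros((10, 20), dtype=bool)
pred[3:7, 2:16] = True  # one blob (56 voxels)

# Without CC: the whole label is one instance -> IoU = 32/56 ~= 0.57,
# which clears the default 0.5 matching threshold.
print(iou(ref, pred))

# With CC: each block is its own instance -> IoU = 16/56 ~= 0.29 per block,
# which fails the threshold, so nothing matches and scores come back empty.
cc, n = ndimage.label(ref)
for i in range(1, n + 1):
    print(i, iou(cc == i, pred))
```

The numbers are arbitrary, but they reproduce the pattern described above: merging components inflates the IoU past the threshold, while per-component comparison can drop each score below it.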

TL;DR: Given your initial data sample, you want to run panoptica with a SemanticPair (which runs CC for you) and the default matcher. If nothing is matched, that is because no possible prediction/reference pair meets the matching threshold. If you want to force matches, set the matching threshold to zero.
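The "set the threshold to zero" remedy can be illustrated with a minimal toy matcher (a hypothetical sketch, not panoptica's actual matcher; the name naive_threshold_match and the instance dictionaries are made up for illustration):

```python
import numpy as np

def iou(a, b):
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def naive_threshold_match(preds, refs, threshold=0.5):
    """Toy one-to-one matcher: pair each prediction instance with its
    best-overlapping, still-unmatched reference instance, but only accept
    the pair if the IoU meets the threshold."""
    matches, taken = {}, set()
    for p_id, p in preds.items():
        scored = [(iou(p, r), r_id) for r_id, r in refs.items() if r_id not in taken]
        if not scored:
            continue
        score, r_id = max(scored)
        if score >= threshold:
            matches[p_id] = r_id
            taken.add(r_id)
    return matches

# Hypothetical instances with a weak overlap (IoU = 2/6 ~= 0.33)
ref = {1: np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)}
pred = {1: np.array([0, 0, 1, 1, 1, 1, 0, 0], dtype=bool)}

print(naive_threshold_match(pred, ref))                 # default 0.5: no match
print(naive_threshold_match(pred, ref, threshold=0.0))  # forced: {1: 1}
```

With the default threshold the 0.33 IoU pair is rejected, so the match dict comes back empty; dropping the threshold to zero forces the pairing, mirroring the advice above.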

We actually already plan to extend this to make it easier to use. For example, we want to introduce label sets where the user can specify whether a class is always one instance (like an organ) or can have multiple instances (in which case the matcher is run on it). Does this make sense? Do you agree this would be valuable?

Cheers,
Hendrik

@Hendrik-code Hendrik-code self-assigned this Jul 5, 2024
@Hendrik-code Hendrik-code added the question Further information is requested label Jul 5, 2024
aymuos15 (Author) commented Jul 7, 2024

Thank you so much for the detailed explanation. This clears up all my doubts (and things make a lot more sense now, haha).

I think the label set is very much needed and would be a great addition. I am constantly running into this issue given my target objective.

Much appreciated,
Soumya

@aymuos15 aymuos15 closed this as completed Jul 7, 2024