I'm dealing with a heavily imbalanced dataset of 3 classes, where the unchanged class accounts for 90% of the training images and up to 99% of the total pixels. For this reason, I chose focal loss.
However, in your implementation the alpha parameter for each class is computed in a way that makes it larger than 1. This goes against the original paper, where the alpha parameters are less than 1.
Could you please explain the reason behind this choice?
Also, the dataloader used for get_alpha is the training set, which applies transforms on each run, leading to different alpha values from run to run. This seems like a problem.
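For what it's worth, a deterministic variant could count pixels once over the raw (untransformed) label maps and set each alpha below 1 as in the original paper. This is only a sketch of what I have in mind; `get_alpha` matches the repo's function name, but the `1 - frequency` normalization here is my own assumption, not your implementation:

```python
import torch

def get_alpha(label_tensors, num_classes=3):
    """Compute per-class alpha weights from raw (untransformed) label maps.

    Counts pixels per class once over the whole training set, then sets
    alpha_c = 1 - freq_c, so every weight stays in (0, 1) and rare
    classes get weights close to 1. Because no random transforms are
    applied, the result is identical on every run.
    """
    counts = torch.zeros(num_classes)
    for labels in label_tensors:
        counts += torch.bincount(labels.flatten(), minlength=num_classes).float()
    freq = counts / counts.sum()
    return 1.0 - freq

# Toy example: class 0 ("unchange") dominates the pixels.
labels = [torch.tensor([[0, 0, 0, 0], [0, 0, 1, 2]])]
print(get_alpha(labels))  # → tensor([0.2500, 0.8750, 0.8750])
```

With alphas bounded by 1, the loss scale stays comparable to plain cross-entropy, which is why I'd expect the paper's convention to matter in practice.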
The datasets in your paper are also heavily imbalanced. Can you explain why you chose CrossEntropy instead? Did this cause difficulties when training the model?