For each regularization loss, before training starts, calculate the ratio of this loss term with respect to the task loss (cross-entropy for classification).
This issue is to add a feature to DomainLab such that, for each regularization loss term, we have a reference ratio of that loss term over the task loss; the user input gamma_reg is then interpreted as a multiplication factor on top of this ratio, i.e. `list_multiplier[j]*ratio[j]`, where j indexes the loss term in question and `ratio[j]` is something we have to calculate before training starts.
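A tiny numeric sketch of the proposed interpretation (the numbers are illustrative only, not measured values):

```python
# Illustrative numbers only: suppose that before training the regularization
# loss is 4.0 and the task loss (cross-entropy) is 2.0, so the reference
# ratio reg/task is 2.0. The user's gamma_reg then acts only as a factor on
# top of this reference ratio, not as a raw weight on the loss term.
task_loss = 2.0
reg_loss = 4.0
ratio = reg_loss / task_loss          # reference ratio, measured before training
gamma_reg = 0.5                       # user input
effective_weight = gamma_reg * ratio  # list_multiplier[j] * ratio[j]
```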
e.g., the regularization loss for DANN is computed at
DomainLab/domainlab/models/model_dann.py
Line 108 in 1df808a
However, we do not need to change this line; the relevant place is rather:
DomainLab/domainlab/models/a_model.py
Line 52 in 1df808a
The ratio is something we have to calculate before training starts, here:
DomainLab/domainlab/algos/trainers/train_basic.py
Line 22 in 1df808a
We probably need to define something like `self.list_reg_loss_over_task_loss_ratio` in
DomainLab/domainlab/models/a_model.py
Line 17 in 1df808a
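A minimal sketch of how these ratios could be estimated on a warm-up batch before training starts. Method names such as `cal_task_loss` and `cal_reg_loss` (and its return shape) are assumptions here, not the actual DomainLab API:

```python
# Hypothetical sketch: estimate, before training starts, the ratio of each
# regularization loss to the task loss on a single warm-up batch.
# cal_task_loss / cal_reg_loss and the (list_reg_loss, list_mul) return
# shape are assumptions, not the verified DomainLab interface.

def estimate_reg_over_task_ratios(model, tensor_x, tensor_y, tensor_d):
    task_loss = model.cal_task_loss(tensor_x, tensor_y)  # e.g. cross-entropy
    list_reg_loss, _ = model.cal_reg_loss(tensor_x, tensor_y, tensor_d)
    eps = 1e-12  # guard against a (near-)zero task loss
    return [float(reg) / (float(task_loss) + eps) for reg in list_reg_loss]
```

The result could then be stored as `self.list_reg_loss_over_task_loss_ratio` and left fixed for the rest of training.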
then before
DomainLab/domainlab/models/a_model.py
Line 52 in 1df808a
we do `new_list_multiplier = [mtuple[0] * mtuple[1] for mtuple in zip(self.list_reg_loss_over_task_loss_ratio, list_multiplier)]`
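Putting the pieces together, the combination step could be sketched as follows (a hypothetical standalone function, not the actual `a_model.py` code):

```python
# Sketch: fold the precomputed reference ratios into the user-supplied
# multipliers, then combine the task loss with the reweighted regularization
# losses. The ratios are assumed to have been computed once before training.

def combine_losses(task_loss, list_reg_loss, list_multiplier, list_ratio):
    # new_list_multiplier[j] = list_multiplier[j] * ratio[j]
    new_list_multiplier = [
        mult * ratio for mult, ratio in zip(list_multiplier, list_ratio)
    ]
    return task_loss + sum(
        mult * reg for mult, reg in zip(new_list_multiplier, list_reg_loss)
    )
```

With this scheme, gamma_reg = 1 keeps every regularization term at its reference scale relative to the task loss, and other values scale it relative to that reference.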