In the paper, "While the parameters of the classifiers are optimized in order to minimize their error on the training set, the parameters of the underlying deep feature mapping are optimized in order to minimize the loss of the label classifier and to maximize the loss of the domain classifier. "
Why are both loss functions minimized? Shouldn't the loss function of the domain classifier be maximized?
The parameters of the classifier components (both the label classifier and the domain classifier) are changed to minimize their respective classification losses, as you would expect. The key is that the feature extraction network, which provides the input to the two classifiers, is optimized differently. Its parameters are changed to minimize the label classifier loss (so that the model does well on the primary task) and to maximize the domain classifier loss (so that the differences between the two domains are ignored, i.e. the features become domain-invariant).
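This opposing optimization is typically implemented with a gradient reversal layer: an identity function in the forward pass that flips (and scales) the gradient in the backward pass, so the feature extractor receives the *negated* domain-loss gradient. Here is a minimal dependency-free sketch of that idea (the class name `GradReverse`, the scaling factor `lam`, and the toy losses are illustrative, not taken from the paper's code):

```python
class GradReverse:
    """Gradient reversal layer: identity on the forward pass,
    multiplies the incoming gradient by -lam on the backward pass.
    Placed between the feature extractor and the domain classifier,
    it makes gradient descent on the domain loss act as gradient
    *ascent* from the feature extractor's point of view."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # reversed, scaled gradient


# Toy scalar illustration of the combined update on a feature
# parameter w: label loss (w - 1)^2, domain loss (w - 3)^2.
def feature_grad(w, lam=0.5):
    grad_label = 2.0 * (w - 1.0)    # minimized: pushes w toward 1
    grad_domain = 2.0 * (w - 3.0)   # reversed below: pushed AWAY from 3
    # Standard descent on the label loss, reversed sign on the domain
    # loss, exactly the update direction described in the quote above.
    return grad_label - lam * grad_domain
```

With this layer in place, a single ordinary backpropagation pass trains both classifiers to minimize their losses while the feature extractor simultaneously minimizes the label loss and maximizes the domain loss.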