
The question about loss function #29

Open
xiutangzju opened this issue Jan 10, 2019 · 1 comment

Comments

@xiutangzju

From the paper: "While the parameters of the classifiers are optimized in order to minimize their error on the training set, the parameters of the underlying deep feature mapping are optimized in order to minimize the loss of the label classifier and to maximize the loss of the domain classifier."
Why are both loss functions minimized? Shouldn't the loss function of the domain classifier be maximized?

@pumpikano
Owner

The parameters of the classifier components (both the label classifier and the domain classifier) are changed to minimize their respective classification losses, as you would expect. The key is that the feature extraction network, which provides the input to the two classifiers, is optimized in a different way. Its parameters are changed to minimize the label classifier loss (so that the network does well at the primary task) and to maximize the domain classifier loss (so that the differences between the two domains are ignored).

I think the diagram here is very helpful for visualizing this dynamic: http://sites.skoltech.ru/compvision/projects/grl/
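To make the opposing updates concrete, here is a minimal hand-computed sketch (plain NumPy, not this repo's TensorFlow code) with scalar parameters: the domain classifier descends on its own loss, while the gradient reversal layer (GRL) negates the gradient reaching the feature extractor, so the feature parameter effectively ascends on the domain loss. The variable names and the squared loss are illustrative assumptions, not taken from the repo.

```python
import numpy as np

# Toy one-dimensional example of the gradient reversal layer (GRL).
# f: feature extractor weight, d: domain classifier weight.
lam = 1.0           # GRL strength (lambda)
f, d = 0.5, 0.3     # parameters
x, y_dom = 1.0, 1.0  # input and domain label

# Forward pass: feature -> domain classifier -> squared loss
feat = f * x
pred = d * feat
loss = 0.5 * (pred - y_dom) ** 2

# Backward pass with the GRL sitting between feat and the domain classifier:
g_pred = pred - y_dom
g_d = g_pred * feat        # domain classifier: ordinary gradient (minimizes loss)
g_feat = g_pred * d        # gradient arriving at the GRL from above
g_f = -lam * g_feat * x    # GRL flips its sign for the feature extractor

# Both parameters take a plain gradient-descent step...
lr = 0.1
d_new = d - lr * g_d       # ...which DECREASES the domain loss for d
f_new = f - lr * g_f       # ...but INCREASES it for f (ascent via the flipped sign)

loss_after_d = 0.5 * (d_new * feat - y_dom) ** 2
loss_after_f = 0.5 * (d * (f_new * x) - y_dom) ** 2
```

So a single optimizer minimizing everything still produces the adversarial dynamic: the sign flip inside the GRL is what turns "minimize" into "maximize" for the feature extractor.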
