I noticed that in the outer loop, outer_loop_loss.backward() and optim.step() do not update MAMLModel.params, so each episode leaves model.params unchanged.
As a result, the meta step never changes model.params and has no effect.
Is this deliberate? If so, it seems we need to add some code beyond the "TODO" to update the initial params using the gradients of the fast weights. A minimal sketch of what I have in mind is below.
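For illustration only, here is a self-contained PyTorch sketch (not the repo's actual code) of what I mean: the inner loop builds fast weights as differentiable functions of model.params via create_graph=True, so that outer_loop_loss.backward() populates model.params.grad and optim.step() actually updates the initial params. The MAMLModel definition, inner_lr, and the sine-fitting task are placeholders I made up; only the names outer_loop_loss, optim, and model.params come from the existing code.

```python
import torch
import torch.nn.functional as F


class MAMLModel(torch.nn.Module):
    # Placeholder model: params are kept in a ParameterDict so the forward
    # pass can optionally run with "fast weights" instead of self.params.
    def __init__(self, in_dim=1, hidden=40, out_dim=1):
        super().__init__()
        self.params = torch.nn.ParameterDict({
            "w1": torch.nn.Parameter(torch.randn(hidden, in_dim) * 0.1),
            "b1": torch.nn.Parameter(torch.zeros(hidden)),
            "w2": torch.nn.Parameter(torch.randn(out_dim, hidden) * 0.1),
            "b2": torch.nn.Parameter(torch.zeros(out_dim)),
        })

    def forward(self, x, params=None):
        p = self.params if params is None else params
        h = F.relu(F.linear(x, p["w1"], p["b1"]))
        return F.linear(h, p["w2"], p["b2"])


model = MAMLModel()
optim = torch.optim.Adam(model.params.values(), lr=1e-3)
inner_lr = 0.01  # assumed inner-loop step size

for episode in range(1000):
    # Toy task (made up for the sketch): regress a random sine wave.
    amp, phase = torch.rand(1) * 4 + 1, torch.rand(1) * 3.14
    x_support = torch.rand(10, 1) * 10 - 5
    y_support = amp * torch.sin(x_support + phase)
    x_query = torch.rand(10, 1) * 10 - 5
    y_query = amp * torch.sin(x_query + phase)

    # Inner loop: fast weights are built from model.params with create_graph=True,
    # so they remain connected to the initial params in the autograd graph.
    support_loss = F.mse_loss(model(x_support), y_support)
    grads = torch.autograd.grad(support_loss, list(model.params.values()),
                                create_graph=True)
    fast_weights = {
        name: param - inner_lr * grad
        for (name, param), grad in zip(model.params.items(), grads)
    }

    # Outer loop: because the fast weights carry the graph back to model.params,
    # backward() now fills model.params.grad and optim.step() updates them.
    outer_loop_loss = F.mse_loss(model(x_query, params=fast_weights), y_query)
    optim.zero_grad()
    outer_loop_loss.backward()
    optim.step()
```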