This repository has been archived by the owner on Oct 31, 2023. It is now read-only.
@louismartin
Hello! Thanks for sharing this work.
I am working on a new transformer architecture and I would like to try PKM. While reading the instructions and the repo, I ran into a simple question about `mem_enc_positions` and `mem_dec_positions`.
The README explains:
"To add a memory in (for instance) the layers 4 and 7 of an encoder, you can simply provide --use_memory true --mem_enc_positions 4,7"
According to that, 4 and 7 are layer ids. However, the parsing code suggests that each recovered mem_position should be a tuple `(int, str) = (layer_id, pos)`, not a single value.
So I was wondering: what is the right syntax for the `mem_enc_positions` and `mem_dec_positions` parameters?
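For illustration, here is a minimal sketch of the two readings described above: either each item is a bare layer id, or it also carries a position string. The `_` separator, the position names `in`/`after`, and the function name are assumptions made for this sketch, not the repository's actual syntax or parser.

```python
# Hypothetical parser illustrating the ambiguity in --mem_enc_positions.
# Reading 1: each comma-separated item is just a layer id, e.g. "4,7".
# Reading 2: each item also carries a position string, e.g. "4_in,7_after"
# (the "_" separator and the names "in"/"after" are assumptions).

def parse_mem_positions(arg, default_pos="after"):
    """Parse a comma-separated positions string into (layer_id, pos) tuples."""
    positions = []
    for item in arg.split(","):
        if "_" in item:
            layer_id, pos = item.split("_", 1)
        else:
            # Bare layer id: fall back to a default position string,
            # so the result is always a (int, str) tuple.
            layer_id, pos = item, default_pos
        positions.append((int(layer_id), pos))
    return positions

print(parse_mem_positions("4,7"))     # [(4, 'after'), (7, 'after')]
print(parse_mem_positions("4_in,7"))  # [(4, 'in'), (7, 'after')]
```

Under reading 1, the README syntax `4,7` already yields the `(layer_id, pos)` tuples the code expects, with an implicit default position; under reading 2, the position string would have to be passed explicitly.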