In your paper "Acoustic echo cancellation with the dual-signal transformation LSTM network", it is mentioned that the size of the learned feature representation is also 512. Does this mean the encoder_size is 512? In your DNS-Challenge paper, however, the encoder_size is 256. I would like to know why you changed the encoder size.
Does increasing encoder_size from 256 to 512 influence the model size, number of parameters, objective and subjective metrics, and execution time?
Thanks a lot!
I just wanted to squeeze a tiny bit more performance out of the model for the challenge; that is the reason for the larger encoder size.
In general, a larger encoder size will slightly increase performance, but the gain is often not significant. It will definitely increase the model size and the required computation, but it will not affect the execution time by a significant amount.
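To get a feel for how encoder_size drives the parameter count, here is a rough back-of-the-envelope sketch. It assumes a simplified stage shaped like DTLN's second stage: a 1x1 Conv1D encoder over frames of `frame_len` samples feeding an LSTM with `units` hidden units. The values `frame_len=512` and `units=128` are assumptions for illustration, not taken from the repository code.

```python
# Hedged sketch: approximate parameter counts for an assumed
# Conv1D-encoder + LSTM stage, to show how encoder_size scales the model.

def conv1d_params(frame_len, encoder_size):
    # 1x1 Conv1D: frame_len * encoder_size weights plus one bias per filter
    return frame_len * encoder_size + encoder_size

def lstm_params(input_size, units):
    # 4 gates, each with input weights, recurrent weights, and a bias
    return 4 * (units * (input_size + units) + units)

frame_len, units = 512, 128  # assumed hyperparameters
for encoder_size in (256, 512):
    total = conv1d_params(frame_len, encoder_size) + lstm_params(encoder_size, units)
    print(f"encoder_size={encoder_size}: ~{total:,} params in this stage")
```

Under these assumptions, doubling the encoder size roughly doubles the encoder weights and grows the LSTM input weights proportionally, which matches the comment above: a clearly larger model, but not necessarily better metrics.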