The model based on the 'DTLN' project and your paper on 'DTLN-aec' can't be trained to perform as well as your pretrained one. #27
@steven8274 Hello! May I ask how to understand the sentence in the paper, "All signals used as input to the model are subject to a random gain chosen from a uniform distribution ranging from -25 to 0 dB relative to the clipping point"? Specifically, what does "relative to the clipping point" mean, and where does "the clipping point" come from? Thank you!
In my opinion, the clipping point refers to the full-scale level of the signal format, and the level of the gain-adjusted signal is measured relative to it.
@steven8274 Is the "reference_cut_point" computed like this: if the mic signal (echo signal + near-end signal) has value m, is the level 20*log10(m/32767)? (16-bit samples at a 16 kHz sampling rate.)
Yes, that's just my understanding. Sorry for my late reply...
@steven8274 Thank you for your reply.
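To make the interpretation discussed above concrete, here is a minimal sketch of applying a random gain "relative to the clipping point". It assumes float audio normalized to [-1, 1], so the clipping point is 1.0 full scale, and it scales the signal so that its peak lands at a level drawn uniformly from -25 to 0 dBFS. This is one plausible reading of the paper, not a confirmed reproduction of the authors' code; the function name and parameters are illustrative.

```python
import numpy as np

def apply_random_gain(signal, low_db=-25.0, high_db=0.0, rng=None):
    """Scale `signal` so its peak sits at a random level (in dB)
    relative to the clipping point (1.0 for float audio in [-1, 1]).

    Assumption: "relative to the clipping point" means the peak level
    after gain is between `low_db` and `high_db` dBFS.
    """
    rng = np.random.default_rng() if rng is None else rng
    gain_db = rng.uniform(low_db, high_db)
    peak = np.max(np.abs(signal))
    if peak == 0.0:
        return signal  # silence stays silent
    # Convert the target dBFS level to a linear peak amplitude.
    target_peak = 10.0 ** (gain_db / 20.0)
    return signal * (target_peak / peak)

# Usage: the rescaled peak always stays within [-25, 0] dBFS.
x = 0.3 * np.sin(np.linspace(0.0, 10.0, 1000))
y = apply_random_gain(x, rng=np.random.default_rng(0))
```

For 16-bit integer audio, the clipping point would instead be 32767, matching the 20*log10(m/32767) formula above.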
Hi Nils, after carefully reading the paper and the code in the 'DTLN' project, I modified the DTLN model into the DTLN-aec architecture. I checked the model structure again and again to make sure it is consistent with your paper. Then I composed the training data as you describe in the paper, except for the 'random spectral shaping', which I'm not sure how to implement. The dataset includes far-end speech and echo from the 'synthetic' and 'real' datasets, plus echo data I synthesized myself using 'DNS-Challenge 3' speech data. But the model I trained does not perform as well as the pretrained one. Could you help me verify the model structure or the training-data composition process? Thanks in advance!
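On the 'random spectral shaping' step mentioned above: one common approach in speech-enhancement data augmentation (this is an assumption about what the paper means, not a confirmed detail of the authors' pipeline) is to filter each signal with a random second-order IIR filter whose coefficients are drawn from a small uniform range, which imposes a mild random spectral tilt while keeping the filter stable. A minimal sketch:

```python
import numpy as np
from scipy.signal import lfilter

def random_spectral_shaping(signal, coef_range=3.0 / 8.0, rng=None):
    """Apply a random second-order IIR filter to `signal`.

    Assumption: coefficients drawn uniformly from [-3/8, 3/8], a range
    used in RNNoise-style augmentation; with |a1|, |a2| <= 3/8 the
    filter poles stay inside the unit circle, so the filter is stable.
    """
    rng = np.random.default_rng() if rng is None else rng
    r = rng.uniform(-coef_range, coef_range, size=4)
    b = [1.0, r[0], r[1]]  # random numerator (zeros)
    a = [1.0, r[2], r[3]]  # random denominator (poles), stable by range
    return lfilter(b, a, signal)

# Usage: shape is preserved; the spectrum is randomly tilted per call.
x = np.random.default_rng(1).standard_normal(1000)
y = random_spectral_shaping(x, rng=np.random.default_rng(2))
```

Drawing a fresh filter per training clip is the usual design choice, so the model sees many slightly different spectral colorations of the same material.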