Replies: 1 comment 4 replies
-
Hi @GCX-art, SGD is typically used as the optimizer to find optimal weights for both the neural demapper and the constellation. Are you considering replacing it with a non-gradient-based optimizer, or could you clarify what “NN” stands for in this context? When you remove the neural demapper, did you switch to using an APP demapper instead? Also, are you working with an AWGN channel? How does the resulting constellation appear in your setup?
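For reference, symbol-wise APP demapping over an AWGN channel computes the posterior probability of each constellation point given the received sample, p(x_k | y) ∝ exp(−|y − x_k|² / N0). A minimal NumPy sketch of this (my own illustration, not Sionna's implementation; the function name and signature are hypothetical):

```python
import numpy as np

def app_demap(y, constellation, no):
    """Symbol-wise APP demapping over AWGN.

    y:             (n,) complex received samples
    constellation: (M,) complex constellation points
    no:            noise variance N0
    Returns (n, M) posterior probabilities p(x_k | y).
    """
    # Log-likelihoods up to a constant: -|y - x_k|^2 / N0
    logits = -np.abs(y[:, None] - constellation[None, :]) ** 2 / no
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)      # normalize to posteriors
```

Because these posteriors are differentiable in the constellation points, a conventional APP demapper still allows gradient-based training of the constellation alone.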
-
I want to train the constellation using an NN instead of SGD.
In the "End-to-end Learning with Autoencoders" example, Sionna uses SGD to train the constellation and the neural demapper. I removed the neural demapper and trained only the constellation, but the result was not good.
So I would like to train the constellation with an NN instead of SGD. I tried training a DNN, but it did not work. This is really difficult for me; I would appreciate your help. Thank you.
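As a point of comparison, gradient descent can optimize the constellation points directly, without any neural network at all. Below is a toy, self-contained NumPy sketch (my own illustration, not Sionna's autoencoder setup, and the function and parameter names are hypothetical): it learns point positions by plain gradient descent on a repulsive pairwise loss, renormalizing to unit average power after each step.

```python
import numpy as np

def optimize_constellation(num_points=8, steps=500, lr=0.01, seed=0):
    """Toy gradient-based constellation optimization (illustrative only).

    Minimizes sum_{i != j} 1 / |p_i - p_j|^2 (a repulsive loss that pushes
    points apart), subject to unit average power via renormalization.
    Returns a (num_points, 2) array of (I, Q) coordinates.
    """
    rng = np.random.default_rng(seed)
    pts = rng.normal(size=(num_points, 2))
    pts /= np.sqrt(np.mean(np.sum(pts ** 2, axis=1)))  # unit average power
    for _ in range(steps):
        diff = pts[:, None, :] - pts[None, :, :]             # pairwise p_i - p_j
        d2 = np.sum(diff ** 2, axis=-1) + np.eye(num_points)  # avoid /0 on diagonal
        w = (1.0 - np.eye(num_points)) / d2 ** 2             # mask out i == j
        grad = -4.0 * np.sum(w[:, :, None] * diff, axis=1)   # d(loss)/d(p_i)
        pts -= lr * grad                                     # gradient descent step
        pts /= np.sqrt(np.mean(np.sum(pts ** 2, axis=1)))    # re-impose power constraint
    return pts
```

Note that in the Sionna example the training signal comes from the demapper output (a bit-wise loss), so if you remove the neural demapper you still need some differentiable objective for the constellation, e.g. a conventional APP demapper or a surrogate loss like the one above.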