Hi!
I have been working on a personal experiment in which I want to optimize trainable parameters (such as material characteristics and antenna power) in order to characterize realistic coverage propagation for a very specific area of a city (a small urban canyon).
For this purpose, I got a dataset of RSRP measurements and developed a training step based on the NMSE, which works as follows:
1. Compute the coverage map (CM) (5e7 ray samples, 2 runs).
2. Exclude CM cells without coverage.
3. Find CM cells with a sample (RSRP measurement) within a certain radius (since the CM cells are spatially static, each step finds the same RSRP sample).
4. Apply the NMSE (per CM cell, then accumulate), where:
   - x̄ᵢ is the predicted value (the simulated RSS),
   - xᵢ is the RSRP sample,
   - n is the number of CM cells that participated in the step.
5. Apply the gradients.
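For concreteness, the training step above can be sketched roughly as below. This is a toy sketch only: the forward model, variable names, and numbers are all placeholders I made up; in the real setup the forward model is the differentiable coverage-map computation, and I assume a TensorFlow-based setup.

```python
# Toy sketch of the NMSE training step (hypothetical names and numbers;
# the real forward model is the differentiable coverage-map simulation).
import tensorflow as tf

# Measured RSRP samples matched to CM cells (dBm), and a coverage mask.
rsrp_dbm = tf.constant([-80.0, -95.0, -70.0, -88.0])
covered = tf.constant([True, True, False, True])  # cell 2 has no coverage

# Trainable stand-ins for material and antenna parameters.
log_conductivity = tf.Variable(0.0)
tx_power_dbm = tf.Variable(30.0)

def predict_rss_dbm():
    # Placeholder forward model: in the real setup this is the simulated
    # coverage map (RSS per cell), differentiable w.r.t. the parameters.
    path_loss = tf.constant([110.0, 120.0, 150.0, 115.0]) + 5.0 * log_conductivity
    return tx_power_dbm - path_loss

def nmse_loss():
    # NMSE accumulated over the n participating (covered, sampled) CM cells.
    pred = tf.boolean_mask(predict_rss_dbm(), covered)
    meas = tf.boolean_mask(rsrp_dbm, covered)
    return tf.reduce_sum((pred - meas) ** 2) / tf.reduce_sum(meas ** 2)

opt = tf.keras.optimizers.Adam(learning_rate=0.05)
for step in range(300):
    with tf.GradientTape() as tape:
        loss = nmse_loss()
    grads = tape.gradient(loss, [log_conductivity, tx_power_dbm])
    opt.apply_gradients(zip(grads, [log_conductivity, tx_power_dbm]))
```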
Also, it is important to point out that:
- Significant outliers have been removed from the dataset.
- The spatial accuracy and spatial distribution of the samples have been verified with a GIS tool.
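As an illustration of the outlier filtering (the actual criterion I used may differ; this is just a standard 1.5·IQR rule on the RSRP values, with made-up numbers):

```python
# Hypothetical illustration of outlier removal using a 1.5*IQR rule.
import numpy as np

def filter_outliers(rsrp_dbm, k=1.5):
    """Keep samples within [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(rsrp_dbm, [25, 75])
    iqr = q3 - q1
    mask = (rsrp_dbm >= q1 - k * iqr) & (rsrp_dbm <= q3 + k * iqr)
    return rsrp_dbm[mask]

samples = np.array([-85.0, -90.0, -88.0, -87.0, -140.0, -86.0, -89.0])
clean = filter_outliers(samples)  # the -140 dBm sample is dropped
```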
Then I tried two approaches:
- All roofs, all walls, and the ground in the scene share the same material within each group, so they are trained as a whole.
- Each scene element has its own material, so they are trained individually.
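The two parameterizations amount to the following (a sketch with hypothetical object names; in Sionna this corresponds to how RadioMaterial instances are assigned to scene objects):

```python
# Sketch of the two material parameterizations (hypothetical names).
import tensorflow as tf

scene_objects = ["roof_1", "roof_2", "wall_1", "wall_2", "ground"]
groups = {"roof_1": "roofs", "roof_2": "roofs",
          "wall_1": "walls", "wall_2": "walls", "ground": "ground"}

# Approach 1: one trainable parameter per group (roofs / walls / ground),
# shared by every object in the group.
shared = {g: tf.Variable(0.01, name=g) for g in set(groups.values())}
params_shared = {obj: shared[groups[obj]] for obj in scene_objects}

# Approach 2: one trainable parameter per scene element.
params_individual = {obj: tf.Variable(0.01, name=obj) for obj in scene_objects}
```

In approach 1 the two roofs point at the same variable, so a gradient step on either moves both; in approach 2 every element is free to drift independently.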
Both approaches start from the material parameters recommended by the ITU and other inputs from the host network.
Now, the problem is that in both approaches the training stops improving the NMSE at a certain step; all subsequent NMSE values oscillate around that value, and I get results similar to this:
I interpret this behavior as my model reaching a critical point where it starts to "fight" itself, with each CM cell pulling the parameters toward its own target value (the RSRP sample), but I am not certain of it. I have tried to give the training more freedom (adding more trainable inputs), and even though there is some improvement, the training never seems to reach the target.
Also, when I reduce the scope of the experiment to a single CM cell (of any size), the training performs perfectly, but as I increase the number of CM cells, the performance decays.
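This single-cell vs. many-cell behavior is consistent with an underdetermined fit: a parameter set that can drive one cell's error to zero generally cannot zero out many cells at once, so the loss bottoms out at a nonzero compromise. A minimal numeric illustration (toy numbers, not from my experiment), where a single global offset stands in for a shared trainable parameter:

```python
# Toy illustration: one shared parameter fits one cell exactly but
# plateaus at a nonzero NMSE over several inconsistent cells.
import numpy as np

def best_nmse(pred, meas):
    # Fit a single global offset (stand-in for a shared trainable
    # parameter) in closed form by least squares, then report the NMSE.
    offset = np.mean(meas - pred)
    return np.sum((pred + offset - meas) ** 2) / np.sum(meas ** 2)

# One CM cell: the offset matches it exactly, so the NMSE reaches zero.
one = best_nmse(np.array([-80.0]), np.array([-85.0]))

# Several cells with inconsistent residuals: the best achievable NMSE
# stays strictly above zero, which looks like a training plateau.
many = best_nmse(np.array([-80.0, -90.0, -85.0]),
                 np.array([-85.0, -89.0, -92.0]))
```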
I was wondering if you could provide any insights about my process.
I was very inspired by "Learning of Material Properties via Gradient Descent", and particularly by the closing discussion, which is what led me to work with coverage maps. I hope you can help me.
Thanks!