I'm confused by the code implementing the NLL method. The NLL method assumes that the errors follow a Gaussian distribution and that the variance is linearly correlated with d. However, in the code, the NLL is computed as `nll = -np.sum(stats.norm.logpdf(calib_y, loc=0, scale=s1 + calib_dist * s2))`, with the scale being `s1 + calib_dist * s2`.
If I understand it correctly, this means the standard deviation, not the variance, is linear in d: the scale corresponds to the standard deviation, which is the square root of the variance, so the implied variance is (s1 + calib_dist * s2)^2, which is quadratic in d. Does this conflict with the assumption that the variance is linearly correlated with d?
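To make the distinction concrete, here is a minimal sketch (with made-up parameter values and synthetic calibration data, not the repository's actual setup) comparing the two parameterizations: scale linear in d, as in the quoted snippet, versus scale equal to the square root of a variance that is linear in d:

```python
import numpy as np
from scipy import stats

# Synthetic calibration data for illustration only:
# residuals whose true variance is linear in the distance d.
rng = np.random.default_rng(0)
calib_dist = rng.uniform(0.1, 2.0, size=500)                 # d
calib_y = rng.normal(0.0, np.sqrt(0.2 + 0.5 * calib_dist))   # Var = 0.2 + 0.5*d

s1, s2 = 0.3, 0.4  # example parameter values (hypothetical)

# As in the quoted snippet: scale (std dev) linear in d,
# so the implied variance is (s1 + d*s2)**2 -- quadratic in d.
nll_std_linear = -np.sum(
    stats.norm.logpdf(calib_y, loc=0, scale=s1 + calib_dist * s2)
)

# Parameterization consistent with "variance linear in d":
# Var = s1 + d*s2, so scale = sqrt(s1 + d*s2).
nll_var_linear = -np.sum(
    stats.norm.logpdf(calib_y, loc=0, scale=np.sqrt(s1 + calib_dist * s2))
)

print(nll_std_linear, nll_var_linear)
```

The two objectives generally differ, so the fitted (s1, s2) would differ as well; the question is which of the two forms the method intends.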
Thank you and look forward to your answer!