reformulate lambda function #476
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff             @@
##             main     #476       +/-   ##
===========================================
- Coverage   87.90%   50.09%    -37.81%
===========================================
  Files          40       50       +10
  Lines        1745     3511     +1766
===========================================
+ Hits         1534     1759      +225
- Misses        211     1752     +1541
```

Flags with carried forward coverage won't be shown.
Okay, several things happened here. First, I needed to change `eps` from `np.finfo(np.float64).eps` to `np.finfo(np.float32).eps`:

```python
In [1]: import numpy as np

In [2]: lmbda = np.float64(1.9999999999999996)
   ...: value = -2.0
   ...: abs(lmbda - 2) <= np.finfo(np.float64).eps, abs(lmbda - 2) <= np.finfo(np.float32).eps
Out[2]: (False, True)

# what is happening when x < 0 and lambda is classified as != 2
In [3]: y = -(np.power(-value + 1., 2. - lmbda) - 1.) / (2. - lmbda)
   ...: y
Out[3]: -1.0

In [4]: 1 - np.power(
   ...:     -(2 - lmbda) * y + 1, 1 / (2 - lmbda)
   ...: )
Out[4]: -1.7182818284590446  # wrong

# right when lambda is classified as equal to 2
In [5]: y = -np.log1p(-value)
   ...: y
Out[5]: -1.0986122886681098

In [6]: 1 - np.exp(-y)
Out[6]: -2.0000000000000004  # right
```

Essentially, for a lambda that is close to two but not exactly two, the transform has a floating-point error big enough to mess up the inverse transform. I am not 100% sure what is happening, but it seems that some intermediate step amplifies the rounding error.
Now the question arises: why didn't we have that problem before? Well, before, we did not only bound the coefficients; the bounds also ensured that the slope of the lambda function was not too steep, so that even high yearly temperature anomalies would not push lambda towards the bounds. With the now unbounded fit, the slope is steeper more often and lambda actually runs towards the bounds quite often (for my very skewed dummy data at least; I still have to test it on real data). This makes me consider whether we should switch back to a linear function entirely or keep the bounds like before. Interestingly, the unbounded optimization also does not seem to be significantly faster than the bounded one at the moment. So I am not completely sure what to do here.
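To illustrate the saturation (a sketch using the logistic form from the PR description below; the parameter values are made up):

```python
import numpy as np

def lambda_function(xi_0_new, xi_1, x):
    # unbounded reformulation: lambda = 2 / (1 + exp(xi_0_new + xi_1 * x))
    return 2.0 / (1.0 + np.exp(xi_0_new + xi_1 * x))

x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])  # stand-in yearly temperature anomalies

# a gentle slope keeps lambda well inside (0, 2)
print(lambda_function(0.0, 0.3, x))  # ~ [1.42, 1.15, 1.0, 0.85, 0.58]

# a steep slope pushes lambda towards the bounds 0 and 2
print(lambda_function(0.0, 5.0, x))  # ~ [2.0, 1.99, 1.0, 0.013, 6.1e-07]
```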
Strange... so I am not sure at the moment if it is worth exploring this further.
Same. I'll convert it to a draft.
I had to re-add the changes, as they did not survive moving and cleaning the file. But anyway, this leads to several overflow errors (not really a problem for the optimization) and seems to slow down the tests. So I am still not sure it's a good idea...
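For reference, the overflow is the kind NumPy warns about when the now-unbounded exponent argument grows large during optimization (a minimal sketch, not the actual failing call):

```python
import numpy as np

with np.errstate(over="warn"):
    # np.exp overflows to inf for large arguments and emits a RuntimeWarning;
    # with an unbounded coefficient the optimizer can easily step into this region
    print(np.exp(np.float64(1000.0)))  # inf
```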
Here I try to implement a formulation of the lambda function that does not need bounds. We use a logistic function between 0 and 2, which currently looks like this:

$$\lambda = \frac{2}{1 + \xi_0 \, e^{\xi_1 x}}$$

For this function to actually be logistic, we need to bound $\xi_0 \in [0, \infty)$.

If we now reformulate this to:

$$\lambda = \frac{2}{1 + e^{\xi_{0_\text{new}} + \xi_1 x}}$$

then $\xi_{0_\text{new}} = \log(\xi_0)$ and we do not need bounds.
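A quick numerical check of the equivalence of the two parameterizations (a sketch; the function and variable names are mine, the formulas are the ones above):

```python
import numpy as np

def lambda_bounded(xi_0, xi_1, x):
    # original parameterization, requires xi_0 >= 0
    return 2.0 / (1.0 + xi_0 * np.exp(xi_1 * x))

def lambda_unbounded(xi_0_new, xi_1, x):
    # reformulated parameterization; xi_0_new = log(xi_0) is unbounded
    return 2.0 / (1.0 + np.exp(xi_0_new + xi_1 * x))

xi_0, xi_1 = 0.5, 0.3
x = np.linspace(-5.0, 5.0, 11)

# both parameterizations give the same lambda values
np.testing.assert_allclose(
    lambda_bounded(xi_0, xi_1, x),
    lambda_unbounded(np.log(xi_0), xi_1, x),
)
```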