
Fit white noise parameters #1639

Merged 51 commits on Oct 20, 2023
Changes from all commits (51 commits):
4e908ea lognorm (abhisrkckl, Sep 5, 2023)
cdaa97b noise_fitter (abhisrkckl, Sep 6, 2023)
da3f416 changelog (abhisrkckl, Sep 6, 2023)
32a63cd properties (abhisrkckl, Sep 6, 2023)
5a3b43f robust ephem and planets in make_fake_toas_fromtim (abhisrkckl, Sep 6, 2023)
d99f59e CHANGELOG (abhisrkckl, Sep 7, 2023)
c845926 fix fwhm method (abhisrkckl, Sep 7, 2023)
c7c1764 Parameter.positive (abhisrkckl, Sep 7, 2023)
646245e fix parameter (abhisrkckl, Sep 7, 2023)
9e1d044 fix parameter (abhisrkckl, Sep 7, 2023)
f44bbb4 -- (abhisrkckl, Sep 7, 2023)
7b284de -- (abhisrkckl, Sep 7, 2023)
bbab8a8 remove positive attr (abhisrkckl, Sep 7, 2023)
4e70de5 move noise fitter code to downhill fitter (abhisrkckl, Sep 7, 2023)
f3878c1 req (abhisrkckl, Sep 7, 2023)
e9ce6e3 tox (abhisrkckl, Sep 7, 2023)
90b1092 test_noisefit (abhisrkckl, Sep 7, 2023)
8185b51 docs (abhisrkckl, Sep 7, 2023)
35b3418 Merge branch 'master' into fit-white-noise (abhisrkckl, Sep 21, 2023)
a6ac9cc Merge branch 'master' into fit-white-noise (abhisrkckl, Sep 21, 2023)
735a691 Merge branch 'master' into fit-white-noise (abhisrkckl, Sep 25, 2023)
a4eea12 Merge branch 'nanograv:master' into fit-white-noise (abhisrkckl, Sep 26, 2023)
a16d8c0 test_ddgrfit_noMTOT (abhisrkckl, Sep 26, 2023)
a754626 docs (abhisrkckl, Oct 2, 2023)
d20d761 Merge branch 'master' into fit-white-noise (abhisrkckl, Oct 2, 2023)
5743db1 docs (abhisrkckl, Oct 2, 2023)
ef13ff8 d_toasigma_d_param (abhisrkckl, Oct 2, 2023)
219c427 changelog (abhisrkckl, Oct 2, 2023)
2dd7c3d test_white_noise_model_derivs (abhisrkckl, Oct 2, 2023)
9d15359 _like exception (abhisrkckl, Oct 2, 2023)
14f240e Residuals.lnlikelihood (abhisrkckl, Oct 2, 2023)
0844a0a akaike_information_criterion (abhisrkckl, Oct 2, 2023)
5adc843 fix import (abhisrkckl, Oct 2, 2023)
3621b1b normalizable distr (abhisrkckl, Oct 2, 2023)
2eab60e test_aic (abhisrkckl, Oct 2, 2023)
973e654 -- (abhisrkckl, Oct 2, 2023)
4e29370 -- (abhisrkckl, Oct 2, 2023)
b0eac06 akaike_information_criterion (abhisrkckl, Oct 2, 2023)
1ade57c doc (abhisrkckl, Oct 2, 2023)
e55a4b4 -- (abhisrkckl, Oct 4, 2023)
42f9197 _like not required in _fit_noise (abhisrkckl, Oct 4, 2023)
24243e0 Merge branch 'white_noise_derivs' into fit-wn-deriv (abhisrkckl, Oct 4, 2023)
cc39a62 _mloglike_grad (abhisrkckl, Oct 4, 2023)
10095dd noisefit_method (abhisrkckl, Oct 5, 2023)
344f611 correct norm (abhisrkckl, Oct 6, 2023)
f570cc9 faster (abhisrkckl, Oct 9, 2023)
5096a73 Merge branch 'master' into fit-white-noise (abhisrkckl, Oct 13, 2023)
2ee8dfb Merge branch 'master' into fit-white-noise (abhisrkckl, Oct 17, 2023)
a6b7dc1 include noise params in fittable_params (abhisrkckl, Oct 17, 2023)
8db19f7 Merge branch 'master' into fit-white-noise (abhisrkckl, Oct 19, 2023)
9b4739e snippet (abhisrkckl, Oct 20, 2023)
9 changes: 9 additions & 0 deletions CHANGELOG-unreleased.md
@@ -29,6 +29,14 @@ the released changes.
- Piecewise orbital model (`BinaryBTPiecewise`)
- `TimingModel.fittable_params` property
- Simulate correlated noise using `pint.simulation` (also available via the `zima` script)
- Optionally return the log normalization factor of the likelihood function from the `Residuals.calc_chi2()` method.
- `DownhillWLSFitter` can now estimate white noise parameters and their uncertainties.
- `Residuals.lnlikelihood()` method
- `pint.utils.akaike_information_criterion()` function
- `TimingModel.d_toasigma_d_param` method to compute derivatives of scaled TOA uncertainties w.r.t. white noise parameters.
- `TimingModel.toasigma_derivs` property to get all derivative functions of scaled TOA uncertainties.
- `ScaleToaError.register_toasigma_deriv_funcs` method to populate derivatives of scaled TOA uncertainties.
- `ScaleToaError.d_toasigma_d_EFAC` and `ScaleToaError.d_toasigma_d_EQUAD` methods.
### Fixed
- Wave model `validate()` can correctly use PEPOCH to assign WAVEEPOCH parameter
- Fixed RTD by specifying theme explicitly.
@@ -39,4 +47,5 @@ the released changes.
- `get_TOAs` raises an exception upon finding mixed narrowband and wideband TOAs in a tim file. `TOAs.is_wideband` returns True only if *ALL* TOAs have the -pp_dm flag.
- `TimingModel.designmatrix()` method will fail with an informative error message if there are free unfittable parameters in the timing model.
- `make_fake_toas_uniform` and `make_fake_toas_fromMJDs` respects units of errors
- Robust access of EPHEM and PLANET_SHAPIRO in `make_fake_toas_fromtim`
### Removed
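Among the additions above is `pint.utils.akaike_information_criterion()`. As a minimal standalone sketch (not PINT's implementation), the AIC combines the maximized log-likelihood with the number of free parameters, assuming the standard definition AIC = 2k - 2 ln(L):

```python
def aic(lnlikelihood: float, n_free_params: int) -> float:
    """Akaike information criterion: AIC = 2k - 2 ln(L).
    Lower values indicate a better balance of fit quality
    against model complexity."""
    return 2 * n_free_params - 2 * lnlikelihood

# Compare two hypothetical models: the second has one extra free
# parameter but only a marginally higher likelihood.
aic_simple = aic(lnlikelihood=-100.0, n_free_params=5)   # 210.0
aic_complex = aic(lnlikelihood=-99.5, n_free_params=6)   # 211.0
```

Here the extra parameter does not lower the AIC, so the simpler model would be preferred.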
1 change: 1 addition & 0 deletions requirements.txt
@@ -9,3 +9,4 @@ corner>=2.0.1
uncertainties
loguru
nestle>=0.2.0
numdifftools
184 changes: 153 additions & 31 deletions src/pint/fitter.py
@@ -67,6 +67,7 @@
import scipy.linalg
import scipy.optimize as opt
from loguru import logger as log
from numdifftools import Hessdiag

import pint
import pint.utils
@@ -1166,45 +1167,19 @@ def __init__(self, toas, model, track_mode=None, residuals=None):
)
self.method = "downhill_checked"

def fit_toas(
def _fit_toas(
self,
maxiter=20,
required_chi2_decrease=1e-2,
max_chi2_increase=1e-2,
min_lambda=1e-3,
debug=False,
):
"""Carry out a cautious downhill fit.

This tries to take the same steps as
:func:`pint.fitter.WLSFitter.fit_toas` or
:func:`pint.fitter.GLSFitter.fit_toas` or
:func:`pint.fitter.WidebandTOAFitter.fit_toas`. At each step, it
checks whether the new model has a better ``chi2`` than the current
one; if the new model is invalid or worse than the current one, it
tries taking a shorter step in the same direction. This can exit if it
exceeds the maximum number of iterations or if improvement is not
possible even with very short steps, or it can exit successfully if a
full-size step is taken and it does not decrease the ``chi2`` by much.
"""Downhill fit implementation for the timing model parameters.
The `fit_toas()` method calls this iteratively while also fitting
for white noise parameters.

The attribute ``self.converged`` is set to True or False depending on
whether the process actually converged.

Parameters
==========

maxiter : int
Abandon the process if this many successful steps have been taken.
required_chi2_decrease : float
A full-size step that makes less than this much improvement is taken
to indicate that the fitter has converged.
max_chi2_increase : float
If this is positive, consider taking steps that slightly worsen the chi2 in hopes
of eventually finding our way downhill.
min_lambda : float
If steps are shrunk by this factor and still don't result in improvement, abandon hope
of convergence and stop.
"""
See documentation of the `fit_toas()` method for more details."""
# setup
self.model.validate()
self.model.validate_toas(self.toas)
@@ -1213,6 +1188,7 @@ def fit_toas(
self.converged = False
# algorithm
exception = None

for i in range(maxiter):
step = current_state.step
lambda_ = 1
@@ -1266,6 +1242,7 @@
log.debug(
f"Stopping because maximum number of iterations ({maxiter}) reached"
)

self.current_state = best_state
# collect results
self.model = self.current_state.model
@@ -1277,6 +1254,7 @@
self.parameter_correlation_matrix = (
self.parameter_covariance_matrix.to_correlation_matrix()
)

for p, e in zip(self.current_state.params, self.errors):
try:
# I don't know why this fails with multiprocessing, but bypass if it does
@@ -1297,10 +1275,154 @@
raise MaxiterReached(f"Convergence not detected after {maxiter} steps.")
return self.converged

def fit_toas(
self,
maxiter=20,
noise_fit_niter=5,
required_chi2_decrease=1e-2,
max_chi2_increase=1e-2,
min_lambda=1e-3,
noisefit_method="Newton-CG",
debug=False,
):
"""Carry out a cautious downhill fit.

This tries to take the same steps as
:func:`pint.fitter.WLSFitter.fit_toas` or
:func:`pint.fitter.GLSFitter.fit_toas` or
:func:`pint.fitter.WidebandTOAFitter.fit_toas`. At each step, it
checks whether the new model has a better ``chi2`` than the current
one; if the new model is invalid or worse than the current one, it
tries taking a shorter step in the same direction. This can exit if it
exceeds the maximum number of iterations or if improvement is not
possible even with very short steps, or it can exit successfully if a
full-size step is taken and it does not decrease the ``chi2`` by much.

The attribute ``self.converged`` is set to True or False depending on
whether the process actually converged.

This function can also estimate white noise parameters (EFACs and EQUADs)
and their uncertainties.

If there are no free white noise parameters, this function will do one
iteration of the downhill fit (implemented in the `_fit_toas()` method).
If free white noise parameters are present, it will fit for them by numerically
maximizing the likelihood function (implemented in the `_fit_noise()` method).
The timing model fit and the noise model fit are run iteratively in an alternating
fashion. Fitting for a white noise parameter is as simple as::

fitter.model.EFAC1.frozen = False
fitter.fit_toas()


Parameters
==========

maxiter : int
Abandon the process if this many successful steps have been taken.
required_chi2_decrease : float
A full-size step that makes less than this much improvement is taken
to indicate that the fitter has converged.
max_chi2_increase : float
If this is positive, consider taking steps that slightly worsen the chi2 in hopes
of eventually finding our way downhill.
min_lambda : float
If steps are shrunk by this factor and still don't result in improvement, abandon hope
of convergence and stop.
noise_fit_niter : int
Number of iterations of the alternating timing-model and noise-model
fits performed when free white noise parameters are present.
noisefit_method : str
Algorithm used to fit for noise parameters. See the documentation of
`scipy.optimize.minimize()` for more details and available options.
"""
free_noise_params = self._get_free_noise_params()

if len(free_noise_params) == 0:
return self._fit_toas(
maxiter=maxiter,
required_chi2_decrease=required_chi2_decrease,
max_chi2_increase=max_chi2_increase,
min_lambda=min_lambda,
debug=debug,
)
else:
log.debug("Will fit for noise parameters.")
for _ in range(noise_fit_niter):
self._fit_toas(
maxiter=maxiter,
required_chi2_decrease=required_chi2_decrease,
max_chi2_increase=max_chi2_increase,
min_lambda=min_lambda,
debug=debug,
)
values, errors = self._fit_noise(noisefit_method=noisefit_method)
self._update_noise_params(values, errors)

return self._fit_toas(
maxiter=maxiter,
required_chi2_decrease=required_chi2_decrease,
max_chi2_increase=max_chi2_increase,
min_lambda=min_lambda,
debug=debug,
)

@property
def fac(self):
return self.current_state.fac

def _get_free_noise_params(self):
"""Returns a list of all free noise parameters."""
return [
fp
for fp in self.model.get_params_of_component_type("NoiseComponent")
if not getattr(self.model, fp).frozen
]

def _update_noise_params(self, values, errors):
"""Update the model using estimated noise parameters."""
free_noise_params = self._get_free_noise_params()
for fp, val, err in zip(free_noise_params, values, errors):
getattr(self.model, fp).value = val
getattr(self.model, fp).uncertainty_value = err

def _fit_noise(self, noisefit_method="Newton-CG"):
"""Estimate noise parameters and their uncertainties. Noise parameters
are estimated by numerically maximizing the log-likelihood function including
the normalization term. The uncertainties thereof are computed using the
numerically-evaluated Hessian."""
free_noise_params = self._get_free_noise_params()

xs0 = [getattr(self.model, fp).value for fp in free_noise_params]

model1 = copy.deepcopy(self.model)
res = Residuals(self.toas, model1)

def _mloglike(xs):
"""Negative of the log-likelihood function."""
for fp, x in zip(free_noise_params, xs):
getattr(res.model, fp).value = x

return -res.lnlikelihood()

def _mloglike_grad(xs):
"""Gradient of the negative of the log-likelihood function w.r.t. white noise parameters."""
for fp, x in zip(free_noise_params, xs):
getattr(res.model, fp).value = x

return np.array(
[
-res.d_lnlikelihood_d_whitenoise_param(par).value
for par in free_noise_params
]
)

maxlike_result = opt.minimize(
_mloglike, xs0, method=noisefit_method, jac=_mloglike_grad
)

hess = Hessdiag(_mloglike)
errs = np.sqrt(1 / hess(maxlike_result.x))

return maxlike_result.x, errs


class WLSState(ModelState):
def __init__(self, fitter, model, threshold=None):