Merge branch 'nanograv:master' into piecewise_orbital_model
poneill129 authored Sep 13, 2023
2 parents 03894b2 + 25f38ca commit 83ba865
Showing 104 changed files with 3,635 additions and 944 deletions.
4 changes: 3 additions & 1 deletion AUTHORS.rst
@@ -16,18 +16,20 @@ Contributors

Active developers are indicated by (*). Authors of the PINT paper are indicated by (#).

* Gabriella Agazie (*)
* Akash Anumarlapudi (*)
* Anne Archibald (#*)
* Matteo Bachetti (#)
* Bastian Beischer
* Deven Bhakta
* Deven Bhakta (*)
* Chloe Champagne (#)
* Jonathan Colen (#)
* Thankful Cromartie
* Christoph Deil
* Paul Demorest (#)
* Julia Deneva
* Justin Ellis
* William Fiore (*)
* Fabian Jankowski
* Rick Jenet (#)
* Ross Jennings (#*)
30 changes: 12 additions & 18 deletions CHANGELOG-unreleased.md
@@ -9,23 +9,17 @@ the released changes.

## Unreleased
### Changed
- Applied `sourcery` refactors to the entire codebase
- Changed threshold for `test_model_derivatives` test to avoid CI failures
- Unreleased CHANGELOG entries should now be entered in `CHANGELOG-unreleased.md` instead of `CHANGELOG.md`. Updated documentation accordingly.
- `WAVE` parameters can be added to a `Wave` model with `add_wave_component()` in `wave.py`
- Moved design matrix normalization code from `pint.fitter` to the new `pint.utils.normalize_designmatrix()` function.
- Made `Residuals` independent of `GLSFitter` (GLS chi2 is now computed using the new function `Residuals._calc_gls_chi2()`).
### Added
- `SpindownBase` as the abstract base class for `Spindown` and `PeriodSpindown` in the `How_to_build_a_timing_model_component.py` example.
- `SolarWindDispersionBase` as the abstract base class for solar wind dispersion components.
- `validate_component_types` method for more rigorous validation of timing model components.
- roundtrip test to make sure clock corrections are not written to tim files
- `calc_phase_mean` and `calc_time_mean` methods in `Residuals` class to compute the residual mean.
- `PhaseOffset` component (overall phase offset between physical and TZR toas)
- `tzr` attribute in `TOAs` class to identify TZR TOAs
- Documentation: Explanation for offsets
- Example: `phase_offset_example.py`
- method `AllComponents.param_to_unit` to get units for any parameter, and then made function `utils.get_unit`
- can override/add parameter values when reading models
- docs now include list of observatories along with google maps links and clock files
- Added `WaveX` model as a `DelayComponent` with Fourier amplitudes as fitted parameters
### Fixed
- fixed docstring for `add_param_from_top`
- Gridded calculations now respect logger settings
### Removed
- Wave model `validate()` can correctly use PEPOCH to assign WAVEEPOCH parameter
- Fixed RTD by specifying theme explicitly.
- `.value()` now works for pairParameters
- Setting `model.PARAM1 = model.PARAM2` no longer overrides the name of `PARAM1`
- Fixed an incorrect docstring in `pbprime()` functions.
- Fix ICRS -> ECL conversion when parameter uncertainties are not set.
- `get_TOAs` raises an exception upon finding mixed narrowband and wideband TOAs in a tim file. `TOAs.is_wideband` returns True only if *ALL* TOAs have the -pp_dm flag.
### Removed
51 changes: 51 additions & 0 deletions CHANGELOG.md
@@ -7,6 +7,57 @@ and this project, at least loosely, adheres to [Semantic Versioning](https://sem
This file contains the released changes to the codebase. See CHANGELOG-unreleased.md for
the unreleased changes. This file should only be changed while tagging a new version.

## [0.9.7] 2023-08-24
### Changed
- Third-order Roemer delay terms to ELL1 model
- Made the addition of a TZR TOA (`AbsPhase`) in the `TimingModel` explicit in `Residuals` class.
- Updated `CONTRIBUTING.rst` with the latest information.
- Made `TimingModel.params` and `TimingModel.ordered_params` identical. Deprecated `TimingModel.ordered_params`.
### Added
- Third-order Roemer delay terms to ELL1 model
- Options to add a TZR TOA (`AbsPhase`) during the creation of a `TimingModel` using `ModelBuilder.__call__`, `get_model`, and `get_model_and_toas`
- `pint.print_info()` function for bug reporting
- Added an autocorrelation function to check for chain convergence in `event_optimize`
- A hacky implementation of system-dependent FD parameters (FDJUMP)
- Minor doc updates to explain default NHARMS and missing derivative functions
### Fixed
- Deleting JUMP1 from flag tables will not prevent fitting
- Simulating TOAs from tim file when PLANET_SHAPIRO is true now works
- Docstrings for `get_toas()` and `get_model_and_toas()`
- Set `DelayComponent_list` and `NoiseComponent_list` to empty list if such components are absent
- Fix invalid access of `PLANET_SHAPIRO` in models without `Astrometry`
### Removed


## [0.9.6] 2023-06-22
### Changed
- Applied `sourcery` refactors to the entire codebase
- Changed threshold for `test_model_derivatives` test to avoid CI failures
- Unreleased CHANGELOG entries should now be entered in `CHANGELOG-unreleased.md` instead of `CHANGELOG.md`. Updated documentation accordingly.
- Changed tests to remove `unittest` and use pure pytest format
- Changed deprecated `sampler.chain` usage
- Download data automatically in the profiling script `high_level_benchmark.py` instead of silently giving wrong results.
### Added
- `SpindownBase` as the abstract base class for `Spindown` and `PeriodSpindown` in the `How_to_build_a_timing_model_component.py` example.
- `SolarWindDispersionBase` as the abstract base class for solar wind dispersion components.
- `validate_component_types` method for more rigorous validation of timing model components.
- roundtrip test to make sure clock corrections are not written to tim files
- `calc_phase_mean` and `calc_time_mean` methods in `Residuals` class to compute the residual mean.
- `PhaseOffset` component (overall phase offset between physical and TZR toas)
- `tzr` attribute in `TOAs` class to identify TZR TOAs
- Documentation: Explanation for offsets
- Example: `phase_offset_example.py`
- method `AllComponents.param_to_unit` to get units for any parameter, and then made function `utils.get_unit`
- can override/add parameter values when reading models
- docs now include list of observatories along with google maps links and clock files
### Fixed
- fixed docstring for `add_param_from_top`
- Gridded calculations now respect logger settings
- Event TOAs now have a default error that is non-zero, and it can be set as desired
- Model conversion ICRS <-> ECL works if PM uncertainties are not set
- Fix `merge_TOAs()` to allow lists of length 1
### Removed

## [0.9.5] 2023-05-01
### Changed
- Changed minimum supported version of `scipy` to 1.4.1
50 changes: 36 additions & 14 deletions CONTRIBUTING.rst
@@ -20,8 +20,9 @@ Report bugs at https://github.com/nanograv/pint/issues.

If you are reporting a bug, please include:

* Your operating system name and version.
* The output of ``pint.__version__`` and ``pint.__file__``
* The output of ``pint.print_info()``. This command provides the version information of
the OS, Python, PINT, and the various dependencies along with other information about
your system.
* Any details about your local setup that might be helpful in troubleshooting,
such as the command used to install PINT and whether you are using a virtualenv,
conda environment, etc.
@@ -76,22 +77,37 @@ to write good documentation, you come to understand the code very well.
Get Started!
------------

Ready to contribute? Here's how to set up PINT for local development.
Ready to contribute? Here's how to set up `PINT` for local development.

1. Fork_ the ``pint`` repo on GitHub.
1. Fork_ the `PINT` repo on GitHub.
2. Clone your fork locally::

$ git clone git@github.com:your_name_here/pint.git

3. Install your local copy into a virtualenv. Assuming you have
virtualenvwrapper installed, this is how you set up your fork for local
3. Install your local copy into a `conda`_ environment. Assuming you have
`conda` installed, this is how you set up your fork for local
development::

$ mkvirtualenv pint
$ conda create -n pint-devel python=3.10
$ conda activate pint-devel
$ cd PINT/
$ conda install -c conda-forge --file requirements_dev.txt
$ conda install -c conda-forge --file requirements.txt
$ pip install -e .
$ pre-commit install
The last command installs pre-commit hooks, which will squawk at you when you
try to commit changes that don't adhere to our `Coding Style`_.

Alternatively, this can also be done using `virtualenv`. Assuming you have
`virtualenvwrapper` installed, this is how you set up your fork for local
development::

$ mkvirtualenv pint-devel
$ cd PINT/
$ pip install -r requirements_dev.txt
$ pip install -r requirements.txt
$ pip install -e .
$ pre-commit install

4. Create a branch for local development::

@@ -109,13 +125,13 @@ Ready to contribute? Here's how to set up PINT for local development.
6. Commit your changes and push your branch to GitHub::

$ git add .
$ git commit -m "Your detailed description of your changes."
$ git commit -m "Detailed description of your changes."
$ git push origin name-of-your-bugfix-or-feature

7. Submit a pull request through the GitHub website.

8. Check that our automatic testing "Travis CI" passes your code. If
problems crop up, fix them, commit the changes, and push a new version,
8. Check that our automatic testing in "GitHub Actions" passes for your code.
If problems crop up, fix them, commit the changes, and push a new version,
which will automatically update the pull request::

$ git add pint/file-i-just-fixed.py
@@ -127,22 +143,28 @@ Ready to contribute? Here's how to set up PINT for local development.
functional changes. If accepted, it will be merged into the master branch.

.. _Fork: https://help.github.com/en/articles/fork-a-repo
.. _`conda`: https://docs.conda.io/

Pull Request Guidelines
-----------------------

Before you submit a pull request, check that it meets these guidelines:

1. Try to write clear :ref:`pythonic` code, follow our :ref:`CodingStyle`, and think
1. Try to write clear `Pythonic`_ code, follow our `Coding Style`_, and think
about how others might use your new code.
2. The pull request should include tests that cover both the expected
behavior and sensible error reporting when given bad input.
3. If the pull request adds or changes functionality, the docs should
be updated. Put your new functionality into a function with a
docstring. Check the HTML documentation produced by ``make docs``
to make sure your new documentation appears and looks reasonable.
If the new functionality needs a more detailed explanation than can be
put in a docstring, add it to ``docs/explanation.rst``. Make sure that
the docstring contains a brief description as well.
4. The pull request should work for Python 3.8+. Make sure that all the
CI tests for the pull request pass.
5. Update `CHANGELOG-unreleased.md` with an appropriate entry. Please note
that `CHANGELOG.md` should not be updated for pull requests.
5. Update ``CHANGELOG-unreleased.md`` with an appropriate entry. Please note
that ``CHANGELOG.md`` should not be updated for pull requests.

.. _`Pythonic`: https://peps.python.org/pep-0008/
.. _`Coding Style`: https://nanograv-pint.readthedocs.io/en/latest/coding-style.html
6 changes: 4 additions & 2 deletions docs/conf.py
@@ -182,8 +182,10 @@ def setup(app):
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
on_rtd = os.environ.get("READTHEDOCS", None) == "True"
if not on_rtd: # only import and set the theme if we're building docs locally
html_theme = "sphinx_rtd_theme"
# if not on_rtd: # only import and set the theme if we're building docs locally
# html_theme = "sphinx_rtd_theme"
# it seems that we need to specify it anyway
html_theme = "sphinx_rtd_theme"

# Theme options are theme-specific and customize the look and feel of a
# theme further. For a list of options available for each theme, see the
5 changes: 3 additions & 2 deletions docs/examples/MCMC_walkthrough.broken
@@ -127,7 +127,8 @@ To make this run relatively fast for demonstration purposes, nsteps was purposef

```python
fitter.phaseogram()
samples = sampler.sampler.chain[:, 10:, :].reshape((-1, fitter.n_fit_params))
# get_chain() returns an array of shape (nsteps, nwalkers, ndim); transpose to the
# (nwalkers, nsteps, ndim) layout that the deprecated .chain attribute used
samples = np.transpose(sampler.sampler.get_chain(discard=10), (1, 0, 2)).reshape(
    (-1, fitter.n_fit_params))
ranges = map(
lambda v: (v[1], v[2] - v[1], v[1] - v[0]),
zip(*np.percentile(samples, [16, 50, 84], axis=0)),
@@ -192,7 +193,7 @@ fitter2.fit_toas(maxiter=nsteps2, pos=None)
```

```python
samples2 = sampler2.sampler.chain[:, :, :].reshape((-1, fitter2.n_fit_params))
samples2 = np.transpose(sampler2.sampler.get_chain(), (1, 0, 2)).reshape((-1, fitter2.n_fit_params))
ranges2 = map(
lambda v: (v[1], v[2] - v[1], v[1] - v[0]),
zip(*np.percentile(samples2, [16, 50, 84], axis=0)),
5 changes: 3 additions & 2 deletions docs/examples/fit_NGC6440E_MCMC.py
@@ -87,8 +87,9 @@ def plot_chains(chain_dict, file=False):
plot_chains(chains, file=f"{f.model.PSR.value}_chains.png")

# triangle plot
# this doesn't include burn-in because we're not using it here, otherwise would have middle ':' --> 'burnin:'
samples = sampler.sampler.chain[:, :, :].reshape((-1, f.n_fit_params))
# this doesn't include burn-in because we're not using it here, otherwise set get_chain(discard=burnin)
# samples = sampler.sampler.chain[:, :, :].reshape((-1, f.n_fit_params))
samples = np.transpose(sampler.get_chain(), (1, 0, 2)).reshape((-1, ndim))
with contextlib.suppress(ImportError):
import corner

13 changes: 5 additions & 8 deletions docs/examples/understanding_fitters.py
@@ -144,7 +144,7 @@
# straightforward but the full covariance matrix may be enormous.
# If False, an algorithm is used that takes advantage of the structure
# of the covariance matrix, based on information provided by the noise
# model. The two algorithms should give the same result to numerical
# model. The two algorithms should give the same result up to numerical
# accuracy where they both can be applied.
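
# %%
# A minimal sketch (not part of the original script) of comparing the two code
# paths directly. The `full_cov` keyword of the GLS fitters selects between them;
# `t` and `m` here stand in for whatever TOAs and model were used to build `glsfit`:
# fit_dense = pint.fitter.GLSFitter(toas=t, model=m)
# fit_dense.fit_toas(maxiter=1, full_cov=True)    # dense covariance-matrix algorithm
# fit_sparse = pint.fitter.GLSFitter(toas=t, model=m)
# fit_sparse.fit_toas(maxiter=1, full_cov=False)  # structured-covariance algorithm
# print(fit_dense.resids.chi2, fit_sparse.resids.chi2)  # should agree to numerical accuracy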

# %% [markdown]
@@ -178,11 +178,6 @@
# %%
glsfit.fit_toas(maxiter=1)

# %%
# Not sure how to do this properly yet.
# glsfit2 = pint.fitter.GLSFitter(toas=t, model=glsfit.model, residuals=glsfit.resids)
# glsfit2.fit_toas(maxiter=0)

# %%
glsfit.print_summary()

@@ -206,7 +201,8 @@
# %% [markdown]
# ## Choosing fitters
#
# You can use the automatic fitter selection to help you choose between `WLSFitter`, `GLSFitter`, and their wideband variants. The default `Downhill` fitters generally have better performance than the plain variants.
# You can use the automatic fitter selection to help you choose between `WLSFitter`, `GLSFitter`, and their wideband variants.
# The default `Downhill` fitters generally have better performance than the plain variants.

# %%
autofit = pint.fitter.Fitter.auto(toas=ts1855, model=m1855)
@@ -221,4 +217,5 @@
# The results are (thankfully) identical.

# %% [markdown]
# The MCMC fitter is considerably more complicated, so it has its own dedicated walkthroughs in `MCMC_walkthrough.ipynb` (for photon data) and `examples/fit_NGC6440E_MCMC.py` (for fitting TOAs).
# The MCMC fitter is considerably more complicated, so it has its own dedicated walkthroughs in `MCMC_walkthrough.ipynb`
# (for photon data) and `examples/fit_NGC6440E_MCMC.py` (for fitting TOAs).
20 changes: 14 additions & 6 deletions docs/explanation.rst
@@ -16,8 +16,6 @@ you will find PINT sufficient for all your needs!
.. _TEMPO: http://tempo.sourceforge.net/
.. _TEMPO2: https://www.atnf.csiro.au/research/pulsar/tempo2/

.. sectnum::

Time
----

@@ -718,13 +716,16 @@ model and data require different calculations - narrowband (TOA-only) versus
wideband (TOA and DM measurements) and uncorrelated errors versus correlated
errors.

The TEMPO/TEMPO2 and default PINT fitting algorithms (:class:`pint.fitter.WidebandTOAFitter` for example), leaving aside the rank-reduced case, proceed like:
The TEMPO/TEMPO2 and default PINT fitting algorithms (:class:`pint.fitter.WidebandTOAFitter`, for example),
leaving aside the rank-reduced case, proceed like:

1. Evaluate the model and its derivatives at the starting point :math:`x`, producing a set of residuals :math:`\delta y` and a Jacobian `M`.
2. Compute :math:`\delta x` to minimize :math:`\left| M\delta x - \delta y \right|_C`, where :math:`\left| \cdot \right|_C` is the squared amplitude of a vector with respect to the data uncertainties/covariance :math:`C`.
3. Update the starting point by :math:`\delta x`.

TEMPO and TEMPO2 can check whether the predicted improvement of chi-squared, assuming the linear model is correct, is enough to warrant continuing; if so, they jump back to step 1 unless the maximum number of iterations is reached. PINT does not contain this check.
TEMPO and TEMPO2 can check whether the predicted improvement of chi-squared, assuming
the linear model is correct, is enough to warrant continuing; if so, they jump back to
step 1 unless the maximum number of iterations is reached. PINT does not contain this check.
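
In code, a single iteration of this scheme might look roughly like the
following sketch (illustrative only, using a diagonal covariance built from the
TOA uncertainties; PINT's actual fitters handle units, correlated noise, and
much more)::

    import numpy as np

    def gauss_newton_step(x, residuals_func, jacobian_func, sigma):
        """One weighted Gauss-Newton step: minimize |M dx - dy| and update x."""
        dy = residuals_func(x)      # residuals at the current parameter vector
        M = jacobian_func(x)        # design matrix (Jacobian) at x
        w = 1.0 / sigma             # whiten rows by the TOA uncertainties
        dx, *_ = np.linalg.lstsq(M * w[:, None], dy * w, rcond=None)
        return x + dx               # step 3: update the starting point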

This algorithm is the Gauss-Newton_algorithm_ for solving nonlinear
least-squares problems, and even in one-complex-dimensional cases can exhibit
@@ -746,9 +747,16 @@ PINT contains a slightly more sophisticated algorithm, implemented in
4. Evaluate the model at the starting point plus :math:`\lambda \delta x`. If this is invalid or worse than the starting point, divide :math:`\lambda` by two and repeat this step. If :math:`\lambda` is too small, accept the best point seen to date and exit without convergence.
5. If the model improved but only slightly with :math:`\lambda=1`, exit with convergence. If the maximum number of iterations was reached, exit without convergence. Otherwise update the starting point and return to step 1.

This ensures that PINT tries taking smaller steps if problems arise, and claims convergence only if a normal step worked. It does not solve the problems that arise if some parameters are nearly degenerate, enough to cause problems with the numerical linear algebra.
This ensures that PINT tries taking smaller steps if problems arise, and claims convergence
only if a normal step worked. It does not solve the problems that arise if some parameters are
nearly degenerate, enough to cause problems with the numerical linear algebra.
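
A schematic version of this step-halving logic (a sketch only, not the actual
code of PINT's downhill fitters) is::

    import numpy as np

    def take_downhill_step(x, dx, chi2, min_lambda=1e-3):
        """Try x + lambda*dx, halving lambda until chi2 improves or lambda gets too small."""
        best_chi2 = chi2(x)
        lam = 1.0
        while lam > min_lambda:
            trial = x + lam * dx
            try:
                trial_chi2 = chi2(trial)
            except ValueError:         # invalid trial model
                trial_chi2 = np.inf
            if trial_chi2 < best_chi2:
                return trial, True     # accept the (possibly reduced) step
            lam /= 2.0                 # step made things worse; try a smaller one
        return x, False                # lambda became too small; no convergence claimed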

As a rule, this kind of problem is addressed with the Levenberg-Marquardt algorithm, which operates on the same principle of taking reduced steps when the derivative appears not to match the function, but does so in a way that also reduces issues with degenerate parameters; unfortunately it is not clear how to adapt this problem to the rank-reduced case. Nevertheless PINT contains an implementation, in :class:`pint.fitter.WidebandLMFitter`, but it does not perform as well as one might hope in practice and must be considered experimental.
As a rule, this kind of problem is addressed with the Levenberg-Marquardt algorithm, which
operates on the same principle of taking reduced steps when the derivative appears not to
match the function, but does so in a way that also reduces issues with degenerate parameters;
unfortunately it is not clear how to adapt this algorithm to the rank-reduced case. Nevertheless,
PINT contains an implementation in :class:`pint.fitter.WidebandLMFitter`, but it does not perform as
well as one might hope in practice and must be considered experimental.
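
For comparison, the core of a Levenberg-Marquardt step on the same weighted
linear problem adds a damping term to the normal equations (a generic sketch,
not the implementation in :class:`pint.fitter.WidebandLMFitter`)::

    import numpy as np

    def lm_step(M, dy, sigma, lam):
        """Damped (Marquardt) step: solve (A + lam*diag(A)) dx = M^T W dy."""
        W = np.diag(1.0 / sigma**2)               # diagonal weight matrix
        A = M.T @ W @ M                           # normal-equation matrix
        A_damped = A + lam * np.diag(np.diag(A))  # damp the diagonal
        return np.linalg.solve(A_damped, M.T @ W @ dy)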

.. _Gauss-Newton_algorithm: https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm
.. _chaotic: https://en.wikipedia.org/wiki/Newton_fractal
2 changes: 2 additions & 0 deletions profiling/.gitignore
@@ -0,0 +1,2 @@
J0740+6620.cfr+19.tim
bench_*_summary
11 changes: 10 additions & 1 deletion profiling/high_level_benchmark.py
@@ -16,6 +16,7 @@
import sys
import os
import platform
import urllib.request
from prfparser import parse_file


@@ -67,11 +68,19 @@ def get_results(script, outfile):
parser = argparse.ArgumentParser(
description="High-level summary of python file timing."
)

if not os.path.isfile("J0740+6620.cfr+19.tim"):
print("Downloading data file J0740+6620.cfr+19.tim ...")
urllib.request.urlretrieve(
"https://data.nanograv.org/static/data/J0740+6620.cfr+19.tim",
"J0740+6620.cfr+19.tim",
)

script1 = "bench_load_TOAs.py"
script2 = "bench_chisq_grid.py"
script3 = "bench_chisq_grid_WLSFitter.py"
script4 = "bench_MCMC.py"

script1 = "bench_load_TOAs.py"
# time scripts
output1 = bench_file(script1)
output2 = bench_file(script2)