Euclid requisites -- CAMB & CLASS (#222)
* classy: sigma8(z) support, and test (for CAMB too)

* camb: Omega_{b,cdm,nu_massive,m}

* PowerSpectrumInterpolator: warning for __call__ method

* PowerSpectrumInterpolator: added extrap_kmin and some error control

* BoltzmannBase: remove Omega_b and abstracted and documented the other ones

* PowerSpectrumInterpolator: test robustness

* BoltzmannBase+CAMB+classy: abstracted how z's etc are combined

* PowerSpectrumInterpolator: more robust bounds check

* CAMB/CLASS: get_z_dependent abstracted and more robust

* cosmo:get_z_dependent: fixed corner case

* boltzmanncode: adaptive tolerance for choice from 1d list

* camb: abstracted Pool1D for retrieving values [skip travis]

* classy: abstracted Pool1D for retrieving values

* tools: value pools abstracted to N-d

* boltzmann: fixes for 2D pool and starting with angular_diameter_distance_2

* linting

* camb: angular_diameter_distance_2 working with CAMB:master

* camb: angular_diameter_distance_2 returns 0 for z1>= z2

* added --minimize flag to cobaya-run

* class: update to v3.1.1

* boltzmann: Pk_interpolator doc

* trivial typos

* fixes

* Pool1D: try fast search first

* PoolXD: fast check for Pool2D and tests for pools

* classy: Omega_X and sigma8 (and tests for CAMB too)

* classy: ang_diam_dist_2, and some code merging

* classy: fsigma8

* classy: Weyl Pkz

* classy: new in 3.0: halofit|hmcode_min_k_max --> nonlinear_min_k_max

* classy: sigma(R,z) interfaced and test (CAMB too) [skip travis]

* boltzmann: better error msg when recovering z-pair-dependent [skip travis]

* typo

* CHANGELOG, get_sigma_R return order, and other small stuff
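One bullet above notes that `angular_diameter_distance_2` now returns 0 for z1 >= z2. The toy function below only illustrates that convention in a flat ΛCDM background; the cosmology, parameter values, and integration scheme are simplified assumptions, not Cobaya's or CAMB's implementation:

```python
import numpy as np

def angular_diameter_distance_2(z1, z2, H0=70.0, Om=0.3):
    """Toy flat-LCDM angular diameter distance between redshifts z1 < z2.

    Follows the convention adopted in this commit: return 0 for z1 >= z2.
    """
    if z1 >= z2:
        return 0.0
    c_km_s = 299792.458
    # Comoving distance between z1 and z2 (trapezoidal rule, no extra deps)
    z_grid = np.linspace(z1, z2, 4097)
    integrand = 1.0 / np.sqrt(Om * (1.0 + z_grid) ** 3 + (1.0 - Om))
    chi = np.sum((integrand[1:] + integrand[:-1]) / 2) * (z_grid[1] - z_grid[0])
    return (c_km_s / H0) * chi / (1.0 + z2)
```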

Co-authored-by: Antony Lewis <[email protected]>
JesusTorrado and cmbant authored Feb 22, 2022
1 parent fb0e5b9 commit d26e833
Showing 33 changed files with 1,181 additions and 225 deletions.
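Several bullets above describe abstracting "value pools" (`Pool1D`, later N-d) for retrieving precomputed quantities, trying a fast search first with an adaptive tolerance. The helper below is only an illustrative sketch of that idea — the name and tolerance logic are hypothetical, not the actual `cobaya.tools` code: match requested values (e.g. redshifts) against a sorted pool via binary search.

```python
import numpy as np

def find_in_pool(pool, values, rtol=1e-5):
    """Locate each requested value in a sorted 1D pool of precomputed
    values (assumed to have at least two entries), within a tolerance."""
    pool = np.sort(np.atleast_1d(pool))
    values = np.atleast_1d(values)
    # Fast path: binary search gives the two nearest candidates in O(log n)
    idx = np.clip(np.searchsorted(pool, values), 1, len(pool) - 1)
    # Pick whichever neighbour is closer to the requested value
    idx -= (pool[idx] - values) > (values - pool[idx - 1])
    if not np.allclose(pool[idx], values, rtol=rtol, atol=0):
        raise ValueError("Requested values not found in the pool.")
    return idx
```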
1 change: 1 addition & 0 deletions .gitignore
@@ -21,6 +21,7 @@ docs/src_examples/*/*.png
# Tests
.pytest_cache
tests/.ipynb_checkpoints
.coverage*

# IDE-specific
.idea
10 changes: 10 additions & 0 deletions CHANGELOG.md
@@ -2,12 +2,22 @@

- Documented uses of `Model` class in general contexts (previously only cosmo)
- `Model` methods to compute log-probabilities and derived parameters now have an `as_dict` keyword (default `False`), for more informative return value.
- Added ``--minimize`` flag to ``cobaya-run`` for quick minimization (replaces sampler, uses previous output).

### Cosmological likelihoods and theory codes

- `Pk_interpolator`: added extrapolation up to `extrap_kmin` and improved robustness

#### CAMB

- Removed problematic `zrei: zre` alias (fixes #199, thanks @pcampeti)
- Added `Omega_b|cdm|nu_massive(z)` and `angular_diameter_distance_2`
- Returned values for `get_sigma_R` changed from `R, z, sigma(z, R)` to `z, R, sigma(z, R)`.

#### CLASS

- Updated to v3.1.2
- Added `Omega_b|cdm|nu_massive(z)`, `angular_diameter_distance_2`, `sigmaR(z)`, `sigma8(z)`, `fsigma8(z)` and Weyl potential power spectrum.

#### BAO

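The `Pk_interpolator` entry above adds extrapolation down to `extrap_kmin`. Cobaya's `PowerSpectrumInterpolator` works on a 2D (z, k) grid; the 1D sketch below, with hypothetical names, only illustrates the underlying idea — log-log (power-law) extension of a tabulated P(k) using the end slopes of the table:

```python
import numpy as np

def pk_with_powerlaw_extrap(k_tab, pk_tab):
    """Return a P(k) callable that follows the tabulated values inside
    [k_tab[0], k_tab[-1]] and extends as a power law outside."""
    logk, logp = np.log(k_tab), np.log(pk_tab)
    # Effective spectral slopes at the low- and high-k ends of the table
    s_lo = (logp[1] - logp[0]) / (logk[1] - logk[0])
    s_hi = (logp[-1] - logp[-2]) / (logk[-1] - logk[-2])

    def pk(k):
        lk = np.log(np.atleast_1d(k))
        lp = np.interp(lk, logk, logp)  # clamps outside the range...
        # ...so overwrite the clamped values with the power-law extensions
        lp = np.where(lk < logk[0], logp[0] + s_lo * (lk - logk[0]), lp)
        lp = np.where(lk > logk[-1], logp[-1] + s_hi * (lk - logk[-1]), lp)
        return np.exp(lp)

    return pk
```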
2 changes: 1 addition & 1 deletion cobaya/cosmo_input/input_database.py
@@ -399,7 +399,7 @@

# EXPERIMENTS ############################################################################
base_precision: InfoDict = {"camb": {"halofit_version": "mead"},
"classy": {"non linear": "hmcode", "hmcode_min_k_max": 20}}
"classy": {"non linear": "hmcode", "nonlinear_min_k_max": 20}}
cmb_precision = deepcopy(base_precision)
cmb_precision["camb"].update({"bbn_predictor": "PArthENoPE_880.2_standard.dat",
"lens_potential_accuracy": 1})
6 changes: 3 additions & 3 deletions cobaya/input.py
@@ -168,10 +168,10 @@ def get_info_path(folder, prefix, infix=None, kind="updated", ext=Extension.yaml

def get_used_components(*infos, return_infos=False):
"""
Returns all requested components as an dict ``{kind: set([components])}``.
Returns all requested components as a dict ``{kind: set([components])}``.
Priors are not included.
If ``return_infos=True`` (default: ``False``), returns too a dictionary of inputs per
If ``return_infos=True`` (default: ``False``), also returns a dictionary of inputs per
component, updated in the order in which the info arguments are given.
Components which are just renames of others (i.e. defined with `class_name`) return
@@ -640,7 +640,7 @@ def get_class_path(cls) -> str:
def get_file_base_name(cls) -> str:
"""
Gets the string used as the name for .yaml, .bib files, typically the
class name or a un-CamelCased class name
class name or an un-CamelCased class name
"""
return cls.__dict__.get('file_base_name') or cls.__name__

4 changes: 2 additions & 2 deletions cobaya/likelihoods/base_classes/des.py
@@ -567,10 +567,10 @@ def chi_squared(self, theory, return_theory_vector=False):

def logp(self, **params_values):
PKdelta = self.provider.get_Pk_interpolator(("delta_tot", "delta_tot"),
extrap_kmax=500 * self.acc)
extrap_kmax=3000 * self.acc)
if self.use_Weyl:
PKWeyl = self.provider.get_Pk_interpolator(("Weyl", "Weyl"),
extrap_kmax=500 * self.acc)
extrap_kmax=3000 * self.acc)
else:
PKWeyl = None

4 changes: 2 additions & 2 deletions cobaya/likelihoods/base_classes/sn.py
@@ -19,13 +19,13 @@
.. note::
- If you use ``sn.pantheon``, please cite:|br|
Scolnic, D. M. et al,
Scolnic, D. M. et al.,
`The Complete Light-curve Sample of Spectroscopically
Confirmed Type Ia Supernovae from Pan-STARRS1 and
Cosmological Constraints from The Combined Pantheon Sample`
`(arXiv:1710.00845) <https://arxiv.org/abs/1710.00845>`_
- If you use ``sn.jla`` or ``sn.jla_lite``, please cite:|br|
Betoule, M. et al,
Betoule, M. et al.,
`Improved cosmological constraints from a joint analysis
of the SDSS-II and SNLS supernova samples`
`(arXiv:1401.4064) <https://arxiv.org/abs/1401.4064>`_
2 changes: 1 addition & 1 deletion cobaya/log.py
@@ -136,7 +136,7 @@ def logger_setup(debug=None, debug_file=None):
"""
Configuring the root logger, for its children to inherit level, format and handlers.
Level: if debug=True, take DEBUG. If numerical, use "logging"'s corresponding level.
Level: if debug=True, take DEBUG. If numerical, use ``logging``'s corresponding level.
Default: INFO
"""
if debug is True or os.getenv('COBAYA_DEBUG'):
2 changes: 1 addition & 1 deletion cobaya/model.py
@@ -586,7 +586,7 @@ def get_valid_point(self, max_tries: int, ignore_fixed_ref: bool = False,
) -> Union[Tuple[np.ndarray, LogPosterior],
Tuple[np.ndarray, dict]]:
"""
Finds a point with finite posterior, sampled from from the reference pdf.
Finds a point with finite posterior, sampled from the reference pdf.
It will fail if no valid point is found after `max_tries`.
2 changes: 1 addition & 1 deletion cobaya/mpi.py
@@ -173,7 +173,7 @@ def allgather(data) -> list:

def zip_gather(list_of_data, root=0) -> Iterable[tuple]:
"""
Takes a list of items and returns a iterable of lists of items from each process
Takes a list of items and returns an iterable of lists of items from each process
e.g. for root node
[(a_1, a_2),(b_1,b_2),...] = zip_gather([a,b,...])
"""
2 changes: 1 addition & 1 deletion cobaya/output.py
@@ -369,7 +369,7 @@ def check_and_dump_info(self, input_info, updated_info, check_compatible=True,
"%s:%s, but you are trying to resume a "
"run that used a newer version: %r.",
new_version, k, c, old_version)
# If resuming, we don't want to to *partial* dumps
# If resuming, we don't want to do *partial* dumps
if ignore_blocks and self.is_resuming():
return
# Work on a copy of the input info, since we are updating the prefix
2 changes: 1 addition & 1 deletion cobaya/parameterization.py
@@ -44,7 +44,7 @@ def is_derived_param(info_param: ParamInput) -> bool:

def expand_info_param(info_param: ParamInput, default_derived=True) -> ParamDict:
"""
Expands the info of a parameter, from the user friendly, shorter format
Expands the info of a parameter, from the user-friendly, shorter format
to a more unambiguous one.
"""
info_param = deepcopy_where_possible(info_param)
10 changes: 5 additions & 5 deletions cobaya/prior.py
@@ -50,7 +50,7 @@
as they are understood for each particular pdf in :class:`scipy.stats`; e.g. for a
``uniform`` pdf, ``loc`` is the lower bound and ``scale`` is the length of the domain,
whereas in a gaussian (``norm``) ``loc`` is the mean and ``scale`` is the standard
deviation).
deviation.
+ Additional specific parameters of the distribution, e.g. ``a`` and ``b`` as the powers
of a Beta pdf.
@@ -112,8 +112,8 @@
custom priors "`external` priors".
Inside the ``prior`` block, list a pair of priors as ``[name]: [function]``, where the
functions must return **log**-priors. This priors will be multiplied by the
one-dimensional ones defined above. Even if you define an prior for some parameters
functions must return **log**-priors. These priors will be multiplied by the
one-dimensional ones defined above. Even if you define a prior for some parameters
in the ``prior`` block, you still have to specify their bounds in the ``params`` block.
A prior function can be specified in two different ways:
@@ -144,7 +144,7 @@
External priors can only be functions **sampled** and **fixed**
and **derived** parameters that are dynamically defined in terms of other inputs.
Derived parameters computed by the theory code cannot be used in in a prior, since
Derived parameters computed by the theory code cannot be used in a prior, since
otherwise the full prior could not be computed **before** the likelihood,
preventing us from avoiding computing the likelihood when the prior is null, or
forcing a *post-call* to the prior.
@@ -183,7 +183,7 @@
Defining parameters dynamically
-------------------------------
We may want to sample in a parameter space different than the one understood by the
We may want to sample in a parameter space different from the one understood by the
likelihood, e.g. because we expect the posterior to be simpler on the alternative
parameters.
15 changes: 14 additions & 1 deletion cobaya/run.py
@@ -36,6 +36,7 @@ def run(info_or_yaml_or_file: Union[InputDict, str, os.PathLike],
debug: Union[bool, int, None] = None,
stop_at_error: Optional[bool] = None,
resume: bool = False, force: bool = False,
minimize: Optional[bool] = None,
no_mpi: bool = False, test: bool = False,
override: Optional[InputDict] = None,
) -> Union[InfoSamplerTuple, PostTuple]:
@@ -50,6 +51,7 @@ def run(info_or_yaml_or_file: Union[InputDict, str, os.PathLike],
:param stop_at_error: stop if an error is raised
:param resume: continue an existing run
:param force: overwrite existing output if it exists
:param minimize: if true, ignores the sampler and runs default minimizer
:param no_mpi: run without MPI
:param test: only test initialization rather than actually running
:param override: option dictionary to merge into the input one, overriding settings
@@ -59,7 +61,7 @@
"""

# This function reproduces the model-->output-->sampler pipeline one would follow
# when instantiating by hand, but alters the order to performs checks and dump info
# when instantiating by hand, but alters the order to perform checks and dump info
# as early as possible, e.g. to check if resuming possible or `force` needed.
if no_mpi or test:
mpi.set_mpi_disabled()
@@ -77,6 +79,9 @@
info["resume"] = bool(resume)
info["force"] = bool(force)
if info.get("post"):
if minimize:
raise ValueError(
"``minimize`` option is incompatible with post-processing.")
if isinstance(output, str) or output is False:
info["post"]["output"] = output or None
return post(info)
@@ -94,6 +99,11 @@
# GetDist needs to know the original sampler, so don't overwrite if minimizer
try:
which_sampler = list(info["sampler"])[0]
if minimize:
# Preserve options if "minimize" was already the sampler
if which_sampler.lower() != "minimize":
info["sampler"] = {"minimize": None}
which_sampler = "minimize"
except (KeyError, TypeError):
raise LoggedError(
logger_run, "You need to specify a sampler using the 'sampler' key "
@@ -192,6 +202,9 @@ def run_script(args=None):
"(use with care!)")
parser.add_argument("--%s" % "test", action="store_true",
help="Initialize model and sampler, and exit.")
parser.add_argument("--minimize", action="store_true",
help=("Replaces the sampler in the input and runs a minimization "
"process (incompatible with post-processing)."))
parser.add_argument("--version", action="version", version=get_version())
parser.add_argument("--no-mpi", action='store_true',
help="disable MPI when mpi4py installed but MPI does "
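The `--minimize` flag added above swaps the configured sampler for the minimizer, as the `info["sampler"] = {"minimize": None}` assignment shows. The same effect can be requested directly in the input file; an illustrative fragment (the empty block picks up the minimizer's defaults):

```yaml
sampler:
  minimize:
```

Or, from the shell, re-run the original command with the new flag: `cobaya-run input.yaml --minimize`.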
2 changes: 1 addition & 1 deletion cobaya/samplers/mcmc/proposal.py
@@ -255,7 +255,7 @@ def set_covariance(self, propose_matrix):
"""
Take covariance of sampled parameters (propose_matrix), and construct orthonormal
parameters where orthonormal parameters are grouped in blocks by speed, so changes
in slowest block changes slow and fast parameters, but changes in the fastest
in the slowest block changes slow and fast parameters, but changes in the fastest
block only changes fast parameters
:param propose_matrix: covariance matrix for the sampled parameters.
50 changes: 31 additions & 19 deletions cobaya/samplers/minimize/minimize.py
@@ -6,12 +6,10 @@
This is a **maximizer** for posteriors or likelihoods, based on
`scipy.optimize.Minimize <https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html>`_
and `Py-BOBYQA <https://numericalalgorithmsgroup.github.io/pybobyqa/build/html/index.html>`_
(added in 2.0).
and `Py-BOBYQA <https://numericalalgorithmsgroup.github.io/pybobyqa/build/html/index.html>`_.
.. note::
BOBYQA tends to work better on Cosmological problems with the default settings.
The default is BOBYQA, which tends to work better on Cosmological problems with default
settings.
.. |br| raw:: html
@@ -36,24 +34,30 @@
**If you use scipy**, you can find `the appropriate references here
<https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html>`_.
It works more effectively when run on top of a Monte Carlo sample: just change the sampler
for ``Minimize`` with the desired options, and it will use as a starting point the
*maximum a posteriori* (MAP) or best fit (maximum likelihood, or minimal :math:`\chi^2`)
found so far, as well as the covariance matrix of the sample for rescaling of the
parameter jumps.
It works more effectively when run on top of a Monte Carlo sample: it will use the maximum
a posteriori as a starting point (or the best fit, depending on whether the prior is
ignored, :ref:`see below <minimize_like>`), and the recovered covariance matrix of the
posterior to rescale the variables.
To take advantage of a previous run with a Monte Carlo sampler, either:
- change the ``sampler`` to ``minimize`` in the input file,
- or, if running from the shell, repeat the ``cobaya-run`` command used for the original
run, adding the ``--minimize`` flag.
As text output, it produces two different files:
When called from a Python script, Cobaya's ``run`` function returns the updated info
and the products described below in the method
:func:`samplers.minimize.Minimize.products` (see below).
If text output is requested, it produces two different files:
- ``[output prefix].minimum.txt``, in
:ref:`the same format as Cobaya samples <output_format>`,
but containing a single line.
- ``[output prefix].minimum``, the equivalent **GetDist-formatted** file.
If ``ignore_prior: True``, those files are named ``.bestfit[.txt]`` instead of ``minimum``,
and contain the best-fit (maximum of the likelihood) instead of the MAP
(maximum of the posterior).
.. warning::
For historical reasons, in the first two lines of the GetDist-formatted output file
@@ -62,10 +66,6 @@
:math:`-2` times the sum of the individual :math:`\chi^2` (``chi2__``, with double
underscore) in the table that follows these first lines.
When called from a Python script, Cobaya's ``run`` function returns the updated info
and the products described below in the method
:func:`products <samplers.Minimize.Minimize.products>`.
It is recommended to run a couple of parallel MPI processes:
it will finally pick the best among the results.
@@ -77,6 +77,18 @@
want to refine the convergence parameters (see ``override`` options in the ``yaml``
below).
.. _minimize_like:
Maximizing the likelihood instead of the posterior
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To maximize the likelihood, add ``ignore_prior: True`` in the ``minimize`` input block.
When producing text output, the generated files are named ``.bestfit[.txt]`` instead of
``minimum``, and contain the best-fit (maximum of the likelihood) instead of the MAP
(maximum of the posterior).
"""

# Global
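The docstring above mentions `ignore_prior: True` for maximizing the likelihood rather than the posterior; as an input fragment that reads (illustrative):

```yaml
sampler:
  minimize:
    ignore_prior: True  # output files named .bestfit[.txt] instead of .minimum
```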
2 changes: 2 additions & 0 deletions cobaya/samplers/polychord/polychord.py
@@ -52,6 +52,8 @@ class polychord(Sampler):
oversample_power: float
nlive: NumberWithUnits
path: str
logzero: float
max_ndead: int

def initialize(self):
"""Imports the PolyChord sampler and prepares its arguments."""