diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
index 7ffaaf4c..a7399590 100644
--- a/.github/pull_request_template.md
+++ b/.github/pull_request_template.md
@@ -7,46 +7,28 @@ Replace this text with a description of the pull request including the issue num
This is an itemised checklist for the QA process within UKHSA and represents the bare minimum a QA should be.
-Full instructions on reviewing work can be found at Confluence on the [ONS QA of code guidance](https://best-practice-and-impact.github.io/qa-of-code-guidance/intro.html) page.
-
**To the reviewer:** Check the boxes once you have completed the checks below.
-- [ ] It runs
- - Can you get the code run to completion in a new instance and from top to bottom in the case of notebooks, or in a new R session?
- - Can original analysis results be accurately & easily reproduced from the code?
-- [] tests pass
-- [] CI is successful
+- [ ] CI is successful
+  - Did the test suite run successfully?
+
This is a basic form of Smoke Testing
- [ ] Data and security
- - Use nbstripout to prevent Jupyter notebook output being committed to git repositories
- - Files containing individual user's secret files and config files are not in repo, however examples of these files and setup instructions are included in the repo.
- - Secrets include s3 bucket names, login credentials, and organisation information. These can be handled using secrets.yml
- - If you are unsure whether an item should be secret please discuss with repo owner
- - The changes do not include unreleased policy or official information.
+  - Files containing individual users' secrets and config files are not in the repo.
+ - No private or identifiable data has been added to the repo.
+
- [ ] Sensible
- - Does the code execute the task accurately? This is a subjective challenge.
+ - Does the code execute the task accurately?
+ - Is the code tidy, commented and parsimonious?
- Does the code do what the comments and readme say it does\*?
- - Is the code robust enough to handle missing or challenging data?
+ - Is the code covered by useful unit tests?
+
- [ ] Documentation
- - The purpose of the code is clearly defined, whether in a markdown chunk at the top of a notebook or in a README
- - Assumptions of the analysis & input data are clearly displayed to the reader, whether in a markdown chunk at the top of a notebook or in a README
+  - Is the purpose of the code clearly defined?
+  - If reasonable, has an example of the code's use been given in a notebook in the docs?
- Comments are included in the code so the reader can follow why the code behaves in the way it does
- - Teams with high quality documentation are better able to implement technical practices more readily and perform better as a whole (DORA, 2021).
- - Is the code written in a standard way? (In a hurry this may be a nice to have, but if at all possible, this standard from the beginning can cut long term costs dramatically)
- - Code is modular, storing functions & classes in the src and being imported into a notebook or script
- - Projects should be based on the UKHSA repo template developed to work with cookiecutter
- - Variable, function & module names should be intuitive to the reader
- - For example, intuitive names include df_geo_lookup & non-intuitive names include foobar
- - Common and useful checks for coding we use broadly across UKHSA include:
- - Rstyler
- - lintr
- - black
- - flake8
-- [ ] Pair coding review completed (optional, but highly recommended for QA in a hurry)
- - Pair programming is a way of working and reviewing that can result in the same standard of work being completed 40%-50% faster (Williams et al., 2000, Nosek, 1998) and is better than solo programming for tasks involving efficient knowledge transferring and for working on highly connected systems (Dawande et al., 2008).
- - Have the assignee and reviewer been on a video call or in person together during the code development in a line by line writing and review process?
-
-\* If the comments or readme do not have enough information, this check fails.
+ - Is the code written in a standard way (does it pass linting)?
+  - Are variable, function & module names intuitive to the reader?
## How to QA this PR
diff --git a/.github/workflows/book.yml b/.github/workflows/book.yml
index 7e9a3eca..2440f8c7 100644
--- a/.github/workflows/book.yml
+++ b/.github/workflows/book.yml
@@ -30,23 +30,21 @@ jobs:
uses: ts-graphviz/setup-graphviz@v1
# install python
- - name: Set up Python 3.8
+ - name: Set up Python 3.10
uses: actions/setup-python@v4
with:
- python-version: 3.8
+ python-version: '3.10'
- # install dependencies
- - name: Install dependencies
+ # set up pygom
+ - name: Build and install pygom
run: |
python -m pip install --upgrade pip
- pip install -r requirements.txt
- pip install -r docs/requirements.txt
+ pip install .
- # set up pygom
- - name: Build and install pygom
+ # install dependencies
+ - name: Install documentation dependencies
run: |
- python setup.py build
- python setup.py install
+ pip install -r docs/requirements.txt
# build the book
# TODO check which flags are needed, -W
@@ -56,8 +54,16 @@ jobs:
# deploy book to github-pages
- name: GitHub Pages
- uses: peaceiris/actions-gh-pages@v3.6.1
+ uses: peaceiris/actions-gh-pages@v4
+ if: github.ref == 'refs/heads/main'
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: docs/_build/html
-
+ # deploy book to github-pages dev
+ - name: GitHub Pages dev
+ uses: peaceiris/actions-gh-pages@v4
+ if: github.ref == 'refs/heads/dev'
+ with:
+ github_token: ${{ secrets.GITHUB_TOKEN }}
+ publish_dir: docs/_build/html
+ destination_dir: dev
diff --git a/.github/workflows/distribute_package.yml b/.github/workflows/distribute_package.yml
new file mode 100644
index 00000000..7fef265c
--- /dev/null
+++ b/.github/workflows/distribute_package.yml
@@ -0,0 +1,156 @@
+name: create PyGOM distributions
+
+on:
+ push:
+ branches:
+ - master
+ - dev
+ - feature/*
+ - bugfix/*
+      - bugfix/*
+    # tag pushes must also be matched so the PyPI publish step below can run on release tags
+    tags:
+      - 'v*'
+ pull_request:
+ branches:
+ - master
+ - dev
+
+jobs:
+ build_wheels:
+ name: Build wheels on ${{ matrix.platform_id }} for Python v${{ matrix.python[1] }}
+ runs-on: ${{ matrix.os }}
+ strategy:
+ # Ensure that a wheel builder finishes even if another fails
+ fail-fast: false
+ matrix:
+ include:
+        # Windows 64 bit
+ - os: windows-latest
+ python: [cp39, "3.9"]
+ platform_id: win_amd64
+ - os: windows-latest
+ python: [cp310, "3.10"]
+ platform_id: win_amd64
+ - os: windows-latest
+ python: [cp311, "3.11"]
+ platform_id: win_amd64
+ - os: windows-latest
+ python: [cp312, "3.12"]
+ platform_id: win_amd64
+
+ # Python 3.9 in the manylinux build environment requires our dependencies to be
+ # built from source so we won't supply a wheel for 3.9 (source build will prevent lib
+ # version conflicts).
+
+ # NumPy on Python 3.10 only supports 64bit and is only available with manylinux2014
+ - os: ubuntu-latest
+ python: [cp310, "3.10"]
+ platform_id: manylinux_x86_64
+ manylinux_image: manylinux2014
+ - os: ubuntu-latest
+ python: [cp311, "3.11"]
+ platform_id: manylinux_x86_64
+ manylinux_image: manylinux2014
+ - os: ubuntu-latest
+ python: [cp312, "3.12"]
+ platform_id: manylinux_x86_64
+ manylinux_image: manylinux2014
+
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ # We need quite a deep fetch so that we get the versioning right
+ fetch-depth: 500
+ fetch-tags: true
+
+ - name: Install dependencies
+ run: |
+ python -m pip install --upgrade pip
+ pip install -r requirements.txt
+
+ # Used to host cibuildwheel
+ - uses: actions/setup-python@v5
+ with:
+ python-version: ${{ matrix.python[1] }}
+
+ - name: Install cibuildwheel
+ run: python -m pip install cibuildwheel>=2.19.2
+
+      # Need to duplicate these two steps as it seems Windows and Linux echo treat quotes differently
+ - name: Build and test the wheels
+ if: matrix.os == 'ubuntu-latest'
+ run: python -m cibuildwheel --output-dir wheelhouse
+ env:
+ CIBW_BUILD: ${{ matrix.python[0] }}-${{ matrix.platform_id }}*
+ CIBW_TEST_COMMAND: python -W default -m unittest discover --start-directory {project}/tests
+ # setuptools_scm workaround for https://github.com/pypa/setuptools_scm/issues/455
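+        # On dev builds, this appends local_scheme = "no-local-version" to pyproject.toml (so the built
+        # version has no local suffix, which TestPyPI would reject) and then hides that edit from git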
+ CIBW_BEFORE_BUILD: ${{ github.ref == 'refs/heads/dev' && 'echo ''local_scheme = "no-local-version"'' >> pyproject.toml && git diff --color=always && git update-index --assume-unchanged pyproject.toml' || '' }}
+ - name: Build and test the wheels
+ if: matrix.os == 'windows-latest'
+ run: python -m cibuildwheel --output-dir wheelhouse
+ env:
+ CIBW_BUILD: ${{ matrix.python[0] }}-${{ matrix.platform_id }}*
+ CIBW_TEST_COMMAND: python -W default -m unittest discover --start-directory {project}/tests
+ # setuptools_scm workaround for https://github.com/pypa/setuptools_scm/issues/455
+ CIBW_BEFORE_BUILD: ${{ github.ref == 'refs/heads/dev' && 'echo local_scheme = "no-local-version" >> pyproject.toml && git diff --color=always && git update-index --assume-unchanged pyproject.toml' || '' }}
+
+ # Upload the results
+ - uses: actions/upload-artifact@v4
+ with:
+ name: cibw-wheels-${{ matrix.platform_id }}-${{ matrix.python[0] }}
+ path: ./wheelhouse/*.whl
+
+ build_sdist:
+ name: Build source distribution
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ # We need quite a deep fetch so that we get the versioning right
+ fetch-depth: 500
+ fetch-tags: true
+
+ # setuptools_scm workaround for https://github.com/pypa/setuptools_scm/issues/455
+ - name: Disable local version identifier on develop CI
+ if: github.ref == 'refs/heads/dev'
+ run: |
+ echo 'local_scheme = "no-local-version"' >> pyproject.toml
+ git diff --color=always
+ git update-index --assume-unchanged pyproject.toml
+
+ - name: Build sdist
+ run: pipx run build --sdist
+
+ - uses: actions/upload-artifact@v4
+ with:
+ name: cibw-sdist
+ path: dist/*.tar.gz
+
+ upload_pypi:
+ name: Upload release to PyPI
+ needs: [build_wheels, build_sdist]
+ runs-on: ubuntu-latest
+ environment:
+ name: pypi
+ permissions:
+ id-token: write
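+      # 'id-token: write' enables OIDC trusted publishing to PyPI, so no API token is required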
+ steps:
+ - uses: actions/download-artifact@v4
+ with:
+ # unpacks all CIBW artifacts into dist/
+ pattern: cibw-*
+ path: dist
+ merge-multiple: true
+
+ # This is the live push to PyPI on tagging the master branch
+ - uses: pypa/gh-action-pypi-publish@release/v1
+ # Upload to PyPI on every tag starting with 'v'
+ if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v')
+ with:
+          repository-url: https://upload.pypi.org/legacy/
+
+ # Upload to PyPI *testing* every dev branch commit
+ - uses: pypa/gh-action-pypi-publish@release/v1
+ # Upload to testing only if we are on the dev branch
+ if: github.ref == 'refs/heads/dev'
+ with:
+ # Testing only at this point
+ repository-url: https://test.pypi.org/legacy/
\ No newline at end of file
diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml
deleted file mode 100644
index 420d7ffd..00000000
--- a/.github/workflows/main.yml
+++ /dev/null
@@ -1,91 +0,0 @@
-name: pygom
-
-on:
- push:
- branches:
- - master
- - dev
- - feature/*
- - bugfix/*
-
- pull_request:
- branches:
- - master
- - dev
-
-env:
- ACTIONS_ALLOW_UNSECURE_COMMANDS: true
-
-jobs:
- build:
- runs-on: ${{ matrix.os }}
- strategy:
- matrix:
- os: [ubuntu-latest, windows-latest] # macos-latest
- python-version: ["3.7", "3.8", "3.9", "3.10", "3.11"]
-
- steps:
- - uses: actions/checkout@v2
-
- - uses: actions/cache@v1
- if: startsWith(runner.os, 'Linux')
- with:
- path: ~/.cache/pip
- key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
- restore-keys: |
- ${{ runner.os }}-pip-
- - uses: actions/cache@v1
- if: startsWith(runner.os, 'macOS')
- with:
- path: ~/Library/Caches/pip
- key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
- restore-keys: |
- ${{ runner.os }}-pip-
- - uses: actions/cache@v1
- if: startsWith(runner.os, 'Windows')
- with:
- path: ~\AppData\Local\pip\Cache
- key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
- restore-keys: |
- ${{ runner.os }}-pip-
- - name: Set up Python
- uses: actions/setup-python@v1
- with:
- python-version: ${{ matrix.python-version }}
-
- - name: Check python version
- run: python -c "import sys; print(sys.version)"
-
- - name: RC.exe for Windows
- if: startsWith(runner.os, 'Windows')
- run: |
- function Invoke-VSDevEnvironment {
- $vswhere = "${env:ProgramFiles(x86)}\Microsoft Visual Studio\Installer\vswhere.exe"
- $installationPath = & $vswhere -prerelease -legacy -latest -property installationPath
- $Command = Join-Path $installationPath "Common7\Tools\vsdevcmd.bat"
- & "${env:COMSPEC}" /s /c "`"$Command`" -no_logo && set" | Foreach-Object {
- if ($_ -match '^([^=]+)=(.*)') {
- [System.Environment]::SetEnvironmentVariable($matches[1], $matches[2])
- }
- }
- }
- Invoke-VSDevEnvironment
- Get-Command rc.exe | Format-Table -AutoSize
- echo "::add-path::$(Get-Command rc.exe | Split-Path)"
-
- - name: Install dependencies
- run: |
- python -m pip install --upgrade pip
- pip install -r requirements.txt
-
- - name: Build and install package for c files
- run: |
- python setup.py build
- python setup.py install
- - name: Fix matplotlib backend for MacOS
- if: startsWith(runner.os, 'macOS')
- run: |
- mkdir ~/.matplotlib
- echo "backend: TkAgg" >> ~/.matplotlib/matplotlibrc
- - name: Test and coverage
- run: python setup.py test
diff --git a/.github/workflows/test_package.yml b/.github/workflows/test_package.yml
new file mode 100644
index 00000000..500f8913
--- /dev/null
+++ b/.github/workflows/test_package.yml
@@ -0,0 +1,71 @@
+name: test PyGOM package
+
+on:
+ push:
+ branches:
+ - master
+ - dev
+ - feature/*
+ - bugfix/*
+
+ pull_request:
+ branches:
+ - master
+ - dev
+
+jobs:
+ test:
+ runs-on: ${{ matrix.os }}
+ strategy:
+ matrix:
+ os: [ubuntu-latest, windows-latest] #, macos-13, macos-14]
+ python-version: ["3.9", "3.10", "3.11"]
+
+ steps:
+ - uses: actions/checkout@v4
+
+ - uses: actions/cache@v4
+ if: startsWith(runner.os, 'Linux')
+ with:
+ path: ~/.cache/pip
+ key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
+ restore-keys: |
+ ${{ runner.os }}-pip-
+ - uses: actions/cache@v4
+ if: startsWith(runner.os, 'macOS')
+ with:
+ path: ~/Library/Caches/pip
+ key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
+ restore-keys: |
+ ${{ runner.os }}-pip-
+ - uses: actions/cache@v4
+ if: startsWith(runner.os, 'Windows')
+ with:
+ path: ~\AppData\Local\pip\Cache
+ key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
+ restore-keys: |
+ ${{ runner.os }}-pip-
+ - name: Set up Python
+ uses: actions/setup-python@v5
+ with:
+ python-version: ${{ matrix.python-version }}
+
+ - name: Check python version
+ run: python -c "import sys; print(sys.version)"
+
+      - name: Upgrade pip
+ run: |
+ python -m pip install --upgrade pip
+
+ - name: Install coverage
+ run: |
+ python -m pip install codecov
+
+      - name: Install PyGOM
+ run: |
+ pip install .
+
+ - name: Run tests
+ run: coverage run -m unittest discover --verbose --start-directory tests
+
+
\ No newline at end of file
diff --git a/MANIFEST.in b/MANIFEST.in
deleted file mode 100644
index 80f66d3c..00000000
--- a/MANIFEST.in
+++ /dev/null
@@ -1 +0,0 @@
-include data/*.json
diff --git a/README.md b/README.md
new file mode 100644
index 00000000..7ac5ee83
--- /dev/null
+++ b/README.md
@@ -0,0 +1,120 @@
+# PyGOM - Python Generic ODE Model
+
+[![pypi version](https://img.shields.io/pypi/v/pygom.svg)](https://pypi.python.org/pypi/pygom)
+[![licence](https://img.shields.io/pypi/l/pygom?color=green)](https://raw.githubusercontent.com/ukhsa-collaboration/pygom/master/LICENSE.txt)
+[![Github actions](https://github.com/ukhsa-collaboration/pygom/workflows/pygom/badge.svg)](https://github.com/ukhsa-collaboration/pygom/actions/)
+[![Jupyter Book Badge](https://jupyterbook.org/badge.svg)](http://ukhsa-collaboration.github.io/pygom/md/intro.html)
+
+A generic framework for Ordinary Differential Equation (ODE) models, especially compartmental type systems.
+This package provides a simple interface for constructing ODE models, backed by a comprehensive and easy-to-use toolbox of functions for common operations such as parameter estimation and solving for deterministic or stochastic time evolution.
+With both the algebraic and numeric calculations performed automatically (but still accessible),
+the end user is free to focus on model development.
+Full documentation for this package is available on the [documentation](http://ukhsa-collaboration.github.io/pygom/md/intro.html) page.
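+
+As a quick flavour of the interface, here is a minimal sketch using the bundled `common_models.SIR` model (the parameter and initial values are purely illustrative, adapted from the package examples):
+
+    import numpy
+    from pygom import common_models
+
+    # SIR model with illustrative transmission (beta) and recovery (gamma) rates
+    ode = common_models.SIR({'beta': 3.6, 'gamma': 0.2})
+    t = numpy.linspace(0, 730, 1001)                 # time points to solve over
+    ode.initial_values = ([0.065, 1e-6, 0.0], t[0])  # fractions of the population in S, I, R
+    solution = ode.integrate(t[1::])                 # deterministic time course
+    ode.plot()                                       # quick visual check of the epidemic curves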
+
+## Installation
+The easiest way to install a copy of PyGOM is via PyPI and pip
+
+ pip install pygom
+
+Alternatively, you can download a local copy of the PyGOM source files from this GitHub repository:
+
+ git clone https://github.com/ukhsa-collaboration/pygom.git
+
+Please be aware that there may be redundant files within the package as it is under active development.
+
+> [!NOTE]
+> The latest fully reviewed version of PyGOM will be on the `master` branch and we generally recommend
+> that users install this version. However, the latest version being prepared for release is hosted on
+> the `dev` branch.
+
+When running the following command-line instructions, ensure that your current working directory is the one
+to which the PyGOM source files were downloaded. From your home directory, this is typically:
+
+ cd pygom
+
+Check out the branch you wish to install via Git Bash. For example, if you want the latest
+version being prepared for release then this is the `dev` branch:
+
+ git checkout dev
+
+Package dependencies are listed in the file `requirements.txt`.
+An easy way to install them is to create a new [conda](https://conda.io/docs) environment in Anaconda Prompt via:
+
+ conda env create -f conda-env.yml
+
+which you should ensure is active for the installation process using
+
+ conda activate pygom
+
+Alternatively, you may add dependencies to your own environment through conda:
+
+ conda install --file requirements.txt
+
+**or** via pip:
+
+ pip install -r requirements.txt
+
+Finally, if you are working on a Windows machine, you will also need to install:
+- [Graphviz](https://graphviz.org/)
+- Microsoft Visual C++ 14.0 or greater, which you can get with [Microsoft C++ Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/)
+
+You should now be able to install the PyGOM package via command line:
+
+ pip install .
+
+and test that the installation has completed successfully:
+
+ python -m unittest discover --verbose --start-directory tests
+
+This will run a few test cases and can take some minutes to complete.
+
+## Documentation
+
+Documentation must be built locally and all necessary files can be found in the `docs` folder.
+Documentation is built from the command line by first installing the additional documentation requirements:
+
+ pip install -r docs/requirements.txt
+
+and then building the documentation:
+
+ jupyter-book build docs
+
+The html files will be saved in the local copy of your repository under:
+
+ docs/_build/html
+
+You can view the documentation by opening the index file in your browser of choice:
+
+ docs/_build/html/index.html
+
+> [!NOTE]
+> Building the documentation involves running many examples in Python, which can take up to 30 minutes. Subsequent builds with these examples unchanged are much quicker due to caching of the code outputs.
+
+Please be aware that if the module tests fail, then the documentation for the package will not build.
+
+## Contributors
+
+Thomas Finnie (Thomas.Finnie@ukhsa.gov.uk)
+
+Edwin Tye
+
+Hannah Williams
+
+Jonty Carruthers
+
+Martin Grunnill
+
+Joseph Gibson
+
+## Version
+0.1.8 Updated and much better documentation.
+
+0.1.7 Add Approximate Bayesian Computation (ABC) as a method of fitting to data
+
+0.1.6 Bugfix scipy API, pickling, print to logging and simulation
+
+0.1.5 Remove auto-simplification for much faster startup
+
+0.1.4 Much faster Tau leap for stochastic simulations
+
+0.1.3 Defaults to python built-in unittest and more in sync with conda
diff --git a/README.rst b/README.rst
deleted file mode 100644
index b78b4bd3..00000000
--- a/README.rst
+++ /dev/null
@@ -1,87 +0,0 @@
-===============================
-pygom - ODE modelling in Python
-===============================
-
-|Build status| |Github actions| |Documentation Status| |pypi version| |licence| |Jupyter Book Badge|
-
-.. |pypi version| image:: https://img.shields.io/pypi/v/pygom.svg
- :target: https://pypi.python.org/pypi/pygom
-.. |Documentation Status| image:: https://readthedocs.org/projects/pygom/badge/?version=master
- :target: https://pygom.readthedocs.io/en/master/?badge=master
-.. |licence| image:: https://img.shields.io/pypi/l/pygom?color=green :alt: PyPI - License
- :target: https://raw.githubusercontent.com/PublicHealthEngland/pygom/master/LICENSE.txt
-.. |Github actions| image:: https://github.com/PublicHealthEngland/pygom/workflows/pygom/badge.svg
- :target: https://github.com/PublicHealthEngland/pygom/actions/
-.. |Jupyter Book Badge| image:: https://jupyterbook.org/badge.svg
- :target: https://hwilliams-phe.github.io/pygom/intro.html
-
-A generic framework for ode models, specifically compartmental type problems.
-
-This package depends on::
-
- dask
- matplotlib
- enum34
- pandas
- python-dateutil
- numpy
- scipy
- sympy
-
-and they should be installed if not already available. Alternatively, the easier way
-to use a minimal (and isolated) setup is to use `conda `_ and
-create a new environment via::
-
- conda env create -f conda-env.yml
-
-Installation of this package can be performed via::
-
-$ python setup.py install
-
-and tested via::
-
-$ python setup.py test
-
-A reduced form of the documentation may be found on ReadTheDocs_.
-
-.. _ReadTheDocs: https://pygom.readthedocs.io/en/master/
-
-You may get the full documentation, including the lengthy examples by locally
-building the documentation found in the folder::
-
-$ doc
-
-Note that building the documentation can be extremely slow depending on the
-setup of the system. Further details can be found at it's own read me::
-
-$ doc/README.rst
-
-Please be aware that if the module tests fails, then the documentation for the
-package will not compile.
-
-Please be aware that there may be redundant files within the package as it is
-under active development.
-
-Contributors
-============
-Thomas Finnie (Thomas.Finnie@phe.gov.uk)
-
-Edwin Tye
-
-Hannah Williams
-
-Jonty Carruthers
-
-Martin Grunnill
-
-Version
-=======
-0.1.7 Add Approximate Bayesian Computation (ABC) as a method of fitting to data
-
-0.1.6 Bugfix scipy API, pickling, print to logging and simulation
-
-0.1.5 Remove auto-simplification for much faster startup
-
-0.1.4 Much faster Tau leap for stochastic simulations
-
-0.1.3 Defaults to python built-in unittest and more in sync with conda
diff --git a/conda-env.yml b/conda-env.yml
index 8c4d3582..359ab31a 100644
--- a/conda-env.yml
+++ b/conda-env.yml
@@ -11,5 +11,5 @@ dependencies:
- graphviz
- cython
- pip
- -pip:
+ - pip:
- -r requirements.txt
diff --git a/doc/Makefile b/doc/Makefile
deleted file mode 100644
index 41dbaa98..00000000
--- a/doc/Makefile
+++ /dev/null
@@ -1,89 +0,0 @@
-# Makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line.
-SPHINXOPTS =
-SPHINXBUILD = sphinx-build
-PAPER =
-BUILDDIR = _build
-
-# Internal variables.
-PAPEROPT_a4 = -D latex_paper_size=a4
-PAPEROPT_letter = -D latex_paper_size=letter
-ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
-
-.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest
-
-help:
- @echo "Please use \`make ' where is one of"
- @echo " html to make standalone HTML files"
- @echo " dirhtml to make HTML files named index.html in directories"
- @echo " pickle to make pickle files"
- @echo " json to make JSON files"
- @echo " htmlhelp to make HTML files and a HTML help project"
- @echo " qthelp to make HTML files and a qthelp project"
- @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
- @echo " changes to make an overview of all changed/added/deprecated items"
- @echo " linkcheck to check all external links for integrity"
- @echo " doctest to run all doctests embedded in the documentation (if enabled)"
-
-clean:
- -rm -rf $(BUILDDIR)/*
-
-html:
- $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
- @echo
- @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
-
-dirhtml:
- $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
- @echo
- @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
-
-pickle:
- $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
- @echo
- @echo "Build finished; now you can process the pickle files."
-
-json:
- $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
- @echo
- @echo "Build finished; now you can process the JSON files."
-
-htmlhelp:
- $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
- @echo
- @echo "Build finished; now you can run HTML Help Workshop with the" \
- ".hhp project file in $(BUILDDIR)/htmlhelp."
-
-qthelp:
- $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
- @echo
- @echo "Build finished; now you can run "qcollectiongenerator" with the" \
- ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
- @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/pyGenericOdeModelDoc.qhcp"
- @echo "To view the help file:"
- @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/pyGenericOdeModelDoc.qhc"
-
-latex:
- $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
- @echo
- @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
- @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
- "run these through (pdf)latex."
-
-changes:
- $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
- @echo
- @echo "The overview file is in $(BUILDDIR)/changes."
-
-linkcheck:
- $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
- @echo
- @echo "Link check complete; look for any errors in the above output " \
- "or in $(BUILDDIR)/linkcheck/output.txt."
-
-doctest:
- $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
- @echo "Testing of doctests in the sources finished, look at the " \
- "results in $(BUILDDIR)/doctest/output.txt."
diff --git a/doc/README.rst b/doc/README.rst
deleted file mode 100644
index a5143c74..00000000
--- a/doc/README.rst
+++ /dev/null
@@ -1,36 +0,0 @@
-=======================
-Documentation for pygom
-=======================
-
-This documentation is written in `sphinx` style. The starting point (about the
-package) can be found at::
-
- $ doc/_build/html/index.html
-
-which is the main page in html format.
-
-If you wish to build the documentation. Go to the document directory and type
-the following line in the terminal::
-
- doc$ make html
-
-It is assumed here that the pyGenericOdeModel package has already
-been installed, else, the code demonstration will throw out errors
-as it cannot find the required modules. Most likely, building the
-documentation will require the following packages if not already installed::
-
- numpydoc
- sphinx
- ipython
-
-
-There may be cases that `ipython` require extra packages, install the full
-version using `$ pip install ipython[all]`
-
-============================
-Installation of requirements
-============================
-
-If you are using `conda` the requirements to build the docs can be installed
-by `$ conda install --file requirements.txt` or similarly for those using `pip`
-you may `$pip install -r requirements.txt` from the root of the docs directory.
\ No newline at end of file
diff --git a/doc/doc_to_sort/bvpSimple.rst b/doc/doc_to_sort/bvpSimple.rst
deleted file mode 100644
index e5de0d61..00000000
--- a/doc/doc_to_sort/bvpSimple.rst
+++ /dev/null
@@ -1,187 +0,0 @@
-.. _bvpSimple:
-
-*******************************
-Solving Boundary Value Problems
-*******************************
-
-In addition to finding solutions for an IVP and estimate the unknown parameters, this package also allows you to solve BVP with a little bit of imagination. Here, we are going to show how a BVP can be solved by treating it as a parameter estimation problem. Essentially, a shooting method where the first boundary condition defines the initial condition of an IVP and the second boundary condition is an observation. Two examples, both from MATLAB [1]_, will be shown here.
-
-Simple model 1
-==============
-
-We are trying to find the solution to the second order differential equation
-
-.. math::
- \nabla^{2} y + |y| = 0
-
-subject to the boundary conditions :math:`y(0) = 0` and :math:`y(4) = -2`. Convert this into a set of first order ODE
-
-.. math::
-
- \frac{d y_{0}}{dt} &= y_{1} \\
- \frac{d y_{1}}{dt} &= -|y_{0}|
-
-using an auxiliary variable :math:`y_{1} = \nabla y` and :math:`y_{0} = y`. Setting up the system below
-
-.. ipython::
-
- In [1]: from pygom import Transition, TransitionType, DeterministicOde, SquareLoss
-
- In [1]: import matplotlib.pyplot as plt
-
- In [2]: stateList = ['y0', 'y1']
-
- In [3]: paramList = []
-
- In [4]: ode1 = Transition(origin='y0',
- ...: equation='y1',
- ...: transition_type=TransitionType.ODE)
-
- In [5]: ode2 = Transition(origin='y1',
- ...: equation='-abs(y0)',
- ...: transition_type=TransitionType.ODE)
-
- In [6]: model = DeterministicOde(stateList,
- ...: paramList,
- ...: ode=[ode1, ode2])
-
- In [7]: model.get_ode_eqn()
-
-We check that the equations are correct before proceeding to set up our loss function.
-
-.. ipython::
-
- In [1]: import numpy
-
- In [2]: from scipy.optimize import minimize
-
- In [3]: initialState = [0.0, 1.0]
-
- In [4]: t = numpy.linspace(0, 4, 100)
-
- In [5]: model.initial_values = (initialState, t[0])
-
- In [6]: solution = model.integrate(t[1::])
-
- In [7]: f = plt.figure()
-
- @savefig bvp1_random_guess_plot.png
- In [8]: model.plot()
-
- In [9]: plt.close()
-
-Setting up the second boundary condition :math:`y(4) = -2` is easy, because that
-is just a single observation attached to the state :math:`y_{1}`. Enforcing the
-first boundary condition requires us to set it as the initial condition.
-Because the condition only states that :math:`y(0) = 0`, the starting value of
-the other state :math:`y_1` is free. We let our loss object know that it is
-free through the targetState input argument.
-
-.. ipython::
-
- In [10]: theta = [0.0]
-
- In [11]: obj = SquareLoss(theta=theta,
- ....: ode=model,
- ....: x0=initialState,
- ....: t0=t[0],
- ....: t=t[-1],
- ....: y=[-2],
- ....: state_name=['y0'],
- ....: target_state=['y1'])
-
- In [12]: thetaHat = minimize(fun=obj.costIV, x0=[0.0])
-
- In [13]: print(thetaHat)
-
- In [14]: model.initial_values = ([0.0] + thetaHat['x'].tolist(), t[0])
-
- In [15]: solution = model.integrate(t[1::])
-
- In [16]: f = plt.figure()
-
- @savefig bvp1_solution_plot.png
- In [17]: model.plot()
-
- In [18]: plt.close()
-
-We are going to visualize the solution, and also check the boundary condition. The first became our initial condition, so it is always satisfied and only the latter is of concern, which is zero (subject to numerical error) from thetaHat.
-
-Simple model 2
-==============
-
-Our second example is different as it involves an actual parameter and also time. We have the Mathieu's Equation
-
-.. math::
-
- \nabla^{2} y + \left(p - 2q \cos(2x)\right)y = 0
-
-and the aim is to compute the fourth eigenvalue :math:`q=5`. There are three boundary conditions
-
-.. math::
-
- \nabla y(0) = 0, \quad \nabla y(\pi) = 0, \quad y(0) = 1
-
-and we aim to solve it by converting it to a first order ODE and tackle it as an IVP. As our model object does not allow the use of the time component in the equations, we introduce a anxiliary state :math:`\tau` that replaces time :math:`t`. Rewrite the equations using :math:`y_{0} = y, y_{1} = \nabla y` and define our model as
-
-.. ipython::
-
- In [1]: stateList = ['y0', 'y1', 'tau']
-
- In [2]: paramList = ['p']
-
- In [3]: ode1 = Transition('y0', 'y1', TransitionType.ODE)
-
- In [4]: ode2 = Transition('y1', '-(p - 2*5*cos(2*tau))*y0', TransitionType.ODE)
-
- In [5]: ode3 = Transition('tau', '1', TransitionType.ODE)
-
- In [6]: model = DeterministicOde(stateList, paramList, ode=[ode1, ode2, ode3])
-
- In [7]: theta = [1.0, 1.0, 0.0]
-
- In [8]: p = 15.0
-
- In [9]: t = numpy.linspace(0, numpy.pi)
-
- In [10]: model.parameters = [('p',p)]
-
- In [11]: model.initial_values = (theta, t[0])
-
- In [12]: solution = model.integrate(t[1::])
-
- In [13]: f = plt.figure()
-
- @savefig bvp2_random_guess_plot.png
- In [14]: model.plot()
-
- In [15]: plt.close()
-
-Now we are ready to setup the estimation. Like before, we setup the second boundary condition by pretending that it is an observation. We have all the initial conditions defined by the first boundary condition
-
-.. ipython::
-
- In [1]: obj = SquareLoss(15.0, model, x0=[1.0, 0.0, 0.0], t0=0.0, t=numpy.pi, y=0.0, state_name='y1')
-
- In [2]: xhatObj = minimize(obj.cost,[15])
-
- In [3]: print(xhatObj)
-
- In [4]: model.parameters = [('p', xhatObj['x'][0])]
-
- In [5]: model.initial_values = ([1.0, 0.0, 0.0], t[0])
-
- In [5]: solution = model.integrate(t[1::])
-
- In [6]: f = plt.figure()
-
- @savefig bvp2_solution_plot.png
- In [7]: model.plot()
-
- In [8]: plt.close()
-
-The plot of the solution shows the path that satisfies all boundary condition. The last subplot is time which obvious is redundant here but the :meth:`DeterministicOde.plot` method is not yet able to recognize the time component. Possible speed up can be achieved through the use of derivative information or via root finding method that tackles the gradient directly, instead of the cost function.
-
-**Reference**
-
-.. [1] http://uk.mathworks.com/help/matlab/ref/bvp4c.html
diff --git a/doc/doc_to_sort/common_models.rst b/doc/doc_to_sort/common_models.rst
deleted file mode 100644
index 01dfe8dd..00000000
--- a/doc/doc_to_sort/common_models.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-.. _common_models:
-
-*********************************
-Pre-defined Example common_models
-*********************************
-
-We have defined a set of models :mod:`common_models`, most of them commonly used in epidemiology. They are there as examples and also save time for end users. Most of them are of the compartmental type, and we use standard naming conventions i.e. **S** = Susceptible, **E** = Exposed, **I** = Infectious, **R** = Recovered. Extra state symbol will be introduced when required.
-
-.. toctree::
-
- common_models/SIS.rst
- common_models/SIS_Periodic.rst
- common_models/SIR.rst
- common_models/SIR_Birth_Death.rst
- common_models/SEIR.rst
- common_models/SEIR_Multiple.rst
- common_models/SEIR_Birth_Death.rst
- common_models/SEIR_Birth_Death_Periodic.rst
- common_models/Legrand_Ebola_SEIHFR.rst
- common_models/Lotka_Volterra.rst
- common_models/Lotka_Volterra_4State.rst
- common_models/FitzHugh.rst
- common_models/Lorenz.rst
- common_models/vanDelPol.rst
- common_models/Robertson.rst
-
diff --git a/doc/doc_to_sort/common_models/FitzHugh.rst b/doc/doc_to_sort/common_models/FitzHugh.rst
deleted file mode 100644
index 517a8a55..00000000
--- a/doc/doc_to_sort/common_models/FitzHugh.rst
+++ /dev/null
@@ -1,43 +0,0 @@
-:func:`.FitzHugh`
------------------
-
-The FitzHugh model [FitzHugh1961]_ without external external stimulus. This is a commonly used model when developing new methodology with regard to ode's, see [Ramsay2007]_ and [Girolami2011]_ and reference therein.
-
-.. math::
-
- \frac{dV}{dt} &= c ( V - \frac{V^{3}}{3} + R) \\
- \frac{dR}{dt} &= -\frac{1}{c}(V - a + bR).
-
-An example would be
-
-.. ipython::
-
- In [1]: import numpy
-
- In [1]: from pygom import common_models
-
- In [1]: import matplotlib.pyplot as plt
-
- In [1]: ode = common_models.FitzHugh({'a':0.2, 'b':0.2, 'c':3.0})
-
- In [1]: t = numpy.linspace(0, 20, 101)
-
- In [1]: x0 = [1.0, -1.0]
-
- In [1]: ode.initial_values = (x0, t[0])
-
- In [1]: solution = ode.integrate(t[1::])
-
- @savefig common_models_fh_1.png
- In [1]: ode.plot()
-
- In [1]: plt.close()
-
- In [1]: fig = plt.figure()
-
- In [1]: plt.plot(solution[:,0], solution[:,1], 'b')
-
- @savefig common_models_fh_2.png
- In [1]: plt.show()
-
- In [1]: plt.close()
\ No newline at end of file
diff --git a/doc/doc_to_sort/common_models/Legrand_Ebola_SEIHFR.rst b/doc/doc_to_sort/common_models/Legrand_Ebola_SEIHFR.rst
deleted file mode 100644
index 0b0ab4d9..00000000
--- a/doc/doc_to_sort/common_models/Legrand_Ebola_SEIHFR.rst
+++ /dev/null
@@ -1,93 +0,0 @@
-:func:`.Legrand_Ebola_SEIHFR`
-=============================
-
-A commonly used model in the literature to model Ebola outbreaks is the SEIHFR model proposed by [Legrand2007]_. There are two extra compartments on top of the standard SEIR, :math:`H` for hospitialization and :math:`F` for funeral. A total of ten parameters (with some describing the inverse) are required for the model, they are:
-
-================== ============================================
- Symbol Process
-================== ============================================
-:math:`\beta_{I}` Transmission rate in community
-:math:`\beta_{H}` Transmission rate in hospital
-:math:`\beta_{F}` Transmission rate in funeral
-:math:`\gamma_{I}` (inverse) Onset to end of infectious
-:math:`\gamma_{D}` (inverse) Onset to death
-:math:`\gamma_{H}` (inverse) Onset of hospitilization
-:math:`\gamma_{F}` (inverse) Death to burial
-:math:`\alpha` (inverse) Duration of the incubation period
-:math:`\theta` Proportional of cases hospitalized
-:math:`\delta` Case--ftality ratio
-================== ============================================
-
-The **(inverse)** denotes the parameter should be inverted to make epidemiological sense. We use the parameters in their more natural from in :func:`.Legrand_Ebola_SEIHFR` and replace all the :math:`\gamma`'s with :math:`\omega`'s, i.e. :math:`\omega_{i} = \gamma_{i}^{-1}` for :math:`i \in \{I,D,H,F\}`. We also used :math:`\alpha^{-1}` in our model instead of :math:`\alpha` so that reading the parameters directly gives a more intuitive meaning. There arw five additional parameters that is derived. The two derived case fatality ratio as
-
-.. math::
-
- \delta_{1} &= \frac{\delta \gamma_{I}}{\delta \gamma_{I} + (1-\delta)\gamma_{D}} \\
- \delta_{2} &= \frac{\delta \gamma_{IH}}{\delta \gamma_{IH} + (1-\delta)\gamma_{DH}},
-
-with an adjusted hospitalization parameter
-
-.. math::
-
- \theta_{1} = \frac{\theta(\gamma_{I}(1-\delta_{1}) + \gamma_{D}\delta_{1})}{\theta(\gamma_{I}(1-\delta_{1}) + \gamma_{D}\delta_{1}) + (1-\theta)\gamma_{H}},
-
-and the derived infectious period
-
-.. math::
-
- \gamma_{IH} &= (\gamma_{I}^{-1} - \gamma_{H}^{-1})^{-1} \\
- \gamma_{DH} &= (\gamma_{D}^{-1} - \gamma_{H}^{-1})^{-1}.
-
-Now we are ready to state the full set of ode's,
-
-.. math::
-
- \frac{dS}{dt} &= -N^{-1} (\beta_{I}SI + \beta_{H}SH + \beta_{F}(t) SF) \\
- \frac{dE}{dt} &= N^{-1} (\beta_{I}SI + \beta_{H}SH + \beta_{F}(t) SF) - \alpha E \\
- \frac{dI}{dt} &= \alpha E - (\gamma_{H} \theta_{1} + \gamma_{I}(1-\theta_{1})(1-\delta_{1}) + \gamma_{D}(1-\theta_{1})\delta_{1})I \\
- \frac{dH}{dt} &= \gamma_{H}\theta_{1}I - (\gamma_{DH}\delta_{2} + \gamma_{IH}(1-\delta_{2}))H \\
- \frac{dF}{dt} &= \gamma_{D}(1-\theta_{1})\delta_{1}I + \gamma_{DH}\delta_{2}H - \gamma_{F}F \\
- \frac{dR}{dt} &= \gamma_{I}(1-\theta_{1})(1-\delta_{1})I + \gamma_{IH}(1-\delta_{2})H + \gamma_{F}F.
-
-with :math:`\beta_{F}(t) = \beta_{F}` if :math:`t > c` and :math:`0` otherwise. We use a slightly modified version by replacing the delta function with a sigmoid function namely, the logistic function
-
-.. math::
-
- \beta_{F}(t) = \beta_{F} \left(1 - \frac{1}{1 + \exp(-\kappa (t - c))} \right)
-
-A brief example (from [3]) is given here with a slightly more in depth example in :ref:`estimate2`.
-
-.. ipython::
-
- In [1]: import numpy
-
- In [1]: from pygom import common_models
-
- In [1]: x0 = [1.0, 3.0/200000.0, 0.0, 0.0, 0.0, 0.0, 0.0]
-
- In [1]: t = numpy.linspace(1, 25, 100)
-
- In [1]: ode = common_models.Legrand_Ebola_SEIHFR([
- ...: ('beta_I',0.588),
- ...: ('beta_H',0.794),
- ...: ('beta_F',7.653),
- ...: ('omega_I',10.0/7.0),
- ...: ('omega_D',9.6/7.0),
- ...: ('omega_H',5.0/7.0),
- ...: ('omega_F',2.0/7.0),
- ...: ('alphaInv',7.0/7.0),
- ...: ('delta',0.81),
- ...: ('theta',0.80),
- ...: ('kappa',300.0),
- ...: ('interventionTime',7.0)
- ...: ])
-
- In [1]: ode.initial_values = (x0, t[0])
-
- In [1]: solution = ode.integrate(t)
-
- @savefig common_models_seihfr.png
- In [1]: ode.plot()
-
-Note also that we have again standardized so that the number of susceptible is 1 and equal to the whole population, i.e. :math:`N` does not exist in our set of ode's as defined in :mod:`.common_models`.
-
diff --git a/doc/doc_to_sort/common_models/Lorenz.rst b/doc/doc_to_sort/common_models/Lorenz.rst
deleted file mode 100644
index cc03eac3..00000000
--- a/doc/doc_to_sort/common_models/Lorenz.rst
+++ /dev/null
@@ -1,37 +0,0 @@
-:func:`.Lorenz`
-===============
-
-The Lorenz attractor [Lorenz1963]_ defined by the equations
-
-.. math::
-
- \frac{dx}{dt} &= \sigma (y-x) \\
- \frac{dy}{dt} &= x (\rho - z) - y \\
- \frac{dz}{dt} &= xy - \beta z
-
-A classic example is
-
-.. ipython::
-
- In [1]: from pygom import common_models
-
- In [1]: import numpy
-
- In [1]: import matplotlib.pyplot as plt
-
- In [1]: t = numpy.linspace(0, 100, 20000)
-
- In [1]: ode = common_models.Lorenz({'beta':8.0/3.0, 'sigma':10.0, 'rho':28.0})
-
- In [1]: ode.initial_values = ([1., 1., 1.], t[0])
-
- In [1]: solution = ode.integrate(t[1::])
-
- In [1]: f = plt.figure()
-
- In [1]: plt.plot(solution[:,0], solution[:,2]);
-
- @savefig common_models_Lorenz.png
- In [1]: plt.show()
-
-
diff --git a/doc/doc_to_sort/common_models/Lotka_Volterra.rst b/doc/doc_to_sort/common_models/Lotka_Volterra.rst
deleted file mode 100644
index fbf65c4b..00000000
--- a/doc/doc_to_sort/common_models/Lotka_Volterra.rst
+++ /dev/null
@@ -1,91 +0,0 @@
-:func:`.Lotka_Volterra`
-=======================
-
-A standard Lotka-Volterra (preditor and prey) model with two states and four parameters [Lotka1920]_.
-
-.. math::
-
- \frac{dx}{dt} &= \alpha x - cxy \\
- \frac{dy}{dt} &= -\delta y + \gamma xy
-
-with both birth and death processes.
-
-.. ipython::
-
- In [1]: from pygom import common_models
-
- In [1]: import numpy
-
- In [1]: import matplotlib.pyplot as plt
-
- In [1]: x0 = [2.0, 6.0]
-
- In [1]: ode = common_models.Lotka_Volterra({'alpha':1, 'delta':3, 'c':2, 'gamma':6})
-
- In [1]: ode.initial_values = (x0, 0)
-
- In [1]: t = numpy.linspace(0.1, 100, 10000)
-
- In [1]: solution = ode.integrate(t)
-
- @savefig common_models_Lotka_Volterra.png
- In [1]: ode.plot()
-
- In [1]: plt.close()
-
-Then we generate the graph at `Wolfram Alpha `_ with varying initial conditions.
-
-.. ipython::
-
- In [1]: x1List = numpy.linspace(0.2, 2.0, 5)
-
- In [1]: x2List = numpy.linspace(0.6, 6.0, 5)
-
- In [1]: fig = plt.figure()
-
- In [1]: solutionList = list()
-
- In [1]: ode = common_models.Lotka_Volterra({'alpha':1, 'delta':3, 'c':2, 'gamma':6})
-
- In [1]: for i in range(len(x1List)):
- ...: ode.initial_values = ([x1List[i], x2List[i]], 0)
- ...: solutionList += [ode.integrate(t)]
-
- In [1]: for i in range(len(x1List)): plt.plot(solutionList[i][100::,0], solutionList[i][100::,1], 'b')
-
- In [1]: plt.xlabel('x')
-
- In [1]: plt.ylabel('y')
-
- @savefig common_models_Lotka_Volterra_initial_condition.png
- In [1]: plt.show()
-
- In [1]: plt.close()
-
-We also know that the system has the critical points at :math:`x = \delta / \gamma` and :math:`y=\alpha / c`. If we changes the parameters in such a way that the ration between :math:`x` and :math:`y` remains the same, then we get a figure as below
-
-.. ipython::
-
- In [1]: cList = numpy.linspace(0.1, 2.0, 5)
-
- In [1]: gammaList = numpy.linspace(0.6, 6.0, 5)
-
- In [1]: fig = plt.figure()
-
- In [1]: for i in range(len(x1List)):
- ...: ode = common_models.Lotka_Volterra({'alpha':1, 'delta':3, 'c':cList[i], 'gamma':gammaList[i]})
- ...: ode.initial_values = (x0, 0)
- ...: solutionList += [ode.integrate(t)]
-
- In [1]: for i in range(len(cList)): plt.plot(solutionList[i][100::,0], solutionList[i][100::,1])
-
- In [1]: plt.xlabel('x')
-
- In [1]: plt.ylabel('y')
-
- @savefig common_models_Lotka_Volterra_critical_point.png
- In [1]: plt.show()
-
- In [1]: plt.close()
-
-where all the cycles goes through the same points.
diff --git a/doc/doc_to_sort/common_models/Lotka_Volterra_4State.rst b/doc/doc_to_sort/common_models/Lotka_Volterra_4State.rst
deleted file mode 100644
index 33b75b76..00000000
--- a/doc/doc_to_sort/common_models/Lotka_Volterra_4State.rst
+++ /dev/null
@@ -1,52 +0,0 @@
-:func:`.Lotka_Volterra_4State`
-==============================
-
-The Lotka-Volterra model with four states and three parameters [Lotka1920]_, explained by the following three transitions
-
-.. math::
-
- \frac{da}{dt} &= k_{0} a x \\
- \frac{dx}{dt} &= k_{0} a x - k_{1} x y \\
- \frac{dy}{dt} &= k_{1} x y - k_{2} y \\
- \frac{db}{dt} &= k_{2} y.
-
-First, we show the deterministic approach. Then we also show the different process path using the parameters from [Press2007]_. Note that although the model is defined in :mod:`common_models`, it is based on outputting an :class:`OperateOdeModel` rather than :class:`SimulateOdeModel`.
-
-.. ipython::
-
- In [1]: import matplotlib.pyplot as plt
-
- In [1]: from pygom import Transition, TransitionType, ode_utils, SimulateOde
-
- In [1]: import numpy
-
- In [1]: stateList = ['a', 'x', 'y', 'b']
-
- In [1]: paramList = ['k0', 'k1', 'k2']
-
- In [1]: transitionList = [
- ...: Transition(origin='a', destination='x', equation='k0*a*x', transition_type=TransitionType.T),
- ...: Transition(origin='x', destination='y', equation='k1*x*y', transition_type=TransitionType.T),
- ...: Transition(origin='y', destination='b', equation='k2*y', transition_type=TransitionType.T)
- ...: ]
-
- In [1]: ode = SimulateOde(stateList, paramList, transition=transitionList)
-
- In [1]: x0 = [150.0, 10.0, 10.0, 0.0]
-
- In [1]: t = numpy.linspace(0, 15, 100)
-
- In [1]: ode.initial_values = (x0, t[0])
-
- In [1]: ode.parameters = [0.01, 0.1, 1.0]
-
- In [1]: solution = ode.integrate(t[1::])
-
- @savefig common_models_Lotka_Volterra_4State.png
- In [1]: ode.plot()
-
- In [1]: simX, simT = ode.simulate_jump(t[1::], 5, full_output=True)
-
- @savefig common_models_Lotka_Volterra_Sim.png
- In [1]: ode.plot(simX, simT)
-
diff --git a/doc/doc_to_sort/common_models/Robertson.rst b/doc/doc_to_sort/common_models/Robertson.rst
deleted file mode 100644
index 0e90755b..00000000
--- a/doc/doc_to_sort/common_models/Robertson.rst
+++ /dev/null
@@ -1,75 +0,0 @@
-:func:`.Robertson`
-==================
-
-The Robertson problem [Robertson1966]_
-
-.. math::
-
- \frac{dy_{1}}{dt} &= -0.04 y_{1} + 1 \cdot 10^{4} y_{2} y_{3} \\
- \frac{dy_{2}}{dt} &= 0.04 y_{1} - 1 \cdot 10^{4} y_{2} y_{3} + 3 \cdot 10^{7} y_{2}^{2} \\
- \frac{dy_{3}}{dt} &= 3 \cdot 10^{7} y_{2}^{2}.
-
-This is a problem that describes an autocatalytic reaction. One of those commonly used to test stiff ode solvers. As the parameters in the literature is fixed, we show here how to define the states in a slightly more compact format
-
-.. ipython::
-
- In [1]: from pygom import DeterministicOde, Transition, TransitionType
-
- In [1]: import numpy
-
- In [1]: import matplotlib.pyplot as plt
-
- In [1]: t = numpy.append(0, 4*numpy.logspace(-6, 6, 1000))
-
- In [1]: # note how we define the states
-
- In [1]: stateList = ['y1:4']
-
- In [1]: paramList = []
-
- In [1]: transitionList = [
- ...: Transition(origin='y1', destination='y2', equation='0.04*y1', transition_type=TransitionType.T),
- ...: Transition(origin='y2', destination='y1', equation='1e4*y2*y3', transition_type=TransitionType.T),
- ...: Transition(origin='y2', destination='y3', equation='3e7*y2*y2', transition_type=TransitionType.T)
- ...: ]
-
- In [1]: ode = DeterministicOde(stateList, paramList, transition=transitionList)
-
- In [1]: ode.initial_values = ([1.0, 0.0, 0.0], t[0])
-
- In [1]: solution, output = ode.integrate(t[1::], full_output=True)
-
- In [1]: f, axarr = plt.subplots(1, 3)
-
- In [1]: for i in range(3):
- ...: axarr[i].plot(t, solution[:,i])
- ...: axarr[i].set_xscale('log')
-
- In [1]: f.tight_layout();
-
- @savefig common_models_Robertson_1.png
- In [1]: plt.show()
-
- In [1]: plt.close()
-
-To simplify even further, we can use `y` with the corresponding subscript directly instead of `y1,y2,y3`. Again, we do not have any parameters as they are hard coded into our models.
-
-.. ipython::
-
- In [1]: stateList = ['y1:4']
-
- In [1]: transitionList = [
- ...: Transition(origin='y[0]', destination='y[1]', equation='0.04*y[0]', transition_type=TransitionType.T),
- ...: Transition(origin='y[1]', destination='y[0]', equation='1e4*y[1]*y[2]', transition_type=TransitionType.T),
- ...: Transition(origin='y[1]', destination='y[2]', equation='3e7*y[1]*y[1]', transition_type=TransitionType.T)
- ...: ]
-
- In [1]: ode = DeterministicOde(stateList, paramList, transition=transitionList)
-
- In [1]: ode.initial_values =([1.0, 0.0, 0.0], t[0])
-
- In [1]: solution2 = ode.integrate(t[1::])
-
- In [1]: numpy.max(solution - solution2)
-
-and we have the identical solution as shown in the last line above.
diff --git a/doc/doc_to_sort/common_models/SEIR.rst b/doc/doc_to_sort/common_models/SEIR.rst
deleted file mode 100644
index 0cb5f9cc..00000000
--- a/doc/doc_to_sort/common_models/SEIR.rst
+++ /dev/null
@@ -1,36 +0,0 @@
-:func:`.SEIR`
-=============
-
-A natural extension to the SIR is the SEIR model. An extra parameter :math:`\alpha`, which is the inverse of the incubation period is introduced.
-
-.. math::
-
- \frac{dS}{dt} &= -\beta SI \\
-
- \frac{dE}{dt} &= \beta SI - \alpha E \\
-
- \frac{dI}{dt} &= \alpha E - \gamma I \\
-
- \frac{dR}{dt} &= \gamma I
-
-We use the parameters from [Aron1984] here to generate our plots, which does not yield a *nice* and *sensible* epidemic curve as the birth and death processes are missing.
-
-.. ipython::
-
- In [1]: from pygom import common_models
-
- In [1]: import numpy
-
- In [1]: ode = common_models.SEIR({'beta':1800, 'gamma':100, 'alpha':35.84})
-
- In [1]: t = numpy.linspace(0, 50, 1001)
-
- In [1]: x0 = [0.0658, 0.0007, 0.0002, 0.0]
-
- In [1]: ode.initial_values = (x0, t[0])
-
- In [1]: solution = ode.integrate(t[1::])
-
- @savefig common_models_seir.png
- In [1]: ode.plot()
-
diff --git a/doc/doc_to_sort/common_models/SEIR_Birth_Death.rst b/doc/doc_to_sort/common_models/SEIR_Birth_Death.rst
deleted file mode 100644
index 970231aa..00000000
--- a/doc/doc_to_sort/common_models/SEIR_Birth_Death.rst
+++ /dev/null
@@ -1,36 +0,0 @@
-:func:`.SEIR_Birth_Death`
-=========================
-
-Extending it to also include birth death process with equal rate :math:`\mu`
-
-.. math::
-
- \frac{dS}{dt} &= \mu - \beta SI - \mu S \\
- \frac{dE}{dt} &= \beta SI - (\mu + \alpha) E \\
- \frac{dI}{dt} &= \alpha E - (\mu + \gamma) I \\
- \frac{dR}{dt} &= \gamma I
-
-Same parameters value taken from [Aron1984]_ as the SEIR example above is used here. Observe how the introduction of a birth and a death process changes the graph even though the rest of the parameters remains the same.
-
-.. ipython::
-
- In [1]: from pygom import common_models
-
- In [1]: import matplotlib.pyplot as plt
-
- In [1]: import numpy
-
- In [1]: ode = common_models.SEIR_Birth_Death({'beta':1800, 'gamma':100, 'alpha':35.84, 'mu':0.02})
-
- In [1]: t = numpy.linspace(0, 50, 1001)
-
- In [1]: x0 = [0.0658, 0.0007, 0.0002, 0.0]
-
- In [1]: ode.initial_values = (x0, t[0])
-
- In [1]: solution = ode.integrate(t[1::], full_output=True)
-
- @savefig common_models_seir_bd.png
- In [1]: ode.plot()
-
- In [1]: plt.close()
\ No newline at end of file
diff --git a/doc/doc_to_sort/common_models/SEIR_Birth_Death_Periodic.rst b/doc/doc_to_sort/common_models/SEIR_Birth_Death_Periodic.rst
deleted file mode 100644
index 1f40f887..00000000
--- a/doc/doc_to_sort/common_models/SEIR_Birth_Death_Periodic.rst
+++ /dev/null
@@ -1,69 +0,0 @@
-:func:`.SEIR_Birth_Death_Periodic`
-==================================
-
-Now extending the SEIR to also have periodic contact, as in [Aron1984]_.
-
-.. math::
-
- \frac{dS}{dt} &= \mu - \beta(t)SI - \mu S \\
- \frac{dE}{dt} &= \beta(t)SI - (\mu + \alpha) E \\
- \frac{dI}{dt} &= \alpha E - (\mu + \gamma) I \\
- \frac{dR}{dt} &= \gamma I.
-
-.. ipython::
-
- In [1]: from pygom import common_models
-
- In [1]: import numpy
-
- In [1]: import matplotlib.pyplot as plt
-
- In [1]: ode = common_models.SEIR_Birth_Death_Periodic({'beta_0':1800, 'beta_1':0.2, 'gamma':100, 'alpha':35.84, 'mu':0.02})
-
- In [1]: t = numpy.linspace(0, 50, 1001)
-
- In [1]: ode.initial_values = (x0, t[0])
-
- In [1]: x0 = [0.0658, 0.0007, 0.0002, 0.0]
-
- In [1]: solution = ode.integrate(t[1::])
-
- @savefig common_models_seir_bd_periodic1.png
- In [1]: ode.plot()
-
- In [1]: plt.close()
-
-The periodicity is obvious when looking at the the plot between states :math:`S` and :math:`E`, in logarithmic scale.
-
-.. ipython::
-
- In [1]: fig = plt.figure();
-
- In [1]: plt.plot(numpy.log(solution[:,0]), numpy.log(solution[:,1]));
-
- In [1]: plt.xlabel('log of S');
-
- In [1]: plt.ylabel('log of E');
-
- @savefig common_models_seir_bd_periodic2.png
- In [1]: plt.show()
-
- In [1]: plt.close()
-
-Similarly, we can see the same thing between the states :math:`E` and :math:`I`.
-
-.. ipython::
-
- In [1]: fig = plt.figure();
-
- In [1]: plt.plot(numpy.log(solution[:,1]), numpy.log(solution[:,2]));
-
- In [1]: plt.xlabel('log of E');
-
- In [1]: plt.ylabel('log of I');
-
- @savefig common_models_seir_bd_periodic3.png
- In [1]: plt.show()
-
- In [1]: plt.close()
-
diff --git a/doc/doc_to_sort/common_models/SEIR_Multiple.rst b/doc/doc_to_sort/common_models/SEIR_Multiple.rst
deleted file mode 100644
index 2c1daa3c..00000000
--- a/doc/doc_to_sort/common_models/SEIR_Multiple.rst
+++ /dev/null
@@ -1,49 +0,0 @@
-:func:`.SEIR_Multiple`
-======================
-
-Multiple SEIR coupled together, without any birth death process.
-
-.. math::
-
- \frac{dS_{i}}{dt} &= dN_{i} - dS_{i} - \lambda_{i}S_{i} \\
- \frac{dE_{i}}{dt} &= \lambda_{i}S_{i} - (d+\epsilon)E_{i} \\
- \frac{dI_{i}}{dt} &= \epsilon E_{i} - (d+\gamma) I_{i} \\
- \frac{dR_{i}}{dt} &= \gamma I_{i} - dR_{i}
-
-where
-
-.. math::
-
- \lambda_{i} = \sum_{j=1}^{n} \beta_{i,j} I_{j} (1\{i\neq j\} p)
-
-with :math:`n` being the number of patch and :math:`p` the coupled factor.
-
-.. ipython::
-
- In [1]: from pygom import common_models
-
- In [2]: import numpy
-
- In [3]: paramEval = {'beta_00':0.0010107,'beta_01':0.0010107,'beta_10':0.0010107,
- ...: 'beta_11':0.0010107,'d':0.02,'epsilon':45.6,'gamma':73.0,
- ...: 'N_0':10**6,'N_1':10**6,'p':0.01}
-
- In [4]: x0 = [36139.3224081278, 422.560577637822, 263.883351688369, 963174.233662546]
-
- In [5]: ode = common_models.SEIR_Multiple(param=paramEval)
-
- In [6]: t = numpy.linspace(0, 40, 100)
-
- In [7]: x01 = []
-
- In [8]: for s in x0:
- ...: x01 += 2*[s]
-
- In [9]: ode.initial_values = (numpy.array(x01, float),t[0])
-
- In [10]: solution, output = ode.integrate(t[1::], full_output=True)
-
- @savefig common_models_seir_multiple.png
- In [11]: ode.plot()
-
-The initial conditions are those derived by using the stability condition as stated in [Lloyd1996]_ while the notations is taken from [Brauer2008]_.
diff --git a/doc/doc_to_sort/common_models/SIR.rst b/doc/doc_to_sort/common_models/SIR.rst
deleted file mode 100644
index 77cef848..00000000
--- a/doc/doc_to_sort/common_models/SIR.rst
+++ /dev/null
@@ -1,47 +0,0 @@
-:func:`.SIR`
-============
-
-A standard SIR model defined by the equations
-
-.. math::
-
- \frac{dS}{dt} &= -\beta SI \\
- \frac{dI}{dt} &= \beta SI - \gamma I \\
- \frac{dR}{dt} &= \gamma I
-
-Note that the examples and parameters are taken from [Brauer2008]_, namely Figure 1.4. Hence, the first one below may not appear to make much sense.
-
-.. ipython::
-
- In [1]: from pygom import common_models
-
- In [1]: import numpy
-
- In [1]: ode = common_models.SIR({'beta':3.6, 'gamma':0.2})
-
- In [1]: t = numpy.linspace(0, 730, 1001)
-
- In [1]: N = 7781984.0
-
- In [1]: x0 = [1.0, 10.0/N, 0.0]
-
- In [1]: ode.initial_values = (x0, t[0])
-
- In [1]: solution = ode.integrate(t[1::])
-
- @savefig common_models_sir.png
- In [1]: ode.plot()
-
-Now we have the more sensible plot, where the initial susceptibles is only a fraction of 1.
-
-.. ipython::
-
- In [1]: x0 = [0.065, 123*(5.0/30.0)/N, 0.0]
-
- In [1]: ode.initial_values = (x0, t[0])
-
- In [1]: solution = ode.integrate(t[1::])
-
- @savefig common_models_sir_realistic.png
- In [1]: ode.plot()
-
diff --git a/doc/doc_to_sort/common_models/SIR_Birth_Death.rst b/doc/doc_to_sort/common_models/SIR_Birth_Death.rst
deleted file mode 100644
index 5ddc9cf9..00000000
--- a/doc/doc_to_sort/common_models/SIR_Birth_Death.rst
+++ /dev/null
@@ -1,37 +0,0 @@
-:func:`.SIR_Birth_Death`
-========================
-
-Next, we look at an SIR model with birth and death processes.
-
-.. math::
-
- \frac{dS}{dt} &= B -\beta SI - \mu S \\
- \frac{dI}{dt} &= \beta SI - \gamma I - \mu I \\
- \frac{dR}{dt} &= \gamma I
-
-
-Continuing from the example above, but now with a much longer time frame. Note that the birth and death rates are the same.
-
-.. ipython::
-
- In [1]: from pygom import common_models
-
- In [1]: import numpy
-
- In [1]: B = 126372.0/365.0
-
- In [1]: N = 7781984.0
-
- In [1]: ode = common_models.SIR_Birth_Death({'beta':3.6, 'gamma':0.2, 'B':B/N, 'mu':B/N})
-
- In [1]: t = numpy.linspace(0, 35*365, 10001)
-
- In [1]: x0 = [0.065, 123.0*(5.0/30.0)/N, 0.0]
-
- In [1]: ode.initial_values = (x0, t[0])
-
- In [1]: solution = ode.integrate(t[1::])
-
- @savefig common_models_sir_bd.png
- In [1]: ode.plot()
-
diff --git a/doc/doc_to_sort/common_models/SIS.rst b/doc/doc_to_sort/common_models/SIS.rst
deleted file mode 100644
index cb1cbe70..00000000
--- a/doc/doc_to_sort/common_models/SIS.rst
+++ /dev/null
@@ -1,34 +0,0 @@
-:func:`.SIS`
-============
-
-A standard SIS model without the total population :math:`N`. We assume here that :math:`S + I = N`, so we can always normalize to 1. Evidently, the state :math:`S` is not required for understanding the dynamics because it is a deterministic function of the state :math:`I`.
-
-.. math::
-
- \frac{dS}{dt} &= -\beta S I + \gamma I \\
- \frac{dI}{dt} &= \beta S I - \gamma I.
-
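-Substituting :math:`S = N - I` (simply restating the assumption above) collapses the system to a single equation
-
-.. math::
-
- \frac{dI}{dt} = \beta (N - I) I - \gamma I.
-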
-An example would be
-
-.. ipython::
-
- In [1]: from pygom import common_models
-
- In [1]: import matplotlib.pyplot as plt
-
- In [1]: import numpy
-
- In [1]: ode = common_models.SIS({'beta':0.5,'gamma':0.2})
-
- In [1]: t = numpy.linspace(0, 20, 101)
-
- In [1]: x0 = [1.0, 0.1]
-
- In [1]: ode.initial_values = (x0, t[0])
-
- In [1]: solution = ode.integrate(t[1::])
-
- @savefig common_models_sis.png
- In [1]: ode.plot()
-
- In [1]: plt.close()
\ No newline at end of file
diff --git a/doc/doc_to_sort/common_models/SIS_Periodic.rst b/doc/doc_to_sort/common_models/SIS_Periodic.rst
deleted file mode 100644
index 513fbe73..00000000
--- a/doc/doc_to_sort/common_models/SIS_Periodic.rst
+++ /dev/null
@@ -1,33 +0,0 @@
-:func:`.SIS_Periodic`
-=====================
-
-Now we look at an extension of the SIS model that incorporates a periodic contact rate. Note how our equation is defined by a single ode for the state **I**.
-
-.. math::
-
- \frac{dI}{dt} = (\beta(t)N - \alpha) I - \beta(t)I^{2}
-
-where :math:`\beta(t) = 2 - 1.8 \cos(5t)`. As the name suggests, the model achieves a (stable) periodic solution. Note how the plots have two sub-graphs, where :math:`\tau` is in fact the time component that we have taken out of the original equation when converting it to an autonomous system, as sketched below.
-
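-Writing the converted autonomous system out explicitly (a sketch based on the description above, with :math:`\tau` as the auxiliary time state), we have
-
-.. math::
-
- \frac{dI}{dt} &= (\beta(\tau)N - \alpha) I - \beta(\tau)I^{2} \\
- \frac{d\tau}{dt} &= 1
-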
-.. ipython::
-
- In [1]: from pygom import common_models
-
- In [1]: import matplotlib.pyplot as plt
-
- In [1]: import numpy
-
- In [1]: ode = common_models.SIS_Periodic({'alpha':1.0})
-
- In [1]: t = numpy.linspace(0, 10, 101)
-
- In [1]: x0 = [0.1,0.]
-
- In [1]: ode.initial_values = (x0, t[0])
-
- In [1]: solution = ode.integrate(t[1::])
-
- @savefig common_models_sis_periodic.png
- In [1]: ode.plot()
-
- In [1]: plt.close()
\ No newline at end of file
diff --git a/doc/doc_to_sort/common_models/vanDelPol.rst b/doc/doc_to_sort/common_models/vanDelPol.rst
deleted file mode 100644
index 0934b43c..00000000
--- a/doc/doc_to_sort/common_models/vanDelPol.rst
+++ /dev/null
@@ -1,78 +0,0 @@
-:func:`.vanDelPol`
-==================
-
-The van der Pol oscillator [vanderpol1926]_, a two state non-linear oscillator with a single parameter :math:`\mu`
-
-.. math::
-
- \frac{dx}{dt} &= y \\
- \frac{dy}{dt} &= \mu (1 - x^{2}) y - x
-
-A classic example is
-
-.. ipython::
-
- In [1]: from pygom import common_models
-
- In [1]: import numpy
-
- In [1]: import matplotlib.pyplot as plt
-
- In [1]: t = numpy.linspace(0, 20, 1000)
-
- In [1]: ode = common_models.vanDelPol({'mu':1.0})
-
- In [1]: ode.initial_values = ([2.0, 0.0], t[0])
-
- In [1]: solution = ode.integrate(t[1::])
-
- @savefig common_models_vanDelPol.png
- In [1]: ode.plot()
-
- In [1]: plt.close()
-
- In [1]: f = plt.figure()
-
- In [1]: plt.plot(solution[:,0], solution[:,1]);
-
- @savefig common_models_vanDelPol_yprime_y_1.png
- In [1]: plt.show()
-
- In [1]: plt.close()
-
-When we change the value of :math:`\mu`, as per the Wolfram demonstration
-
-.. ipython::
-
- In [1]: t = numpy.linspace(0, 100, 1000)
-
- In [1]: ode.parameters = {'mu':1.0}
-
- In [1]: ode.initial_values = ([0.0, 0.2], t[0])
-
- In [1]: solution = ode.integrate(t[1::])
-
- In [1]: f = plt.figure()
-
- In [1]: plt.plot(solution[:,0],solution[:,1]);
-
- @savefig common_models_vanDelPol_yprime_y_2.png
- In [1]: plt.show()
-
- In [1]: plt.close()
-
- In [1]: ode.parameters = {'mu':0.2}
-
- In [1]: ode.initial_values = ([0.0, 0.2], t[0])
-
- In [1]: solution = ode.integrate(t[1::])
-
- In [1]: f = plt.figure()
-
- In [1]: plt.plot(solution[:,0], solution[:,1]);
-
- @savefig common_models_vanDelPol_yprime_y_3.png
- In [1]: plt.show()
-
- In [1]: plt.close()
diff --git a/doc/doc_to_sort/epi.rst b/doc/doc_to_sort/epi.rst
deleted file mode 100644
index 757c0db5..00000000
--- a/doc/doc_to_sort/epi.rst
+++ /dev/null
@@ -1,54 +0,0 @@
-.. _epi:
-
-************************
-Simple Epidemic Analysis
-************************
-
-A common application of ordinary differential equations is in epidemiological modeling, more concretely, compartmental models used to describe disease progression. We demonstrate some of the simple algebraic analysis one may wish to perform when given a compartmental model. We use one of the simplest models, an SIR model with birth and death processes, which is an extension of the one in :ref:`sir`. First, we initialize the model below.
-
-.. ipython::
-
- In [1]: from pygom import common_models
-
- In [2]: ode = common_models.SIR_Birth_Death()
-
- In [3]: print(ode.get_ode_eqn())
-
-
-Obtaining the R0
-================
-
-The reproduction number, also known as :math:`R_{0}`, is the single most powerful and most reduced piece of information available from a compartmental model. In a nutshell, it provides a single number - if the parameters are known - with the intuitive interpretation that :math:`R_{0} = 1` defines the tipping point of an outbreak. An :math:`R_{0}` value of more than one signifies a potential outbreak, while a value of less than one indicates that the disease will stop spreading naturally.
-
-To obtain the :math:`R_{0}`, we simply have to tell the function which states represent the *disease state*, which in this case is the state **I**.
-
-.. ipython::
-
- In [1]: from pygom.model.epi_analysis import *
-
- In [2]: print(R0(ode, 'I'))
-
-Algebraic R0
-============
-
-We may also wish to express :math:`R_{0}` in purely algebraic terms. This can be achieved by the following few lines. Note that the result below is slightly different from the one above. The difference is due to the internal workings of the functions, where :func:`.R0` computes the disease-free equilibrium value for the states and substitutes them back into the equation.
-
-.. ipython::
-
- In [1]: F, V = disease_progression_matrices(ode, 'I')
-
- In [2]: e = R0_from_matrix(F, V)
-
- In [3]: print(e)
-
-
-To replicate the output before, we have to find the values at which the disease-free equilibrium is achieved. Substitution can then be performed to retrieve :math:`R_{0}` in terms of the parameters alone.
-
-.. ipython::
-
- In [1]: dfe = DFE(ode, ['I'])
-
- In [2]: print(dfe)
-
- In [3]: print(e[0].subs(dfe))
-
diff --git a/doc/doc_to_sort/epijson.rst b/doc/doc_to_sort/epijson.rst
deleted file mode 100644
index e1ce9cf3..00000000
--- a/doc/doc_to_sort/epijson.rst
+++ /dev/null
@@ -1,42 +0,0 @@
-.. _epijson:
-
-******************************
-Reading and using EpiJSON data
-******************************
-
-Epidemiology data is complicated due to the many different stages a patient can go through, and whether a modeling technique is applicable depends heavily on how the data are recorded. EpiJSON is a framework which tries to capture all the information [Finnie2016]_, in a JSON format as the name suggests.
-
-This package provides the functionality to process EpiJSON data. Because the focus of this package is ode modeling, it processes the data file with that in mind. The output is therefore in cumulative form by default, shown below, as a :class:`pandas.DataFrame`. The input can be a string, a file or a :class:`dict`.
-
-.. ipython::
-
- In [1]: from pygom.loss.read_epijson import epijson_to_data_frame
-
- In [2]: import pkgutil
-
- In [3]: data = pkgutil.get_data('pygom', 'data/eg1.json')
-
- In [3]: df = epijson_to_data_frame(data)
-
- In [4]: print(df)
-
-Given that the aim of loading the data is usually for model fitting, we allow EpiJSON as input directly to the loss class :class:`pygom.loss.EpijsonLoss` which uses the Poisson loss under the hood.
-
-.. ipython::
-
- In [1]: from pygom.model import common_models
-
- In [2]: from pygom.loss.epijson_loss import EpijsonLoss
-
- In [3]: ode = common_models.SIR([0.5, 0.3])
-
- In [4]: obj = EpijsonLoss([0.005, 0.03], ode, data, 'Death', 'R', [300, 2, 0])
-
- In [5]: print(obj.cost())
-
- In [6]: print(obj._df)
-
-Given an initialized object, all the operations are inherited from :class:`pygom.loss.BaseLoss`. We demonstrated above how to calculate the cost; the rest is not shown for brevity. The data frame is stored inside the loss object and can be retrieved for inspection at any time.
-
-Rather unfortunately, initial values for the states are still required, but the initial time is not. When the time is not supplied, the first time point in the data is treated as :math:`t_{0}`. The input `Death` indicates which column of the data is used and :math:`R` the corresponding state the data belongs to.
-
diff --git a/doc/doc_to_sort/estimate1.rst b/doc/doc_to_sort/estimate1.rst
deleted file mode 100644
index 4382142c..00000000
--- a/doc/doc_to_sort/estimate1.rst
+++ /dev/null
@@ -1,115 +0,0 @@
-.. _estimate1:
-
-*******************************
-Example: Parameter Estimation 1
-*******************************
-
-Estimation under square loss
-============================
-
-To ease the estimation process when given data, a separate module :mod:`ode_loss` has been constructed for observations coming from a single state. We demonstrate how to use it via two examples: first, a standard SIR model, then the Legrand SEIHFR model from [Legrand2007]_ used for Ebola in :ref:`estimate2`.
-
-SIR Model
----------
-
-We set up an SIR model as seen previously in :ref:`sir`.
-
-.. ipython::
-
- In [176]: from pygom import SquareLoss, common_models
-
- In [179]: import numpy
-
- In [180]: import scipy.integrate
-
- In [184]: import matplotlib.pyplot
-
- In [185]: # Again, standard SIR model with 2 parameters. See the first script!
-
- In [191]: # define the parameters
-
- In [192]: paramEval = [('beta',0.5), ('gamma',1.0/3.0)]
-
- In [189]: # initialize the model
-
- In [190]: ode = common_models.SIR(paramEval)
-
-
-and we assume that we have perfect information about the :math:`I` and :math:`R` compartments.
-
-.. ipython::
-
- In [196]: x0 = [1, 1.27e-6, 0]
-
- In [197]: # Time, including the initial time t0 at t=0
-
- In [198]: t = numpy.linspace(0, 150, 1000)
-
- In [200]: # Standard. Find the solution.
-
- In [201]: solution = scipy.integrate.odeint(ode.ode, x0, t)
-
- In [202]: y = solution[:,1:3].copy()
-
-Initialize the class with some initial guess
-
-.. ipython::
-
- In [209]: # our initial guess
-
- In [210]: theta = [0.2, 0.2]
-
- In [176]: objSIR = SquareLoss(theta, ode, x0, t[0], t[1::], y[1::,:], ['I','R'])
-
-Note that we need to provide the initial values, :math:`x_{0}` and :math:`t_{0}`, separately from the observations :math:`y` and the corresponding times :math:`t`. Additionally, the state in which the observations lie needs to be specified; either a single state or multiple states are allowed, as seen above.
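-
-For instance, if we only had observations from the :math:`R` compartment, the corresponding single-state initialization would be a one-line change (a hypothetical variant reusing the arrays defined above):
-
-.. ipython::
- :verbatim:
-
- In [177]: objSIR_R = SquareLoss(theta, ode, x0, t[0], t[1::], y[1::,1], 'R')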
-
-Difference in gradient
-----------------------
-
-We have provided two different ways of obtaining the gradient; these are explained in :ref:`gradient` in a bit more detail. First, let's see how similar the output of the two methods is
-
-.. ipython::
-
- In [22]: objSIR.sensitivity()
-
- In [25]: objSIR.adjoint()
-
-and the time required to obtain the gradient for the SIR model under :math:`\theta = (0.2,0.2)`, previously entered.
-
-.. ipython::
-
- In [22]: %timeit objSIR.sensitivity()
-
- In [25]: %timeit objSIR.adjoint()
-
-Obviously, the amount of time taken for both methods depends on the number of observations as well as the number of states. The effect of the number of observations on the adjoint method can be quite evident. This is because the adjoint method relies on a discretization which loops in Python, whereas the forward sensitivity equations are solved simply via an integration. As the number of observations gets larger, the effect of the Python loop becomes more obvious.
-
-The difference in the gradients is larger when there are fewer observations. This is because the adjoint method uses interpolation on the output of the ode between each pair of consecutive time points. Given a solution over the same length of time, a coarser discretization naturally leads to a less accurate interpolation. Note that the interpolation is currently performed using a univariate spline, due to the limitations of the available python packages; ideally, one would prefer to use an (adaptive) Hermite or Chebyshev interpolation. Also note how we ran the two gradient functions once before timing them; that is because the properties (Jacobian, gradient) of the ode are only found at runtime.
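-
-To see the effect of the number of observations, one could time the adjoint gradient on a coarser observation grid (a hypothetical subsampling of the data above, not part of the original example):
-
-.. ipython::
- :verbatim:
-
- In [26]: idx = numpy.arange(0, len(t) - 1, 10)
-
- In [27]: objCoarse = SquareLoss(theta, ode, x0, t[0], t[1::][idx], y[1::,:][idx,:], ['I','R'])
-
- In [28]: %timeit objCoarse.adjoint()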
-
-Optimized result
-----------------
-
-Then standard optimization procedures with some suitable initial guess should yield the correct result. It is important to set the boundaries for compartmental models, as we know that all the parameters are strictly positive. We use a less restrictive inequality here for demonstration purposes.
-
-.. ipython::
-
- In [211]: # what we think the bounds are
-
- In [212]: boxBounds = [(0.0,2.0),(0.0,2.0)]
-
-Then we use the optimization routines in :mod:`scipy.optimize`, for example the *SLSQP* method, with the gradient obtained by forward sensitivity.
-
-.. ipython::
-
- In [208]: from scipy.optimize import minimize
-
- In [213]: res = minimize(fun=objSIR.cost,
- .....: jac=objSIR.sensitivity,
- .....: x0=theta,
- .....: bounds=boxBounds,
- .....: method='SLSQP')
-
- In [214]: print(res)
-
-Other methods available in :func:`scipy.optimize.minimize` can also be used, such as *L-BFGS-B* and *TNC*. We can also use methods that accept the exact Hessian, such as *trust-ncg*, but that should not be necessary most of the time. A sketch with *L-BFGS-B* is shown below.
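-
-For example, using *L-BFGS-B* (a sketch; the call is identical apart from the method name):
-
-.. ipython::
- :verbatim:
-
- In [215]: resLBFGS = minimize(fun=objSIR.cost,
- .....: jac=objSIR.sensitivity,
- .....: x0=theta,
- .....: bounds=boxBounds,
- .....: method='L-BFGS-B')
-
- In [216]: print(resLBFGS)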
-
diff --git a/doc/doc_to_sort/estimate2.rst b/doc/doc_to_sort/estimate2.rst
deleted file mode 100644
index 10d27b23..00000000
--- a/doc/doc_to_sort/estimate2.rst
+++ /dev/null
@@ -1,285 +0,0 @@
-.. _estimate2:
-
-*******************************
-Example: Parameter Estimation 2
-*******************************
-
-Continuing from :ref:`estimate1`, we show why estimating the parameters of an ode is hard. This is especially true if there is a lack of data or when there is too much flexibility in the model. Note that, for reproducibility, only deterministic models are used here and a fixed seed is set whenever a stochastic algorithm is needed.
-
-Standard SEIR model
-===================
-
-We demonstrate the estimation on the recent Ebola outbreak in West Africa. We use the number of deaths in Guinea and the corresponding times at which the data were recorded. These data are publicly available and can be obtained easily on the internet, for example from https://github.com/cmrivers/ebola. They are reproduced here for simplicity.
-
-.. ipython::
-
- In [34]: # the number of deaths and cases in Guinea
-
- In [35]: yDeath = [29.0, 59.0, 60.0, 62.0, 66.0, 70.0, 70.0, 80.0, 83.0, 86.0, 95.0, 101.0, 106.0, 108.0,
- ....: 122.0, 129.0, 136.0, 141.0, 143.0, 149.0, 155.0, 157.0, 158.0, 157.0, 171.0, 174.0,
- ....: 186.0, 193.0, 208.0, 215.0, 226.0, 264.0, 267.0, 270.0, 303.0, 305.0, 307.0, 309.0,
- ....: 304.0, 310.0, 310.0, 314.0, 319.0, 339.0, 346.0, 358.0, 363.0, 367.0, 373.0, 377.0,
- ....: 380.0, 394.0, 396.0, 406.0, 430.0, 494.0, 517.0, 557.0, 568.0, 595.0, 601.0, 632.0,
- ....: 635.0, 648.0, 710.0, 739.0, 768.0, 778.0, 843.0, 862.0, 904.0, 926.0, 997.0]
-
- In [35]: yCase = [49.0, 86.0, 86.0, 86.0, 103.0, 112.0, 112.0, 122.0, 127.0, 143.0, 151.0, 158.0,
- ....: 159.0, 168.0, 197.0, 203.0, 208.0, 218.0, 224.0, 226.0, 231.0, 235.0, 236.0,
- ....: 233.0, 248.0, 258.0, 281.0, 291.0, 328.0, 344.0, 351.0, 398.0, 390.0, 390.0,
- ....: 413.0, 412.0, 408.0, 409.0, 406.0, 411.0, 410.0, 415.0, 427.0, 460.0, 472.0,
- ....: 485.0, 495.0, 495.0, 506.0, 510.0, 519.0, 543.0, 579.0, 607.0, 648.0, 771.0,
- ....: 812.0, 861.0, 899.0, 936.0, 942.0, 1008.0, 1022.0, 1074.0, 1157.0, 1199.0, 1298.0,
- ....: 1350.0, 1472.0, 1519.0, 1540.0, 1553.0, 1906.0]
-
- In [36]: # the corresponding time
-
- In [37]: t = [0.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 9.0, 10.0, 13.0, 16.0, 18.0, 20.0, 23.0, 25.0, 26.0, 29.0,
- ....: 32.0, 35.0, 40.0, 42.0, 44.0, 46.0, 49.0, 51.0, 62.0, 66.0, 67.0, 71.0, 73.0, 80.0, 86.0, 88.0,
- ....: 90.0, 100.0, 102.0, 106.0, 108.0, 112.0, 114.0, 117.0, 120.0, 123.0, 126.0, 129.0, 132.0, 135.0,
- ....: 137.0, 140.0, 142.0, 144.0, 147.0, 149.0, 151.0, 157.0, 162.0, 167.0, 169.0, 172.0, 175.0, 176.0,
- ....: 181.0, 183.0, 185.0, 190.0, 193.0, 197.0, 199.0, 204.0, 206.0, 211.0, 213.0, 218.0]
-
-Simple estimation
------------------
-
-First, we are going to fit a standard **SEIR** model to the data. Details of the model can be found in :mod:`common_models`. We define the model as usual, with some guess of what the parameters might be; here, we choose the values to be the mid point of our feasible region (defined by our box constraints later).
-
-.. ipython::
-
- In [1]: from pygom import SquareLoss, common_models
-
- In [1]: import numpy, scipy.optimize
-
- In [1]: import matplotlib.pyplot as plt
-
- In [1]: theta = numpy.array([5.0, 5.0, 5.0])
-
- In [2]: ode = common_models.SEIR(theta)
-
- In [3]: population = 1175e4
-
- In [4]: y = numpy.reshape(numpy.append(numpy.array(yCase), numpy.array(yDeath)), (len(yCase),2), 'F')/population
-
- In [5]: x0 = [1., 0., 49.0/population, 29.0/population]
-
- In [6]: t0 = t[0]
-
- In [7]: objLegrand = SquareLoss(theta, ode, x0, t0, t[1::], y[1::,:], ['I','R'], numpy.sqrt([population]*2))
-
-Then we optimize, first, assuming that the initial conditions are accurate. Some relatively large bounds are used for this particular problem.
-
-.. ipython::
-
- In [8]: boxBounds = [ (0.0,10.0), (0.0,10.0), (0.0,10.0) ]
-
- In [9]: res = scipy.optimize.minimize(fun=objLegrand.cost,
- ...: jac=objLegrand.sensitivity,
- ...: x0=theta,
- ...: bounds=boxBounds,
- ...: method='l-bfgs-b')
-
- In [10]: print(res)
-
- In [11]: f = plt.figure()
-
- @savefig ebola_seir_straight.png
- In [12]: objLegrand.plot()
-
- In [13]: plt.close()
-
-We can see from visual inspection that the estimated parameters are not exactly ideal. This is confirmed by the information returned from the :func:`scipy.optimize.minimize` routine, and is probably caused by the poor starting point. An attempt to find a more suitable starting value can be made by some form of parameter space exploration. Given that the evaluation of the objective function is not expensive here, we have plenty of options to choose from. To reduce the number of packages required to build this documentation, the routines from :mod:`scipy.optimize` remain our preferred option.
-
-Improved initial guess
-----------------------
-
-.. ipython::
-
- In [8]: resDE = scipy.optimize.differential_evolution(objLegrand.cost, bounds=boxBounds, polish=False, seed=20921391)
-
- In [9]: print(objLegrand.sensitivity(resDE['x']))
-
- In [10]: f = plt.figure()
-
- @savefig ebola_seir_de.png
- In [11]: objLegrand.plot()
-
- In [12]: plt.close()
-
-Looking at the output of the estimates (below this paragraph), we can see that our inference on Ebola is wrong when compared to the *known* values (from field observation), even though the graphs look "reasonable". Namely, :math:`\gamma^{-1}` (the third element in the vector below), the time from infectiousness to death, is within the expected range, but :math:`\alpha^{-1}` (the second element), the incubation period, is a lot higher than expected.
-
-.. ipython::
-
- In [1]: 1/resDE['x']
-
-Multimodal surface
-------------------
-
-A reason for this type of behavior is that we simply lack the information/data to make proper inference. Without data on the state :math:`E`, the parameters :math:`\beta,\alpha` for the two states :math:`I` and :math:`E` are informed only by observations on :math:`I`. Hence, some other random combination of :math:`\beta,\alpha` that is capable of generating realizations close to the observations on :math:`I` is feasible. In such cases, the only requirement is that there exists some :math:`\gamma` in the feasible region that can compensate for the ill-suited :math:`\beta,\alpha`. For example, we know (obtained elsewhere and not shown here) that there is another set of parameters capable of generating similar looking curves to those above. Note the reversal of magnitude in :math:`\beta` and :math:`\alpha`.
-
-.. ipython::
-
- In [11]: objLegrand.cost([3.26106524e+00, 2.24798702e-04, 1.23660721e-02])
-
- In [12]: ## objLegrand.cost([ 0.02701867, 9.00004776, 0.01031861]) # similar graph
-
- @savefig ebola_seir_prior.png
- In [13]: objLegrand.plot()
-
- In [14]: plt.close()
-
-With initial values as parameters
----------------------------------
-
-Obviously, the assumption that the whole population is susceptible is an overestimate. We now try to estimate the initial conditions of the ode as well. Given previous estimates of the parameters :math:`\hat{\beta}, \hat{\alpha}, \hat{\gamma}`, it is appropriate to start our initial guess there.
-
-Furthermore, given that we now estimate the initial values for all the states, we can use the first time point as an observation. So our time begins at :math:`t = -1`, where our observations include the previous initial condition, i.e. 49 and 29 for the number of cases and deaths at :math:`t = 0` respectively. The following code block demonstrates how we would do that; feel free to try it out yourself to see the much improved result.
-
-.. ipython::
- :verbatim:
-
- In [1]: thetaIV = theta.tolist() + x0
-
- In [2]: thetaIV[3] -= 1e-8 # to make sure that the initial guess satisfy the constraints
-
- In [3]: boxBoundsIV = boxBounds + [(0.,1.), (0.,1.), (0.,1.), (0.,1.)]
-
- In [4]: objLegrand = SquareLoss(theta, ode, x0, -1, t, y, ['I','R'], numpy.sqrt([population]*2))
-
- In [5]: resDEIV = scipy.optimize.differential_evolution(objLegrand.costIV, bounds=boxBoundsIV, polish=False, seed=20921391)
-
- In [6]: print(resDEIV)
-
- In [7]: f = plt.figure()
-
- In [8]: objLegrand.plot()
-
- In [9]: plt.close()
-
-
-Legrand Ebola SEIHFR Model
-==========================
-
-Next, we demonstrate the estimation on a model that was widely used in the recent Ebola outbreak in West Africa. Again, the model has already been defined in :mod:`.common_models`.
-
-.. ipython::
-
- In [1]: ode = common_models.Legrand_Ebola_SEIHFR()
-
- In [27]: # initial guess from the paper that studied the outbreak in Congo
-
- In [28]: theta = numpy.array([0.588,0.794,7.653, ### the beta
- ....: 10.0,9.6,5.0,2.0, ### the omega
- ....: 7.0,0.81,0.80, ### alpha, delta, theta
- ....: 100.,1.0]) ### kappa,intervention time
-
- In [29]: # initial conditions; note the extra 0.0 at the end because the model is a non-autonomous ode from which the time component has been converted out
-
- In [30]: x0 = numpy.array([population, 0.0, 49.0, 0.0, 0.0, 29.0, 0.0])/population
-
- In [30]: ode.parameters = theta
-
- In [31]: ode.initial_values = (x0, t[0])
-
- In [32]: objLegrand = SquareLoss(theta, ode, x0, t[0], t[1::], y[1::,:], ['I','R'], numpy.sqrt([population]*2))
-
-Now, it is important to set additional constraints accurately because a simple box constraint is much larger than the feasible set. Namely, :math:`\omega_{I}, \omega_{D}`, the times from onset until the end of infectiousness/death, have to be bigger than :math:`\omega_{H}`, the time from onset to hospitalization, given the nature of the disease. Therefore, we create extra inequality constraints in addition to the box constraints.
-
-.. ipython::
-
- In [549]: boxBounds = [
- .....: (0.001, 100.), # \beta_I
- .....: (0.001, 100.), # \beta_H
- .....: (0.001, 100.), # \beta_F
- .....: (0.001, 100.), # \omega_I
- .....: (0.001, 100.), # \omega_D
- .....: (0.001, 100.), # \omega_H
- .....: (0.001, 100.), # \omega_F
- .....: (0.001, 100.), # \alpha^{-1}
- .....: (0.0001, 1.), # \delta
- .....: (0.0001, 1.), # \theta
- .....: (0.001, 1000.), # \kappa
- .....: (0.,218.) # intervention time
- .....: ]
-
- In [550]: cons = ({'type': 'ineq', 'fun' : lambda x: numpy.array([x[3]-x[5], x[4]-x[5]])})
-
-We can now try to find the optimal values, but note that this is a difficult problem and the optimization can take a very long time, without any guarantee on the quality of the solution.
-
-.. ipython::
- :okexcept:
- :okwarning:
-
- In [213]: res = scipy.optimize.minimize(fun=objLegrand.cost,
- .....: jac=objLegrand.sensitivity,
- .....: x0=theta,
- .....: constraints=cons,
- .....: bounds=boxBounds,
- .....: method='SLSQP')
-
- In [214]: print(res)
-
- In [215]: f = plt.figure()
-
- @savefig ebola_legrand_runtime.png
- In [216]: objLegrand.plot()
-
- In [217]: plt.close()
-
-Evidently, the estimated parameters are very unrealistic, given that many of them are near the boundaries. It is also known from other sources that some of the epidemiological properties of Ebola differ from these estimates, with an incubation period of around 2 weeks and a mortality rate of around 80 percent.
-
-As the estimate does not appear to provide anything sensible, we also provide a set of values previously obtained (that look semi-reasonable) and plot the epidemic curves with the observations layered on top.
-
-.. ipython::
- :okexcept:
- :okwarning:
-
- In [1]: theta = numpy.array([3.96915071e-02, 1.72302620e+01, 1.99749990e+01,
- ...: 2.67759445e+01, 4.99999990e+01, 5.56122691e+00,
- ...: 4.99999990e+01, 8.51599523e+00, 9.99999000e-01,
- ...: 1.00000000e-06, 3.85807562e+00, 1.88385318e+00])
-
- In [2]: print(objLegrand.cost(theta))
-
- In [2]: solution = ode.integrate(t[1::])
-
- In [3]: f, axarr = plt.subplots(2,3)
-
- In [4]: axarr[0,0].plot(t, solution[:,0]);
-
- In [5]: axarr[0,0].set_title('Susceptible');
-
- In [6]: axarr[0,1].plot(t, solution[:,1]);
-
- In [7]: axarr[0,1].set_title('Exposed');
-
- In [8]: axarr[0,2].plot(t, solution[:,2]);
-
- In [9]: axarr[0,2].plot(t, y[:,0], 'r');
-
- In [10]: axarr[0,2].set_title('Infectious');
-
- In [11]: axarr[1,0].plot(t, solution[:,3]);
-
- In [12]: axarr[1,0].set_title('Hospitalised');
-
- In [13]: axarr[1,1].plot(t, solution[:,4]);
-
- In [14]: axarr[1,1].set_title('Awaiting Burial');
-
- In [15]: axarr[1,2].plot(t, solution[:,5]);
-
- In [16]: axarr[1,2].plot(t, y[:,1], 'r');
-
- In [17]: axarr[1,2].set_title('Removed');
-
- In [18]: f.text(0.5, 0.04, 'Days from outbreak', ha='center');
-
- In [19]: f.text(0.01, 0.5, 'Population', va='center', rotation='vertical');
-
- In [20]: f.tight_layout();
-
- @savefig ebola_seihfr_straight_prior.png
- In [21]: plt.show()
-
- In [22]: plt.close()
-
-
diff --git a/doc/doc_to_sort/faq.rst b/doc/doc_to_sort/faq.rst
deleted file mode 100644
index a57c2675..00000000
--- a/doc/doc_to_sort/faq.rst
+++ /dev/null
@@ -1,60 +0,0 @@
-.. _faq:
-
-**************************
-Frequently asked questions
-**************************
-
-Code runs slowly
-================
-
-This is because the package is not optimized for speed. Although some of the main functions are lambdified using :mod:`sympy` or compiled against :mod:`cython` when available, there are many more optimizations that could be done. One example is the following lines:
-
-.. code-block:: python
-
- J = self.Jacobian(state,t)
- G = self.Grad(state,t)
- A = numpy.dot(J,S) + G
-
-in :func:`.DeterministicOde.evalSensitivity`. The first two operations can be inlined into the third and the third line itself can be rewritten as:
-
-.. code-block:: python
-
- G += numpy.dot(J,S)
-
-and save the explicit copy operation performed by :mod:`numpy` when making ``A``. If desired, we could also have made use of the :mod:`numexpr` package, which provides a further speed up on elementwise operations, in place of numpy.
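-
-As a small, self-contained illustration of the same idea (standalone numpy only, with made-up array sizes; not pygom code):
-
-.. code-block:: python
-
- import numpy
-
- J = numpy.random.rand(3, 3)   # stand-in for the Jacobian
- S = numpy.random.rand(3, 2)   # stand-in for the sensitivities
- G = numpy.random.rand(3, 2)   # stand-in for the gradient term
-
- A = numpy.dot(J, S) + G       # allocates a temporary and a new array
- G += numpy.dot(J, S)          # accumulates in place, avoiding the extra copy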
-
-Why not compile the numeric computation from sympy against Theano?
-======================================================================
-
-Setup of the package has been simplified as much as possible. If you look closely enough, you will realize that the current code generation only uses :mod:`cython` and not :mod:`f2py`. This is because we are not prepared to do all the system checks, i.e. does a fortran compiler exist, is gcc installed, was python built as a shared library etc. We are very much aware of the benefit, especially considering the possibility of GPU computation in :mod:`theano`.
-
-Why not use mpmath library throughout?
-======================================
-
-This is because we have a fair number of operations that depend on :mod:`scipy`. Obviously, we can solve an ode using :mod:`mpmath` and do standard linear algebra. Unfortunately, the optimization and statistics packages and routines are mostly based on :mod:`numpy`.
-
-Computing the gradient using :class:`.SquareLoss` is slow
-=========================================================
-
-It will always be slow on the first call. This is due to the design, where the initialization of the class is fast and the derivative information is only found, and the functions compiled, at runtime. After the first calculation, things should be significantly faster.
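-
-A rough way to see this for yourself is to time the first and second call (a sketch that simply reuses the SIR setup from :ref:`gradient`; the exact timings will differ on your machine):
-
-.. code-block:: python
-
- import numpy, time
- from pygom import SquareLoss, common_models
-
- ode = common_models.SIR([('beta', 0.5), ('gamma', 1.0/3.0)])
- x0, t0 = [1., 1.27e-6, 0.], 0
- ode.initial_values = (x0, t0)
- t = numpy.linspace(1, 150, 100)
- solution = ode.integrate(t)
- objSIR = SquareLoss([0.2, 0.2], ode, x0, t0, t, solution[1::, 2].copy(), 'R')
-
- start = time.time()
- objSIR.sensitivity()   # first call: derivative information is found and compiled here
- print(time.time() - start)
-
- start = time.time()
- objSIR.sensitivity()   # subsequent calls are much faster
- print(time.time() - start)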
-
-**Why is some of my code not a fortran object?**
-
-When we detect either an :math:`\exp` or a :math:`\log` in the equations, we automatically force the compilation to use mpmath to ensure that we obtain the highest precision. The option to turn this on/off will be considered as a feature in the future.
-
-Can you not convert a non-autonomous system to an autonomous system for me automatically
-========================================================================================
-
-Although we could do that, it is not, and will not be, implemented. This is to ensure that end users such as yourself are fully aware of the equations being defined.
-
-Getting the sensitivities from :class:`.SquareLoss` did not get a speed up when I used a restricted set of parameters
-=====================================================================================================================
-
-This is because we currently evaluate the full set of sensitivities before extracting the required elements. Speeding this up for a restricted set is being considered. The main reason we have not implemented it is that we find the symbolic gradient of the ode before compiling it, which means that one call to the compiled function returns the full set of sensitivities and we would only be extracting the appropriate elements from the matrix; this amounts to only a small speed up. The best method would be to compile only the necessary elements of the gradient matrix, but this would require much more work both within the code and later on when variables are added/deleted, as all these compilations are performed at runtime.
-
-Why is there no option to obtain the gradient via complex differencing
-======================================================================
-
-It is currently not implemented. Feature under consideration.
-
-
diff --git a/doc/doc_to_sort/fh.rst b/doc/doc_to_sort/fh.rst
deleted file mode 100644
index 065f3285..00000000
--- a/doc/doc_to_sort/fh.rst
+++ /dev/null
@@ -1,133 +0,0 @@
-.. _fh:
-
-******************
-Example: FitzHugh
-******************
-
-Defining the model
-==================
-
-We are going to investigate another classic model, the FitzHugh-Nagumo, or simply FitzHugh. The model has already been defined in :mod:`common_models` so we can load it easily.
-
-.. ipython::
-
- In [1]: from pygom import SquareLoss, common_models
-
- In [2]: import numpy
-
- In [3]: import scipy.integrate, scipy.optimize
-
- In [4]: import math,time,copy
-
- In [5]: import matplotlib.pyplot as plt
-
- In [1]: x0 = [-1.0, 1.0]
-
- In [2]: t0 = 0
-
- In [3]: # params
-
- In [4]: paramEval = [('a',0.2), ('b',0.2), ('c',3.0)]
-
- In [5]: ode = common_models.FitzHugh(paramEval)
-
- In [5]: ode.initial_values = (x0, t0)
-
-We define a set of time points and see how the two states :math:`V` and :math:`R` are supposed to behave.
-
-.. ipython::
-
- In [6]: t = numpy.linspace(1, 20, 30).astype('float64')
-
- In [7]: solution = ode.integrate(t)
-
- @savefig fh_plot.png
- In [8]: ode.plot()
-
-Estimate the parameters
-=======================
-
-Obtaining the correct parameters for the FitzHugh model is well known to be difficult, because the surface is multimodal. This has been shown many times in the literature, so we will omit the details. Regardless, we give it a go with some initial guess; with some luck, we will be able to recover the original parameters. First, we try it out with only one target state.
-
-.. ipython::
-
- In [26]: theta = [0.5, 0.5, 0.5]
-
- In [27]: objFH = SquareLoss(theta, ode, x0, t0, t, solution[1::,1], 'R')
-
- In [28]: boxBounds = [
- ....: (0.0,5.0),
- ....: (0.0,5.0),
- ....: (0.0,5.0)
- ....: ]
-
- In [29]: res = scipy.optimize.minimize(fun=objFH.cost,
- ....: jac=objFH.sensitivity,
- ....: x0=theta,
- ....: bounds=boxBounds,
- ....: method='L-BFGS-B')
-
- In [30]: print(res)
-
-Then we try the same again, but with both states as our target. Now we won't look at the iterations because they are pretty pointless.
-
-.. ipython::
-
- In [30]: objFH = SquareLoss(theta, ode, x0, t0, t, solution[1::,:], ['V','R'])
-
- In [31]: res = scipy.optimize.minimize(fun=objFH.cost,
- ....: jac=objFH.sensitivity,
- ....: x0=theta,
- ....: bounds=boxBounds,
- ....: method='L-BFGS-B')
-
- In [32]: print(res)
-
-Note how the estimates are the same, unlike other models.
-
-Estimate initial value
-======================
-
-We can further assume that we have no idea about the initial values for :math:`V` and :math:`R` as well. We also provide a guesstimate to kick off the optimization. The input vector :math:`\theta` must have the parameters first, then the initial values, along with the corresponding bounds.
-
-First, only a single target state, i.e. we only have observations for one of the states, which is :math:`R` in this case.
-
-.. ipython::
-
- In [35]: objFH = SquareLoss(theta, ode, x0, t0, t, solution[1::,1], 'R')
-
- In [35]: boxBounds = [
- ....: (0.0,5.0),
- ....: (0.0,5.0),
- ....: (0.0,5.0),
- ....: (None,None),
- ....: (None,None)
- ....: ]
-
- In [36]: res = scipy.optimize.minimize(fun=objFH.costIV,
- ....: jac=objFH.sensitivityIV,
- ....: x0=theta + [-0.5,0.5],
- ....: bounds=boxBounds,
- ....: method='L-BFGS-B')
-
- In [37]: print(res)
-
-then both states as targets at the same time
-
-.. ipython::
-
- In [38]: objFH = SquareLoss(theta, ode, x0, t0, t, solution[1::,:], ['V','R'])
-
- In [38]: res = scipy.optimize.minimize(fun=objFH.costIV,
- ....: jac=objFH.sensitivityIV,
- ....: x0=theta + [-0.5, 0.5],
- ....: bounds=boxBounds,
- ....: method='L-BFGS-B')
-
- In [39]: print(res)
-
-See the difference between the two estimates: in the latter, both states were used, yielding superior estimates. Note that only the forward sensitivity method is implemented when estimating the initial values, and it is assumed that the starting conditions for all the states are unknown.
-
-The choice of algorithm here is **L-BFGS-B**, which is a better choice because the parameter space of the FitzHugh model is rough (i.e. has a large second derivative) as well as being multimodal. This means that the Hessian is not guaranteed to be positive definite and the approximation using :math:`J^{\top}J` is poor, with :math:`J` being the Jacobian of the objective function.
-
-
diff --git a/doc/doc_to_sort/gradient.rst b/doc/doc_to_sort/gradient.rst
deleted file mode 100644
index 121bb6c3..00000000
--- a/doc/doc_to_sort/gradient.rst
+++ /dev/null
@@ -1,302 +0,0 @@
-.. _gradient:
-
-*************************************
-Gradient estimation under square loss
-*************************************
-
-Assuming that we have a set of :math:`N` observations :math:`y_{i}` at specific time points :math:`t_{i}`, :math:`i = 1,\ldots,N`, we may wish to test an ode to see whether it fits the data. The most natural way to test such a *fit* is to minimize the sum of squares between our observations :math:`y` and the solution of the ode, and see whether the resulting solution and estimated parameters make sense.
-
-We assume that this estimation process will be tackled from a non-linear optimization point of view. However, it should be noted that such estimates can also be obtained via MCMC or from a global optimization perspective. A key element in non-linear optimization is the gradient, which is the focus of this page.
-
-Multiple ways of obtaining the gradient have been implemented. All of them serve a certain purpose and may not be a viable/appropriate option depending on the type of ode. More generally, let :math:`d,p` be the number of states and parameters respectively. Then finite difference methods have a run order of :math:`O(p+1)` of the original ode, whereas forward sensitivity requires an integration of an ode of size :math:`d(p+1)` rather than :math:`d`; for an SIR model with :math:`d = 3` and :math:`p = 2`, that is a system of nine equations instead of three. The adjoint method requires two runs of size :math:`d` in principle, but the actual run time depends on the number of observations.
-
-For the details of the classes and methods, please refer to :ref:`mod`.
-
-Notation
-========
-
-We introduce the notation that will be used in the rest of the page, some of which may be slightly unconventional but necessary due to the complexity of the problem. Let :math:`x \in \mathbb{R}^{d}` and :math:`\theta \in \mathbb{R}^{p}` be the states and parameters respectively. The terms *state* and *simulation* are used interchangeably, even though strictly speaking the state is :math:`x` whereas :math:`x(t)` is the simulation. An ode is defined as
-
-.. math::
-
- f(x,\theta) = \dot{x} = \frac{\partial x}{\partial t}
-
-and usually comes with a set of initial conditions :math:`(x_0,t_0)` where :math:`t_0 \le t_{i} \; \forall i`. Let :math:`g(x,\theta)` be a function that maps the set of states to the observations, :math:`g : \mathbb{R}^{d} \rightarrow \mathbb{R}^{m}`. For compartmental problems, which are our focus, :math:`\nabla_{\theta}g(x,\theta)` is usually zero and :math:`\nabla_{x}g(x,\theta)` is an identity function for some or all of the states :math:`x`. Denote by :math:`l(x_{0},\theta,x)` our cost function :math:`l : \mathbb{R}^{m} \rightarrow \mathbb{R}` and by :math:`L(x_{0},\theta,x)` the sum of :math:`l(\cdot)`. Both :math:`x` and :math:`x_{0}` are usually dropped for simplicity. We will be dealing exclusively with square loss here, which means that
-
-.. math::
-
- L(\theta) = \sum_{i=1}^{N} \left\| y_{i} - g(x(t_{i})) \right\|^{2} = \mathbf{e}^{\top} \mathbf{e}
-
-where :math:`\mathbf{e}` is the residual vector, with elements
-
-.. math::
-
- e_{i} = y_{i} - x(t_{i}).
-
-
-Model setup
-===========
-
-Again, we demonstrate the functionalities of our classes using an SIR model.
-
-.. ipython::
-
- In [1]: from pygom import SquareLoss, common_models
-
- In [2]: import copy,time,numpy
-
- In [2]: ode = common_models.SIR()
-
- In [3]: paramEval = [('beta',0.5), ('gamma',1.0/3.0) ]
-
- In [7]: # the initial state, normalized to zero one
-
- In [8]: x0 = [1., 1.27e-6, 0.]
-
- In [5]: # initial time
-
- In [6]: t0 = 0
-
- In [5]: ode.parameters = paramEval
-
- In [6]: ode.initial_values = (x0, t0)
-
- In [9]: # set the time sequence that we would like to observe
-
- In [10]: t = numpy.linspace(1, 150, 100)
-
- In [11]: numStep = len(t)
-
- In [11]: solution = ode.integrate(t)
-
- In [12]: y = solution[1::,2].copy()
-
- In [13]: y += numpy.random.normal(0, 0.1, y.shape)
-
-Now that we have set up the model along with some observations, obtaining the gradient only requires the end user to put the appropriate information into the class :class:`SquareLoss`. Given the initial guess :math:`\theta`
-
-.. ipython::
-
- In [210]: theta = [0.2, 0.2]
-
-We initialize the :class:`SquareLoss` simply as
-
-.. ipython::
-
- In [20]: objSIR = SquareLoss(theta, ode, x0, t0, t, y, 'R')
-
-where we also have to specify the state our observations are from. Now, we demonstrate the different methods of obtaining the gradient and the mathematics behind them.
-
-Forward sensitivity
-===================
-
-The forward sensitivity equations are derived by differentiating the states implicitly, which yields
-
-.. math::
-
- \frac{d\dot{x}}{d\theta} = \frac{\partial f}{\partial x}\frac{dx}{d\theta} + \frac{\partial f}{\partial \theta}.
-
-So finding the sensitivities :math:`\frac{dx}{d\theta}` simply requires another integration of :math:`p` coupled odes of dimension :math:`d`, each with the same Jacobian as the original ode. This integration is performed along with the original ode because of possible non-linearity.
-
-A direct call to the method :meth:`.sensitivity` computes the gradient
-
-.. ipython::
-
- In [33]: gradSens = objSIR.sensitivity()
-
-whereas :meth:`.jac` will allow the end user to obtain the Jacobian (of the objective function) and the residuals, the information required to get the gradient as we see next.
-
-.. ipython::
-
- In [33]: objJac, output = objSIR.jac(full_output=True)
-
-
-Gradient
-========
-
-Just the sensitivities alone are not enough to obtain the gradient, but we are :math:`90\%` there. Differentiating the loss function
-
-.. math::
-
- \frac{dL}{d\theta} &= \sum_{i=1}^{N}\frac{dl}{d\theta} \\
- &= \sum_{i=1}^{N} \frac{\partial l}{\partial x}\frac{dx}{d\theta} + \frac{\partial l}{\partial \theta} \\
- &= \sum_{i=1}^{N} \frac{\partial l}{\partial g}\frac{\partial g}{\partial x}\frac{dx}{d\theta} + \frac{\partial l}{\partial g}\frac{\partial g}{\partial \theta}
-
-via chain rule. When :math:`\frac{\partial g}{\partial \theta} = 0`, the total gradient simplifies to
-
-.. math::
-
- \frac{dL}{d\theta} = \sum_{i=1}^{N} \frac{\partial l}{\partial g}\frac{\partial g}{\partial x}\frac{dx}{d\theta}
-
-Obviously, the time indices are dropped above, but all the terms are evaluated only at the observed time points. More concretely, this means that
-
-.. math::
-
- \frac{\partial l(x(j),\theta)}{\partial g} = \left\{ \begin{array}{ll} -2(y_{i} - x(j)) & , \; j = t_{i} \\ 0 & \; \text{otherwise} \end{array} \right.
-
-When :math:`g(\cdot)` is an identity function (which is assumed to be the case in :class:`SquareLoss`)
-
-.. math::
-
- \frac{\partial g(x(t_{i}),\theta)}{\partial x} = I_{d}
-
-then the gradient simplifies even further as it is simply
-
-.. math::
-
- \frac{dL}{d\theta} = -2\mathbf{e}^{\top}\mathbf{S}
-
-where :math:`\mathbf{e}` is the vector of residuals and :math:`\mathbf{S} = \left[\mathbf{s}_{1},\mathbf{s}_{2},\ldots,\mathbf{s}_{n}\right]` with elements
-
-.. math::
-
- \mathbf{s}_{i} = \frac{dx}{d\theta}(t_{i}),
-
-the solution of the forward sensitivities at time :math:`t_{i}`, obtained from solving the coupled ode as mentioned previously.
-
-Jacobian
-========
-
-Now note how the gradient simplifies to :math:`-2\mathbf{e}^{\top}\mathbf{S}`. Recall that a standard result in non-linear programming states that the gradient of a sum of squares objective function :math:`L(\theta,y,x)` is
-
-.. math::
-
- \nabla_{\theta} L(\theta,y,x) = -2(\mathbf{J}^{T} \left[\mathbf{y} - \mathbf{f}(x,\boldsymbol{\theta}) \right] )^{\top}
-
-with :math:`f(x,\theta)` our non-linear function and :math:`J` our Jacobian with elements
-
-.. math::
-
- J_{i} = \frac{\partial f(x_{i},\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}.
-
-This is exactly what we have seen previously; substituting in reveals that :math:`J = \mathbf{S}`. Hence, the Jacobian is (necessarily) a by-product when we wish to obtain the gradient. In fact, this is exactly how we proceed in :meth:`.sensitivity`, where it makes an internal call to :meth:`.jac` to obtain the Jacobian first. This allows the end user to have more options when choosing which type of algorithm to use, i.e. Gauss-Newton or Levenberg-Marquardt.
-
-To check that the output is in fact the same
-
-.. ipython::
-
- In [1]: objJac.transpose().dot(-2*output['resid']) - gradSens
-
-Adjoint
-=======
-
-When the number of parameters increases, the number of sensitivities also increases, and the time required scales directly with the number of parameters. We describe another method which does not depend on the number of parameters, but rather on the number of states and observations.
-
-The full derivation will not be shown here, but we aim to provide enough information to work out the steps performed in our code. Let us write our optimization problem as
-
-.. math::
-
- \min_{\theta} \quad & \int_{t_{0}}^{T} l(x_{0},\theta,x(t)) dt \\
- \text{s.t.} \quad & \dot{x} = f(x,\theta)
-
-which is identical to the original problem but in a continuous setting. Now write the constrained problem in the Lagrangian form
-
-.. math::
-
- \min_{\theta} \; L(\theta) + \int_{t_{0}}^{T} \lambda^{\top}(\dot{x} - f(x,\theta)) dt
-
-with Lagrangian multiplier :math:`\lambda \ge 0`. After some algebraic manipulation, it can be shown that the total derivative of the Lagrangian function is
-
-.. math::
-
- \frac{dL}{d\theta} = \int_{t_{0}}^{T} \left(\frac{\partial l}{\partial \theta} - \lambda^{\top}\frac{\partial f}{\partial \theta} \right) dt.
-
-Using previously defined loss functions (the identity), the first term is zero and evaluating :math:`\frac{\partial f}{\partial \theta}` is trivial. What remains is the calculation of :math:`\lambda(t)` for :math:`t \in \left[t_{0},T\right]`.
-
-Although this still seems to be an ill-posed problem when looking at the Lagrangian function, one can in fact obtain the *adjoint equation*, under certain assumptions,
-
-.. math::
-
- \frac{d\lambda^{\top}}{dt} = \frac{\partial l}{\partial x} - \lambda^{\top}\frac{\partial f}{\partial x},
-
-which is again an integration. An unfortunate situation arises here for non-linear systems because we use the minus Jacobian in the adjoint equation. So if the eigenvalues of the Jacobian indicate that our original ode is stable, say -1, the negated eigenvalues (now 1) imply that the adjoint equation is not stable. Therefore, one must integrate backward in time to solve the adjoint equation; it cannot be solved simultaneously with the ode, unlike the forward sensitivity equations.
-
-Given a non-linear ode, we must store information about the states between :math:`t_{0}` and :math:`T` in order to perform the integration. There are two options, both of which require storing many evaluated :math:`x(j)` within the interval :math:`\left[t_{0},T\right]`. Unfortunately, only one is available: interpolating over all states and integrating using the interpolating functions. The alternative of using the observed :math:`x(j)`'s at fixed points is not competitive because we are unable to use fortran routines for the integration.
-
-The method of choice here to perform the adjoint calculation is to run a forward integration, then perform an interpolation using splines with explicit knots at the observed time points.
-
-.. ipython::
-
- In [326]: odeSIRAdjoint, outputAdjoint = objSIR.adjoint(full_output=True)
-
-This is because evaluating the Jacobian may be expensive and a Runge-Kutta method suffers as the complexity increases. In non-linear models such as those found in epidemiology, each element of the Jacobian may be the result of a complicated equation, where a linear multistep method shines as it makes as few function evaluations as possible.
-
-Note that in derivations found in the literature, the initial condition when evaluating the adjoint equation is :math:`\lambda(T)=0`. In our code, however, we use :math:`\lambda(T) = -2(y(T)-x(T))`. Recall that we have the observation :math:`y(T)` and the simulation :math:`x(T)`, so that the adjoint equation evaluated at time :math:`T` is
-
-.. math::
-
- \frac{\partial \lambda^{\top}}{\partial t} \Big|_{T} = -2(y-x)\Big|_{T} - \lambda(T)\frac{\partial f}{\partial x}\Big|_{T}
-
-with the second term equal to zero. Integration under step size :math:`h` implies that :math:`\lambda(T) \approx \lim_{h \to 0} \lambda(T-h) = -2(y(T)-x(T))`.
-
-Time Comparison
-===============
-
-A simple time comparison between the different methods reveals that the forward sensitivity method dominates the others by a wide margin. It is tempting to conclude that it is the best and should be the default at all times, but that is not true, due to the complexity of each method mentioned previously. We leave it to the end user to find out the best method for their specific problem.
-
-.. ipython::
-
- In [319]: %timeit gradSens = objSIR.sensitivity()
-
- In [326]: %timeit odeSIRAdjoint,outputAdjoint = objSIR.adjoint(full_output=True)
-
-
-Hessian
-=======
-
-The Hessian is defined by
-
-.. math::
-
- \frac{\partial^{2} l}{\partial \theta^{2}} = \left( \frac{\partial l}{\partial x} \otimes I_{p} \right) \frac{\partial^{2} x}{\partial \theta^{2}} + \frac{\partial x}{\partial \theta}^{\top}\frac{\partial^{2} l}{\partial x^{2}}\frac{\partial x}{\partial \theta}
-
-where :math:`\otimes` is the Kronecker product. Note that :math:`\nabla_{\theta} x` is the sensitivity, and the second order sensitivities can again be found via the forward method, which involves another set of odes, namely the forward-forward sensitivities
-
-.. math::
-
- \frac{\partial}{\partial t}\left(\frac{\partial^{2} x}{\partial \theta^{2}}\right) = \left( \frac{\partial f}{\partial x} \otimes I_{p} \right) \frac{\partial^{2} x}{\partial \theta^{2}} + \left( I_{d} \otimes \frac{\partial x}{\partial \theta}^{\top} \right) \frac{\partial^{2} f}{\partial x^{2}} \frac{\partial x}{\partial \theta}.
-
-From before, we know that
-
-.. math::
-
- \frac{\partial l}{\partial x} = (-2y+2x) \quad \text{and} \quad \frac{\partial^{2} l}{\partial x^{2}} = 2I_{d}
-
-so our Hessian reduces to
-
-.. math::
-
- \frac{\partial^{2} l}{\partial \theta^{2}} = \left( \left(-2y+2x\right) \otimes I_{p} \right) \frac{\partial^{2} x}{\partial \theta^{2}} + 2S^{\top}S,
-
-where the second term is a good approximation to the Hessian as mentioned previously. This is the only implementation in place so far even though obtaining the estimate this way is relatively slow.
-
-Just to demonstrate how it works, let's look at the Hessian at the optimal point. First, we obtain the optimal value
-
-.. ipython::
-
- In [211]: import scipy.linalg,scipy.optimize
-
- In [212]: boxBounds = [(0.0, 2.0), (0.0, 2.0)]
-
- In [213]: res = scipy.optimize.minimize(fun=objSIR.cost,
- .....: jac=objSIR.sensitivity,
- .....: x0=theta,
- .....: bounds=boxBounds,
- .....: method='L-BFGS-B')
-
-Then we compare the least squares estimate of the covariance matrix against our version
-
-.. ipython::
-
- In [211]: resLS, cov_x, infodict, mesg, ier = scipy.optimize.leastsq(func=objSIR.residual, x0=res['x'], full_output=True)
-
- In [212]: HJTJ, outputHJTJ = objSIR.hessian(full_output=True)
-
- In [311]: print(scipy.linalg.inv(HJTJ))
-
- In [312]: print(cov_x)
-
-Also note the difference between the Hessian and the approximation using the Jacobian, which is in fact what the least squares routine uses.
-
-.. ipython::
-
- In [313]: print(scipy.linalg.inv(outputHJTJ['JTJ']))
diff --git a/doc/doc_to_sort/initialGuess.rst b/doc/doc_to_sort/initialGuess.rst
deleted file mode 100644
index 97e6a037..00000000
--- a/doc/doc_to_sort/initialGuess.rst
+++ /dev/null
@@ -1,47 +0,0 @@
-.. _initialGuess:
-
-*******************************************
-Obtaining good initial value for parameters
-*******************************************
-
-Function Interpolation
-======================
-
-When we want to fit the model to data, one of the necessary steps is to supply the optimization procedure with a good set of initial guesses for the parameters :math:`\theta`. This may be a challenge when we do not have a good understanding of the process we are trying to model, i.e. infectious diseases may all follow the same SIR process but with vastly different incubation periods.
-
-A method to obtain such an initial guess, based on collocation, is available in this package. A restriction is that data must be present for all states. We demonstrate this using the FitzHugh-Nagumo model.
-
-
-.. ipython::
-
- In [1]: from pygom import SquareLoss, common_models, get_init
-
- In [2]: import numpy
-
- In [3]: x0 = [-1.0, 1.0]
-
- In [4]: t0 = 0
-
- In [5]: # params
-
- In [6]: paramEval = [('a',0.2), ('b',0.2), ('c',3.0)]
-
- In [7]: ode = common_models.FitzHugh(paramEval)
-
- In [8]: ode.initial_values = (x0, t0)
-
- In [8]: t = numpy.linspace(1, 20, 30).astype('float64')
-
- In [9]: solution = ode.integrate(t)
-
-Below, we try to find the initial guess without supplying any further information. The underlying method fits a cubic spline to the observations and tries to minimize the difference between the first derivative of the spline and the ode function. Varying degrees of smoothness penalty are applied to the spline, and the best set of parameters is the one that yields the smallest total error, combining both the fit of the spline against the data and the fit of the spline against the ode.
-
-.. ipython::
-
- In [10]: theta, sInfo = get_init(solution[1::,:], t, ode, theta=None, full_output=True)
-
- In [11]: print(theta)
-
- In [12]: print(sInfo)
-
-As seen above, we have obtained a very good guess of the parameters, in fact almost the same as the generating process. The information regarding the smoothing factor shows that the amount of penalty used is small, which is expected given that we use the solution of the ode as observations.
diff --git a/doc/doc_to_sort/mod/common_models.rst b/doc/doc_to_sort/mod/common_models.rst
deleted file mode 100644
index e193a655..00000000
--- a/doc/doc_to_sort/mod/common_models.rst
+++ /dev/null
@@ -1,8 +0,0 @@
-
-common_models
-=============
-
-.. automodule:: pygom.model.common_models
- :members:
- :noindex:
-
diff --git a/doc/doc_to_sort/mod/confidence_interval.rst b/doc/doc_to_sort/mod/confidence_interval.rst
deleted file mode 100644
index f0af3c3c..00000000
--- a/doc/doc_to_sort/mod/confidence_interval.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-
-loss_type
-=========
-
-.. automodule:: pygom.loss.loss_type
- :members:
- :noindex:
diff --git a/doc/doc_to_sort/mod/deterministic.rst b/doc/doc_to_sort/mod/deterministic.rst
deleted file mode 100644
index fe10281e..00000000
--- a/doc/doc_to_sort/mod/deterministic.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-deterministic
-=============
-
-.. automodule:: pygom.model.deterministic
- :members:
- :noindex:
-
\ No newline at end of file
diff --git a/doc/doc_to_sort/mod/epi_analysis.rst b/doc/doc_to_sort/mod/epi_analysis.rst
deleted file mode 100644
index 751d846b..00000000
--- a/doc/doc_to_sort/mod/epi_analysis.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-epi_analysis
-============
-
-.. automodule:: pygom.model.epi_analysis
- :members:
- :noindex:
-
\ No newline at end of file
diff --git a/doc/doc_to_sort/mod/get_init.rst b/doc/doc_to_sort/mod/get_init.rst
deleted file mode 100644
index df85fa99..00000000
--- a/doc/doc_to_sort/mod/get_init.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-get_init
-========
-
-.. automodule:: pygom.loss.get_init
- :members:
- :noindex:
-
\ No newline at end of file
diff --git a/doc/doc_to_sort/mod/index.rst b/doc/doc_to_sort/mod/index.rst
deleted file mode 100644
index 5c5362ab..00000000
--- a/doc/doc_to_sort/mod/index.rst
+++ /dev/null
@@ -1,30 +0,0 @@
-.. _mod:
-
-
-*******************
-Code documentations
-*******************
-
-=====
-model
-=====
-
-.. toctree::
-
- common_models
- transition
- deterministic
- simulate
- epi_analysis
- odeutils
-
-====
-loss
-====
-
-.. toctree::
-
- odeloss
- losstype
- confidence_interval
- get_init
diff --git a/doc/doc_to_sort/mod/losstype.rst b/doc/doc_to_sort/mod/losstype.rst
deleted file mode 100644
index 00cbf46c..00000000
--- a/doc/doc_to_sort/mod/losstype.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-
-confidence_interval
-===================
-
-.. automodule:: pygom.loss.confidence_interval
- :members:
- :noindex:
diff --git a/doc/doc_to_sort/mod/odeloss.rst b/doc/doc_to_sort/mod/odeloss.rst
deleted file mode 100644
index 65ceea3a..00000000
--- a/doc/doc_to_sort/mod/odeloss.rst
+++ /dev/null
@@ -1,17 +0,0 @@
-ode_loss
-========
-
-These are basically the interfaces for :class:`pygom.loss.BaseLoss`
-
-.. automodule:: pygom.loss.ode_loss
- :members:
- :noindex:
-
-calculations
-============
-
-The base class which contains all the calculations implemented
-
-.. automodule:: pygom.loss.base_loss
- :members:
- :noindex:
diff --git a/doc/doc_to_sort/mod/odeutils.rst b/doc/doc_to_sort/mod/odeutils.rst
deleted file mode 100644
index 87d7b550..00000000
--- a/doc/doc_to_sort/mod/odeutils.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-
-ode_utils
-=========
-
-.. automodule:: pygom.model.ode_utils
- :members:
- :noindex:
diff --git a/doc/doc_to_sort/mod/simulate.rst b/doc/doc_to_sort/mod/simulate.rst
deleted file mode 100644
index 7aaa9d01..00000000
--- a/doc/doc_to_sort/mod/simulate.rst
+++ /dev/null
@@ -1,8 +0,0 @@
-
-stochastic
-==========
-
-.. automodule:: pygom.model.simulate
- :members:
- :noindex:
-
diff --git a/doc/doc_to_sort/mod/transition.rst b/doc/doc_to_sort/mod/transition.rst
deleted file mode 100644
index 2901c1e2..00000000
--- a/doc/doc_to_sort/mod/transition.rst
+++ /dev/null
@@ -1,8 +0,0 @@
-
-transition
-==========
-
-.. automodule:: pygom.model.transition
- :members:
- :noindex:
-
diff --git a/doc/doc_to_sort/profile.rst b/doc/doc_to_sort/profile.rst
deleted file mode 100644
index dd8f0d77..00000000
--- a/doc/doc_to_sort/profile.rst
+++ /dev/null
@@ -1,433 +0,0 @@
-.. _profile:
-
-*******************************************
-Confidence Interval of Estimated Parameters
-*******************************************
-
-After obtaining the *best* fit, it is natural to report both the point estimate and the confidence interval at the :math:`\alpha` level. The easiest way to do this is by invoking the normality argument and using the Fisher information of the likelihood. As explained previously at the bottom of :ref:`gradient`, we can find the Hessian, :math:`\mathbf{H}`, or the approximated Hessian for the estimated parameters. From the Cramer--Rao inequality, we know that
-
-.. math::
- Var(\hat{\theta}) \ge \frac{1}{I(\theta)},
-
-where :math:`I(\theta)` is the Fisher information, which is the Hessian subject to regularity conditions. Given the Hessian, computing the confidence intervals is trivial. Note that this is also known as the asymptotic confidence interval, where the normality comes from invoking the CLT. There are other ways of obtaining confidence intervals; we demonstrate the ones implemented in the package. First, we set up a SIR model as seen in :ref:`sir`, which will be used throughout this page.
-
-.. ipython::
-
- In [1]: from pygom import NormalLoss, common_models
-
- In [2]: from pygom.utilR import qchisq
-
- In [3]: import numpy
-
- In [4]: import scipy.integrate
-
- In [5]: import matplotlib.pyplot as plt
-
- In [6]: import copy
-
- In [7]: ode = common_models.SIR([('beta', 0.5), ('gamma', 1.0/3.0)])
-
-and we assume that we only have observed realization from the :math:`R` compartment
-
-.. ipython::
-
- In [1]: x0 = [1, 1.27e-6, 0]
-
- In [2]: t = numpy.linspace(0, 150, 100).astype('float64')
-
- In [3]: ode.initial_values = (x0, t[0])
-
- In [4]: solution = ode.integrate(t[1::])
-
- In [5]: theta = [0.2, 0.2]
-
- In [6]: targetState = ['R']
-
- In [7]: targetStateIndex = numpy.array(ode.get_state_index(targetState))
-
- In [8]: y = solution[1::,targetStateIndex] + numpy.random.normal(0, 0.01, (len(solution[1::,targetStateIndex]), 1))
-
- In [9]: yObv = y.copy()
-
- In [10]: objSIR = NormalLoss(theta, ode, x0, t[0], t[1::], y, targetState)
-
- In [11]: boxBounds = [(1e-8, 2.0), (1e-8, 2.0)]
-
- In [12]: boxBoundsArray = numpy.array(boxBounds)
-
- In [13]: xhat = objSIR.fit(theta, lb=boxBoundsArray[:,0], ub=boxBoundsArray[:,1])
-
-Asymptotic
-==========
-
-When the estimate is obtained, say, under a squared loss or a normal assumption, the corresponding likelihood can be written down easily. In such a case, the likelihood ratio test under a Chi--squared distribution is
-
-.. math::
-
- 2 (\mathcal{L}(\hat{\boldsymbol{\theta}}) - \mathcal{L}(\boldsymbol{\theta})) \le \chi_{1 - \alpha}^{2}(k)
-
-where :math:`1-\alpha` is the size of the confidence region and :math:`k` is the degrees of freedom. The corresponding asymptotic confidence interval for parameter :math:`j` can be derived as
-
-.. math::
-
- \hat{\theta}_{j} \pm \sqrt{\chi_{1 - \alpha}^{2}(k) H_{j,j}}.
-
-A pointwise confidence interval is obtained when :math:`k = 1`. We assume in our package that a pointwise confidence interval is desired. This can be obtained simply by
-
-.. ipython::
-
- In [1]: from pygom import confidence_interval as ci
-
- In [2]: alpha = 0.05
-
- In [3]: xL, xU = ci.asymptotic(objSIR, alpha, xhat, lb=boxBoundsArray[:,0], ub=boxBoundsArray[:,1])
-
- In [4]: print(xL)
-
- In [5]: print(xU)
-
-Note that the set of bounds here is only used to check the validity of :math:`\hat{\mathbf{x}}` and is not used in the calculation of the confidence intervals. Therefore, the resulting output can lie outside of the box constraints.
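-
-If estimates that respect the box constraints are required, the asymptotic interval can simply be clipped against the bounds afterwards. A minimal sketch (the clipped variable names below are purely illustrative):
-
-.. ipython::
-    :verbatim:
-
-    In [6]: xLClipped = numpy.maximum(xL, boxBoundsArray[:,0])
-
-    In [7]: xUClipped = numpy.minimum(xU, boxBoundsArray[:,1])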
-
-Profile Likelihood
-==================
-
-Another approach to calculating the confidence interval is to tackle one parameter at a time, treating the rest of them as nuisance parameters, hence the term *profile*. Let :math:`\mathcal{L}(\boldsymbol{\theta})` be our log--likelihood with parameter :math:`\boldsymbol{\theta}`. Element :math:`\theta_{j}` is our parameter of interest and :math:`\boldsymbol{\theta}_{-j}` represents the complement such that :math:`\boldsymbol{\theta} = \theta_{j} \cup \boldsymbol{\theta}_{-j}`. For simple models such as linear regression with only regression coefficients :math:`\boldsymbol{\beta}`, we have :math:`\boldsymbol{\theta} = \boldsymbol{\beta}`.
-
-To shorten the notation, let
-
-.. math:: \mathcal{L}(\boldsymbol{\theta}_{-j} \mid \theta_{j}) = \max_{\boldsymbol{\theta}_{-j}} \mathcal{L}(\boldsymbol{\theta}_{-j} \mid \theta_{j})
- :label: nuisanceOptim
-
-which is the maximum over :math:`\boldsymbol{\theta}_{-j}` given :math:`\theta_{j}`. :math:`\hat{\boldsymbol{\theta}}` denotes the MLE of the parameters as usual. The profile--likelihood based confidence interval for :math:`\theta_{j}` is defined as
-
-.. math::
-
- \theta_{j}^{U} &= \sup \left\{ \mathcal{L}(\hat{\boldsymbol{\theta}}) - \mathcal{L}(\boldsymbol{\theta} \mid \theta_{j}) \le \frac{1}{2} \chi_{1 - \alpha}^{2}(1) \right\} \\
- \theta_{j}^{L} &= \inf \left\{ \mathcal{L}(\hat{\boldsymbol{\theta}}) - \mathcal{L}(\boldsymbol{\theta} \mid \theta_{j}) \le \frac{1}{2} \chi_{1 - \alpha}^{2}(1) \right\}
-
-where again we have made use of the normal approximation, but without imposing symmetry. The set of equations above automatically implies that the interval width is :math:`\theta_{j}^{U} - \theta_{j}^{L}` and
-
-.. math::
-
- \mathcal{L}(\hat{\boldsymbol{\theta}}) - \frac{1}{2} \chi_{1-\alpha}^{2}(1) - \mathcal{L}(\boldsymbol{\theta} \mid \theta_{j}) = 0.
-
-As mentioned previously, :math:`\boldsymbol{\theta}_{-j}` is the maximizer of the nuisance parameters, which has a gradient of zero. Combining this with the equation above yields a non--linear system of equations of size :math:`p`,
-
-.. math:: g(\boldsymbol{\theta}) = \left[ \begin{array}{c} \mathcal{L}(\boldsymbol{\theta} \mid \theta_{j}) - c \\ \frac{\partial \mathcal{L}(\boldsymbol{\theta} \mid \theta_{j})}{\partial \boldsymbol{\theta}_{-j}} \end{array} \right] = 0
- :label: obj
-
-where :math:`c = \mathcal{L}(\hat{\boldsymbol{\theta}}) - \frac{1}{2} \chi_{1-\alpha}^{2}(1)`. Solving this system of equations only needs simple Newton-like steps, possibly with correction terms as per [Venzon1988]_. We provide a function to obtain such an estimate
-
-.. ipython::
- :verbatim:
-
- In [1]: xLProfile, xUProfile, xLProfileList, xUProfileList = ci.profile(objSIR, alpha, xhat, lb=boxBoundsArray[:,0], ub=boxBoundsArray[:,1], full_output=True)
-
-but unfortunately this is not accurate most of the time due to the complicated surface at locations not around :math:`\hat{\theta}`. This is a common scenario for non--linear least squares problems because the Hessian is not guaranteed to be PSD everywhere. Therefore, a safeguard is in place to obtain :math:`\theta_{j}^{U},\theta_{j}^{L}` iteratively, by updating :math:`\theta_{j}` and finding the solution to :eq:`nuisanceOptim`.
-
-Furthermore, we also provide the functions necessary to obtain the estimates such as the four below.
-
-.. ipython::
-
- In [1]: i = 0
-
- In [1]: funcF = ci._profileF(xhat, i, 0.05, objSIR)
-
- In [2]: funcG = ci._profileG(xhat, i, 0.05, objSIR)
-
- In [3]: funcGC = ci._profileGSecondOrderCorrection(xhat, i, alpha, objSIR)
-
- In [4]: funcH = ci._profileH(xhat, i, 0.05, objSIR)
-
-where :math:`i` is the index of the parameter of interest. :func:`_profileF` is the squared norm of :eq:`obj`, which eases the optimization process for solvers that require the system of equations to be converted into a non-linear least squares form. :func:`_profileG` is the system of equations :eq:`obj`, and :func:`_profileH` is the derivative of :eq:`obj`
-
-.. math::
- \nabla g(\boldsymbol{\theta}) = \left[ \begin{array}{c} \frac{\partial \mathcal{L}(\boldsymbol{\theta} \mid \theta_{j})}{\partial \theta_{j}} \\ \frac{\partial^{2} \mathcal{L}(\boldsymbol{\theta} \mid \theta_{j})}{\partial \boldsymbol{\beta}_{-j} \partial \theta_{j}} \end{array} \right]
-
-and :func:`_profileGSecondOrderCorrection` has the second order correction [Venzon1988]_.
-
-Geometric profile likelihood
-============================
-
-Due to the difficulty in obtaining a profile likelihood via the standard Newton-like steps, we also provide a way to generate a similar result using the geometric structure of the likelihood surface. We follow the method in [Moolgavkar1987]_, which involves solving a set of differential equations
-
-.. math::
- \frac{d\beta_{j}}{dt} &= k g^{-1/2} \\
- \frac{d\boldsymbol{\beta}_{-j}}{dt} &= \frac{d\boldsymbol{\beta}_{-j}}{d\beta_{j}} \frac{d\beta_{j}}{dt},
-
-where :math:`k = \Phi(1-\alpha)` is the quantile we want to obtain under a normal distribution, and
-
-.. math::
-
- g = J_{\beta_{j}}^{\top} I^{\boldsymbol{\beta}} J_{\beta_{j}}, \quad J_{\beta_{j}} = \left( \begin{array}{c} 1 \\ \frac{d\boldsymbol{\beta}_{-j}}{d\beta_{j}} \end{array} \right).
-
-Here, :math:`J_{\beta_{j}}` is the Jacobian between :math:`\beta_{j}` and :math:`\boldsymbol{\beta}_{-j}` with the term
-
-.. math::
-
- \frac{d\boldsymbol{\beta}_{-j}}{d\beta_{j}} = -\left( \frac{\partial^{2} \mathcal{L}}{\partial \boldsymbol{\beta}_{-j}\partial \boldsymbol{\beta}_{-j}^{\top} } \right)^{-1} \frac{\partial^{2} \mathcal{L}}{\partial \beta_{j} \partial \beta_{-j}^{\top}}
-
-and hence the first element is :math:`1` (identity transformation). :math:`I^{\boldsymbol{\beta}}` is the Fisher information of :math:`\boldsymbol{\beta}`, which is
-
-.. math::
-
- I^{\boldsymbol{\beta}} = \frac{\partial \boldsymbol{\theta}}{\partial \boldsymbol{\beta}^{\top}} \Sigma^{\boldsymbol{\theta}(\boldsymbol{\beta})} \frac{\partial \boldsymbol{\theta}}{\partial \boldsymbol{\beta}}.
-
-It is simply :math:`\Sigma^{\boldsymbol{\beta}}` if :math:`\boldsymbol{\theta} = \boldsymbol{\beta}`. Different Fisher information matrices can be used for :math:`\Sigma^{\boldsymbol{\beta}}`, such as the expected or observed information, evaluated at :math:`\hat{\boldsymbol{\beta}}` or :math:`\boldsymbol{\beta}`. After some trivial algebraic manipulation, we can show that our ode boils down to
-
-.. math::
-
- \left[ \begin{array}{c} \frac{d\beta_{j}}{dt} \\ \frac{d\boldsymbol{\beta_{-j}}}{dt} \end{array} \right] = k \left[ \begin{array}{c} 1 \\ -A^{-1}w \end{array} \right] \left( v - w^{\top}A^{-1}w \right)^{-1/2}
-
-where the symbols on the RHS above correspond to partitions in the Fisher information
-
-.. math::
-
- I^{\boldsymbol{\beta}} = \left[ \begin{array}{cc} v & w^{\top} \\ w & A \end{array} \right].
-
-The integration is performed from :math:`t = 0` to :math:`1` and is all handled internally via :func:`geometric`
-
-.. ipython::
-
- In [1]: xLGeometric, xUGeometric, xLList, xUList = ci.geometric(objSIR, alpha, xhat, full_output=True)
-
- In [2]: print(xLGeometric)
-
- In [3]: print(xUGeometric)
-
-Bootstrap
-=========
-
-This is perhaps many people's favorite method for estimating confidence intervals. Although there are many ways to implement the bootstrap, the semi-parametric approach is the only logical choice (even though the underlying assumptions may be violated at times). As we have only implemented OLS-type loss functions in this package, the parametric approach seems inappropriate when there is no self-efficiency guarantee. A non-parametric approach requires at least a conditional independence assumption, something easily violated by our **ode**. The block bootstrap is an option, but we are also aware that the errors of an **ode** can be rather rigid, and consistently over/under estimate at certain periods of time.
-
-When we say semi-parametric, we mean the exchange of errors between the observations. Let our raw error be
-
-.. math::
-
- \varepsilon_{i} = y_{i} - \hat{y}_{i}
-
-where :math:`\hat{y}_{i}` is the prediction under :math:`\hat{\boldsymbol{\theta}}` for our model. Then we construct a new set of observations via
-
-.. math::
-
- y_{i}^{\ast} = \hat{y}_{i} + \varepsilon^{\ast}, \quad \varepsilon^{\ast} \sim \mathcal{F}
-
-with :math:`\mathcal{F}` being the empirical distribution of the raw errors. A new set of parameters :math:`\theta^{\ast}` is then found for each bootstrapped sample, and we obtain the :math:`\alpha` confidence interval by taking the :math:`\alpha/2` and :math:`1 - \alpha/2` quantiles. Invoking the corresponding python function yields our bootstrap estimates. Unlike :func:`asymptotic`, the bounds here are used when estimating the parameters of each bootstrap sample. An error may be returned if estimation fails for any of the bootstrap samples.
-
-.. ipython::
-
- In [1]: xLBootstrap, xUBootstrap, setX = ci.bootstrap(objSIR, alpha, xhat, iteration=10, lb=boxBoundsArray[:,0], ub=boxBoundsArray[:,1], full_output=True)
-
- In [2]: print(xLBootstrap)
-
- In [3]: print(xUBootstrap)
-
-The additional information here can be used to compute the bias and tail effects, and to test against the normality assumption. If desired, a simultaneous confidence interval can also be approximated empirically. Note, however, that because we are using a semi-parametric method here, if the model specification is wrong then the resulting estimate of the bias is also wrong. The confidence interval still has the normal approximation guarantee if the number of samples is large.
-
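-For example, a crude bias estimate and an empirical percentile interval can be read straight off the bootstrap replicates. A minimal sketch, assuming ``setX`` holds one replicate per row (the variable names introduced below are illustrative):
-
-.. ipython::
-    :verbatim:
-
-    In [4]: biasEstimate = numpy.mean(setX, axis=0) - xhat
-
-    In [5]: percentileCI = numpy.percentile(setX, [100*alpha/2, 100*(1 - alpha/2)], axis=0)
-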
-In this case, because the error in the observation is extremely small, the confidence interval is narrow.
-
-.. ipython::
-
- In [1]: import pylab as P
-
- In [2]: f = plt.figure()
-
- In [3]: n, bins, patches = P.hist(setX[:,0], 50)
-
- In [4]: P.xlabel(r'Estimates of $\beta$');
-
- In [5]: P.ylabel('Frequency');
-
- In [6]: P.title('Estimates under a semi-parametric bootstrap scheme');
-
- @savefig bootstrapCIHist.png
- In [7]: P.show()
-
- In [8]: P.close()
-
-Comparison Between Methods
-==========================
-
-Although we have shown the numerical values of the confidence intervals obtained using the different methods, it may be hard to comprehend how they vary. As they say, a picture is worth a thousand words, and given that this particular model only has two parameters, we can inspect and compare the methods visually via a contour plot. The code to perform this is shown below, but the code block will not be run, to save time and space.
-
-.. ipython ::
- :verbatim:
-
- In [1]: niter = 1000
-
- In [2]: randNum = numpy.random.rand(niter,2)*2.0
-
- In [3]: target = [objSIR.cost(randNum[i,:]) for i in range(niter)]
-
- In [4]: z = numpy.array(target)
-
- In [5]: x = randNum[:,0]
-
- In [6]: y = randNum[:,1]
-
- In [7]: from scipy.interpolate import griddata
-
- In [8]: xi = numpy.linspace(0.0, 2.0, 100)
-
- In [9]: yi = numpy.linspace(0.0, 2.0, 100)
-
- In [10]: zi = griddata((x, y), numpy.log(z), (xi[None,:], yi[:,None]), method='cubic')
-
- In [11]: fig = plt.figure()
-
- In [12]: CS = plt.contour(xi, yi, zi, linewidth=0.5)
-
- In [13]: plt.clabel(CS, fontsize=10, inline=1);
-
- In [14]: l0 = plt.scatter(xhat[0], xhat[1], marker='o', c='k', s=30)
-
- In [15]: l1 = plt.scatter(numpy.append(xL[0], xU[0]), numpy.append(xL[1], xU[1]), marker='x', c='m', s=30)
-
- In [16]: l2 = plt.scatter(numpy.append(xLBootstrap[0], xUBootstrap[0]), numpy.append(xLBootstrap[1], xUBootstrap[1]), marker='x', c='g', s=30)
-
- In [17]: l3 = plt.scatter(numpy.append(xLGeometric[0], xUGeometric[0]), numpy.append(xLGeometric[1], xUGeometric[1]), marker='x', c='r', s=30)
-
- In [19]: plt.legend((l0, l1, l2, l3), ('MLE', 'Asymptotic', 'Bootstrap', 'Geometric'), loc='upper left');
-
- In [20]: plt.ylabel(r'Estimates of $\gamma$');
-
- In [21]: plt.xlabel(r'Estimates of $\beta$');
-
- In [22]: plt.title('Location of the confidence intervals on the likelihood surface');
-
- In [23]: plt.tight_layout();
-
- In [24]: plt.show()
-
- In [25]: plt.close()
-
-In the plot above, the bootstrap confidence intervals were so close to the MLE that it is impossible to distinguish the two on such a coarse scale.
-
-Furthermore, because the geometric confidence interval is the result of an integration, we can trace the path that led to the final output shown previously. Again, we are space-conscious (and time-constrained), so the code block below will not be run.
-
-.. ipython::
- :verbatim:
-
- In [1]: fig = plt.figure()
-
- In [2]: CS = plt.contour(xi, yi, zi, linewidth=0.5)
-
- In [3]: plt.clabel(CS, fontsize=10, inline=1)
-
- In [4]: l1 = plt.scatter(xLList[0][:,0], xLList[0][:,1], marker='o', c='m', s=10);
-
- In [5]: l2 = plt.scatter(xUList[0][:,0], xUList[0][:,1], marker='x', c='m', s=10);
-
- In [6]: plt.legend((l1, l2), ('Lower CI path', 'Upper CI path'), loc='upper left');
-
- In [7]: plt.ylabel(r'Estimates of $\gamma$');
-
- In [8]: plt.xlabel(r'Estimates of $\beta$');
-
- In [9]: plt.title('Integration path of the geometric confidence intervals on the likelihood surface');
-
- In [10]: plt.tight_layout();
-
- In [11]: plt.show()
-
- In [12]: plt.close()
-
-
-Profile Likelihood Surface
-==========================
-
-To investigate why it was hard to find the profile likelihood confidence interval, we can simply look at the surface (which is simply a line, as we are profiling). We find the solution of :eq:`nuisanceOptim` for the nuisance parameters :math:`\boldsymbol{\theta}_{-j}` at various values of :math:`\theta_{j}`. Equivalently, we can minimize the original loss function as defined previously, and this is the approach taken below. We focus our attention on the parameter :math:`\beta` of our SIR model. The results are not shown here, but the existence of a solution to :eq:`obj` is evident by simply *eyeballing* the plots.
-
-.. ipython::
- :verbatim:
-
- In [1]: numIter = 100
-
- In [2]: x2 = numpy.linspace(0.0, 2.0, numIter)
-
- In [3]: funcOut = numpy.linspace(0.0, 2.0, numIter)
-
- In [4]: ode.parameters = [('beta',0.5), ('gamma',1.0/3.0)]
-
- In [5]: for i in range(numIter):
- ...: paramEval = [('beta',x2[i]), ('gamma',x2[i])]
- ...: ode2 = copy.deepcopy(ode)
- ...: ode2.parameters = paramEval
- ...: ode2.initial_values = (x0, t[0])
- ...: objSIR2 = NormalLoss(x2[i], ode2, x0, t[0], t[1::], yObv.copy(), targetState, target_param='gamma')
- ...: res = scipy.optimize.minimize(fun=objSIR2.cost,
- ...: jac=objSIR2.gradient,
- ...: x0=x2[i],
- ...: bounds=[(0,2)],
- ...: method='L-BFGS-B')
- ...: funcOut[i] = res['fun']
-
- In [10]: fig = plt.figure()
-
- In [10]: plt.plot(x2, objSIR.cost(xhat) - funcOut)
-
- In [11]: l1 = plt.axhline(-0.5*qchisq(1 - alpha, df=1), 0, 2, color='r')
-
- In [12]: plt.ylabel(r'$\mathcal{L}(\hat{\theta}) - \mathcal{L}(\theta \mid \beta)$');
-
- In [13]: plt.xlabel(r'Fixed value of $\beta$');
-
- In [14]: plt.title('Difference in objective function between MLE\n and the maximization of the nuisance parameters given the\n parameter of interest, beta in this case');
-
- In [15]: plt.tight_layout();
-
- In [16]: plt.legend((l1,), (r'$-0.5\mathcal{X}_{1 - \alpha}^{2}(1)$',), loc='lower right');
-
- @savefig profileLLMaximizerGivenBeta.png
- In [17]: plt.show() # @savefig profileLLMaximizerGivenBeta.png
-
- In [18]: plt.close()
-
-Both the upper and lower confidence intervals can be found in the profiling procedure, but the part over :math:`\beta \in \left[0,\hat{\beta}\right]` is not convex, with :math:`\hat{\beta}` being the MLE. This non--quadratic profile likelihood is due to the non-identifiability of the model given the data [Raue2009]_. For this particular case, we can fix it simply by introducing additional observations in the form of the :math:`I` state. We encourage users to try it out for themselves to confirm.
-
-.. ipython::
- :verbatim:
-
- In [1]: targetState = ['I', 'R']
-
- In [2]: targetStateIndex = numpy.array(ode.get_state_index(targetState))
-
- In [3]: y = solution[1::,targetStateIndex] + numpy.random.normal(0, 0.01, (len(solution[1::,targetStateIndex]), 1))
-
- In [4]: objSIR = NormalLoss(theta, ode, x0, t[0], t[1::], y.copy(), targetState)
-
- In [5]: xhat = objSIR.fit(theta, lb=boxBoundsArray[:,0], ub=boxBoundsArray[:,1])
-
- In [6]: for i in range(numIter):
- ...: paramEval = [('beta', x2[i]), ('gamma', x2[i])]
- ...: ode2 = copy.deepcopy(ode)
- ...: ode2.parameters = paramEval
- ...: ode2.initial_values = (x0, t[0])
- ...: objSIR2 = NormalLoss(x2[i], ode2, x0, t[0], t[1::], y.copy(), targetState, target_param='gamma')
- ...: res = scipy.optimize.minimize(fun=objSIR2.cost,
- ...: jac=objSIR2.gradient,
- ...: x0=x2[i],
- ...: bounds=[(0,2)],
- ...: method='L-BFGS-B')
- ...: funcOut[i] = res['fun']
-
- In [10]: fig = plt.figure()
-
- In [10]: plt.plot(x2, objSIR.cost(xhat) - funcOut);
-
- In [11]: l1 = plt.axhline(-0.5*qchisq(1 - alpha, df=1), 0, 2, color='r')
-
- In [12]: plt.ylabel(r'$\mathcal{L}(\hat{\theta}) - \mathcal{L}(\theta \mid \beta)$');
-
- In [13]: plt.xlabel(r'Fixed value of $\beta$');
-
- In [14]: plt.title('Profile likelihood curve for the parameter of\n interest with more observation');
-
- In [15]: plt.tight_layout();
-
- In [16]: plt.legend((l1,), (r'$-0.5\mathcal{X}_{1 - \alpha}^{2}(1)$',), loc='lower right');
-
- @savefig profileLLMaximizerGivenBetaMoreObs.png
- In [17]: plt.show() # @savefig profileLLMaximizerGivenBetaMoreObs.png
-
- In [18]: plt.close()
diff --git a/doc/doc_to_sort/ref.rst b/doc/doc_to_sort/ref.rst
deleted file mode 100644
index c1f0f93e..00000000
--- a/doc/doc_to_sort/ref.rst
+++ /dev/null
@@ -1,76 +0,0 @@
-.. _ref:
-
-**********
-References
-**********
-
-.. [Aron1984] Seasonality and period-doubling bifurcations in an epidemic model,
- Joan L. Aron and Ira B. Schwartz, Journal of Theoretical Biology, Volume 110,
- Issue 4, page 665-679, 1984
-
-.. [Brauer2008] Mathematical Epidemiology, Lecture Notes in Mathematics,
- Fred Brauer, Springer 2008
-
-.. [Cao2006] Efficient step size selection for the tau-leaping simulation
- method, Yang Cao et al., The Journal of Chemical Physics, Volume 124,
- Issue 4, page 044109, 2006
-
-.. [Finnie2016] EpiJSON: A unified data-format for epidemiology,
- Thomas Finnie et al., Epidemics, Volume 15, page 20-26, 2016
-
-.. [FitzHugh1961] Impulses and Physiological States in Theoretical Models of
- Nerve Membrane, Richard FitzHugh, Biophysical Journal, Volume 1, Issue 6,
- page 445-466, 1961
-
-.. [Gillespie1977] Exact stochastic simulation of coupled chemical reactions,
- Daniel T. Gillespie, The Journal of Physical Chemistry, Volume 81,
- Issue 25, page 2340-2361, 1977
-
-.. [Girolami2011] Riemann manifold Langevin and Hamiltonian Monte Carlo methods,
- Mark Girolami and Ben Calderhead, Journal of the Royal Statistical Society
- Series B, Volume 73, Issue 2, page 123-214, 2011.
-
-.. [Hethcote1973] Asymptotic behavior in a deterministic epidemic model,
- Herbert W. Hethcote, Bulletin of Mathematical Biology, Volume 35,
- page 607-614, 1973
-
-.. [Legrand2007] Understanding the dynamics of Ebola epidemics,
- J. Legrand et al. Epidemiology and Infection, Volume 135, Issue 4,
- page 610-621, 2007
-
-.. [Lloyd1996] Spatial Heterogeneity in Epidemic Models, A.L. Lloyd and
- R.M. May, Journal of Theoretical Biology, Volume 179,
- Issue 1, page 1-11, 1996
-
-.. [Lorenz1963] Deterministic Nonperiodic Flow, Edward N. Lorenz, Journal of
- the Atmospheric Sciences, Volume 20, Issue 2, page 130-141, 1963
-
-.. [Lotka1920] Analytical Note on Certain Rhythmic Relations in Organic Systems,
- Alfred J. Lotka, Proceedings of the National Academy of Sciences of the
- United States of America, Volume 7, Issue 7, page 410-415, 1920
-
-.. [Moolgavkar1987] Confidence Regions for Parameters of the Proportional
- Hazards Model: A Simulation Study, S.H. Moolgavkar and D.J. Venzon,
- Scandinavian Journal of Statistics, Volume 14, page 43-56, 1987
-
-.. [Press2007] Numerical Recipes 3rd Edition: The Art of Scientific Computing,
- W.H. Press et al., Cambridge University Press, 2007
-
-.. [Ramsay2007] Parameter estimation for differential equations: a generalized
- smoothing approach, Journal of the Royal Statistical Society Series B,
- James O. Ramsay et al., Volume 69, Issue 5, page 741-796, 2007
-
-.. [Raue2009] Structural and Practical Identifiability Analysis of Partially
- Observed Dynamical Models by Exploiting the Profile Likelihood,
- A. Raue et al., Bioinformatics, Volume 25, Issue 15, page 1923-1929, 2009
-
-.. [Robertson1966] The solution of a set of reaction rate equations,
- H.H. Robertson, Academic Press, page 178-182, 1966
-
-.. [vanderpol1926] On Relaxed Oscillations, Balthasar van der Pol, The London,
- Edinburgh, and Dublin Philosophical Magazine and Journal of Science,
- Volume 2, Issue 11, page 978-992, 1926
-
-.. [Venzon1988] A Method for Computing Profile-Likelihood-Based Confidence
- Intervals, D.J. Venzon and S.H. Moolgavkar, Journal of the Royal Statistical
- Society Series C (Applied Statistics), Volume 37, Issue 1, page 87-94, 1988
diff --git a/doc/doc_to_sort/stochastic.rst b/doc/doc_to_sort/stochastic.rst
deleted file mode 100644
index 98f380b9..00000000
--- a/doc/doc_to_sort/stochastic.rst
+++ /dev/null
@@ -1,269 +0,0 @@
-.. _stochastic:
-
-********************************
-Stochastic representation of ode
-********************************
-
-There are multiple interpretations of stochasticity for a deterministic ode. We have implemented two of the most common interpretations: when the parameters are realizations of some underlying distribution, and when we have a so-called chemical master equation where each transition represents a jump. Again, we use the standard SIR example as previously seen in :ref:`sir`.
-
-.. ipython::
-
- In [1]: from pygom import SimulateOde, Transition, TransitionType
-
- In [1]: import matplotlib.pyplot as plt
-
- In [1]: import numpy as np
-
- In [1]: x0 = [1, 1.27e-6, 0]
-
- In [1]: t = np.linspace(0, 150, 100)
-
- In [1]: stateList = ['S', 'I', 'R']
-
- In [1]: paramList = ['beta', 'gamma']
-
- In [1]: transitionList = [
- ...: Transition(origin='S', destination='I', equation='beta*S*I', transition_type=TransitionType.T),
- ...: Transition(origin='I', destination='R', equation='gamma*I', transition_type=TransitionType.T)
- ...: ]
-
- In [1]: odeS = SimulateOde(stateList, paramList, transition=transitionList)
-
- In [1]: odeS.parameters = [0.5, 1.0/3.0]
-
- In [1]: odeS.initial_values = (x0, t[0])
-
- In [1]: solutionReference = odeS.integrate(t[1::], full_output=False)
-
-
-Stochastic Parameter
-====================
-
-In our first scenario, we assume that the parameters follow some underlying distribution. Given that both :math:`\beta` and :math:`\gamma` in our SIR model have to be non-negative, it seems natural to use a Gamma distribution. We make use of the familiar syntax from `R `_ to define our distribution. Unfortunately, we have to define it via a tuple, where the first element is the function handle (name) and the second the parameters. Note that the parameters can be defined either as a dictionary or in the same order as in `R `_, which is the shape and then the rate in the Gamma case.
-
-.. ipython::
-
- In [1]: from pygom.utilR import rgamma
-
- In [1]: d = dict()
-
- In [1]: d['beta'] = (rgamma,{'shape':100.0, 'rate':200.0})
-
- In [1]: d['gamma'] = (rgamma,(100.0, 300.0))
-
- In [1]: odeS.parameters = d
-
- In [1]: Ymean, Yall = odeS.simulate_param(t[1::], 10, full_output=True)
-
-Note that a message is printed above when it is trying to connect to an MPI backend, as our module has the capability to compute in parallel using IPython. We have simulated a total of 10 different solutions using different parameters; the plots can be seen below
-
-.. ipython::
-
- In [1]: f, axarr = plt.subplots(1,3)
-
- In [1]: for solution in Yall:
- ...: axarr[0].plot(t, solution[:,0])
- ...: axarr[1].plot(t, solution[:,1])
- ...: axarr[2].plot(t, solution[:,2])
-
- @savefig stochastic_param_all.png
- In [1]: plt.show()
-
- In [1]: plt.close()
-
-We then see how the expected result, using the sample average of the simulations
-
-.. math::
-
- \tilde{x}(T) = \mathbb{E}\left[ \int_{t_{0}}^{T} f(\theta,x,t) dt \right]
-
-differs from the reference solution
-
-.. math::
-
- \hat{x}(T) = \int_{t_{0}}^{T} f(\mathbb{E}\left[ \theta \right],x,t) dt
-
-.. ipython::
-
- In [1]: f, axarr = plt.subplots(1,3)
-
- In [1]: for i in range(3): axarr[i].plot(t, Ymean[:,i] - solutionReference[:,i])
-
- @savefig stochastic_param_compare.png
- In [1]: plt.show()
-
- In [1]: plt.close()
-
-The difference is relatively large, especially for the :math:`S` state. We can decrease this difference by increasing the number of simulations, and more sophisticated sampling methods for the generation of random variables can also decrease the difference.
-
-In addition to using the built-in functions to represent stochasticity, we can also use standard frozen distributions from scipy. Note that it must be a frozen distribution, as that is the only way for the parameters of the distributions to propagate through the model.
-
-.. ipython::
-
- In [1]: import scipy.stats as st
-
- In [1]: d = dict()
-
- In [1]: d['beta'] = st.gamma(a=100.0, scale=1.0/200.0)
-
- In [1]: d['gamma'] = st.gamma(a=100.0, scale=1.0/300.0)
-
- In [1]: odeS.parameters = d
-
-
-Obviously, there may be scenarios where only some of the parameters are stochastic. Let's say that the :math:`\gamma` parameter is fixed at :math:`1/3`; then we simply replace the distribution information with a scalar. A quick visual inspection of the resulting plot suggests that the system of ODEs potentially has less variation when compared to the case where both parameters are stochastic.
-
-.. ipython::
-
- In [1]: d['gamma'] = 1.0/3.0
-
- In [1]: odeS.parameters = d
-
- In [1]: YmeanSingle, YallSingle = odeS.simulate_param(t[1::], 5, full_output=True)
-
- In [1]: f, axarr = plt.subplots(1,3)
-
- In [1]: for solution in YallSingle:
- ...: axarr[0].plot(t,solution[:,0])
- ...: axarr[1].plot(t,solution[:,1])
- ...: axarr[2].plot(t,solution[:,2])
-
- @savefig stochastic_param_single.png
- In [1]: plt.show()
-
- In [1]: plt.close()
-
-Continuous Markov Representation
-================================
-
-Another common method of introducing stochasticity into a set of odes is by assuming each movement in the system is the result of a jump process. More concretely, the probability of a move for transition :math:`j` is governed by an exponential distribution such that
-
-.. math::
-
- \Pr(\text{process $j$ jump within time } \tau) = \lambda_{j} e^{-\lambda_{j} \tau},
-
-where :math:`\lambda_{j}` is the rate of transition for process :math:`j` and :math:`\tau` the time elapsed after current time :math:`t`.
-
-A couple of the common algorithms for the jump process have been implemented, two of which are used during a normal simulation: the first reaction method [Gillespie1977]_ and the :math:`\tau`-Leap method [Cao2006]_. The simulation switches between the two depending on the size of the states.
-
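-To illustrate the idea behind the first reaction method, a single step can be sketched as below. This is illustrative only and not the package implementation; the rates used are those of the SIR transitions defined at the start of this page, and the helper name is made up for the example.
-
-.. ipython::
-    :verbatim:
-
-    In [1]: def first_reaction_step(x, beta, gamma):
-       ...:     S, I, R = x
-       ...:     rates = np.array([beta*S*I, gamma*I])     # rates of the S -> I and I -> R transitions
-       ...:     waits = np.random.exponential(1.0/rates)  # candidate waiting times, assumes both rates are positive
-       ...:     j = np.argmin(waits)                      # the first transition to fire wins
-       ...:     if j == 0:
-       ...:         return [S - 1, I + 1, R], waits[0]    # S -> I occurs
-       ...:     return [S, I - 1, R + 1], waits[1]        # I -> R occurs
-
-In practice, all of this is handled internally by the package; below we simply call ``simulate_jump``.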
-.. ipython::
-
- In [1]: x0 = [2362206.0, 3.0, 0.0]
-
- In [1]: stateList = ['S', 'I', 'R']
-
- In [1]: paramList = ['beta', 'gamma', 'N']
-
- In [1]: transitionList = [
- ...: Transition(origin='S', destination='I', equation='beta*S*I/N', transition_type=TransitionType.T),
- ...: Transition(origin='I', destination='R', equation='gamma*I', transition_type=TransitionType.T)
- ...: ]
-
- In [1]: odeS = SimulateOde(stateList, paramList, transition=transitionList)
-
- In [1]: odeS.parameters = [0.5, 1.0/3.0, x0[0]]
-
- In [1]: odeS.initial_values = (x0, t[0])
-
- In [1]: solutionReference = odeS.integrate(t[1::])
-
- In [1]: simX, simT = odeS.simulate_jump(t[1:10], 10, full_output=True)
-
- In [1]: f, axarr = plt.subplots(1, 3)
-
- In [1]: for solution in simX:
- ...: axarr[0].plot(t[:9], solution[:,0])
- ...: axarr[1].plot(t[:9], solution[:,1])
- ...: axarr[2].plot(t[:9], solution[:,2])
-
- @savefig stochastic_process.png
- In [1]: plt.show()
-
- In [1]: plt.close()
-
-Above, we see ten different simulations, again using the SIR model but without standardization of the initial conditions. We restrict our time frame to only the first 10 time points so that the individual changes can be seen more clearly. If we use the same time frame as the one used previously for the deterministic system (as shown below), the trajectories are smoothed out and we no longer observe the *jumps*. Looking at the raw trajectories below, it is obvious that the mean from a jump process can be very different from the deterministic solution. The reason behind this is that the jump process above was able to remove all of the initial infected individuals before any new infections occurred.
-
-.. ipython::
-
- In [1]: simX,simT = odeS.simulate_jump(t, 5, full_output=True)
-
- In [1]: simMean = np.mean(simX, axis=0)
-
- In [1]: f, axarr = plt.subplots(1,3)
-
- In [1]: for solution in simX:
- ...: axarr[0].plot(t, solution[:,0])
- ...: axarr[1].plot(t, solution[:,1])
- ...: axarr[2].plot(t, solution[:,2])
-
- @savefig stochastic_process_compare_large_n_curves.png
- In [1]: plt.show()
-
- In [1]: plt.close()
-
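-To make the discrepancy explicit, the sample mean of the jump simulations can be plotted against the deterministic reference, mirroring the comparison made earlier for the stochastic parameters. A minimal sketch, assuming ``simMean`` and ``solutionReference`` are evaluated on the same time grid:
-
-.. ipython::
-    :verbatim:
-
-    In [1]: f, axarr = plt.subplots(1,3)
-
-    In [1]: for i in range(3): axarr[i].plot(t, simMean[:,i] - solutionReference[:,i])
-
-    In [1]: plt.show()
-
-    In [1]: plt.close()
-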
-
-Repeatable Simulation
-=====================
-
-One of the possible uses of compartmental models is to generate forecasts. Although most of the time the requirement would be to have (at least point-wise) convergence in the limit, reproducibility is also important. For both types of interpretation explained above, we have given the package the capability to repeat the simulations by setting a seed. When the assumption is that the parameters follow some sort of distribution, we simply set the seed, which governs the global state of the random number generator.
-
-.. ipython::
-
- In [1]: x0 = [2362206.0, 3.0, 0.0]
-
- In [1]: odeS = SimulateOde(stateList, paramList, transition=transitionList)
-
- In [1]: d = {'beta': st.gamma(a=100.0, scale=1.0/200.0), 'gamma': st.gamma(a=100.0, scale=1.0/300.0), 'N': x0[0]}
-
- In [1]: odeS.parameters = d
-
- In [1]: odeS.initial_values = (x0, t[0])
-
- In [1]: Ymean, Yall = odeS.simulate_param(t[1::], 10, full_output=True)
-
- In [1]: np.random.seed(1)
-
- In [1]: Ymean1, Yall1 = odeS.simulate_param(t[1::], 10, full_output=True)
-
- In [1]: np.random.seed(1)
-
- In [1]: Ymean2, Yall2 = odeS.simulate_param(t[1::], 10, full_output=True)
-
- In [1]: sim_diff = [np.linalg.norm(Yall[i] - yi) for i, yi in enumerate(Yall1)]
-
- In [1]: sim_diff12 = [np.linalg.norm(Yall2[i] - yi) for i, yi in enumerate(Yall1)]
-
- In [1]: print("Different in the simulations and the mean: (%s, %s) " % (np.sum(sim_diff), np.sum(np.abs(Ymean1 - Ymean))))
-
- In [1]: print("Different in the simulations and the mean using same seed: (%s, %s) " % (np.sum(sim_diff12), np.sum(np.abs(Ymean2 - Ymean1))))
-
-In the alternative interpretation, setting the global seed is insufficient. Unlike simulation based on the parameters, where we can pre-generate all the parameter values and send them off to individual processes in the parallel backend, this is not possible here. In a nutshell, the seed does not propagate when using a parallel backend, because each *integration* requires an unknown number of random samples. Therefore, we provide an additional flag **parallel** in the function signature. By ensuring that the computation runs in serial, we can make use of the global seed and generate identical runs.
-
-.. ipython::
-
- In [1]: x0 = [2362206.0, 3.0, 0.0]
-
- In [1]: odeS = SimulateOde(stateList, paramList, transition=transitionList)
-
- In [1]: odeS.parameters = [0.5, 1.0/3.0, x0[0]]
-
- In [1]: odeS.initial_values = (x0, t[0])
-
- In [1]: simX, simT = odeS.simulate_jump(t[1:10], 10, parallel=False, full_output=True)
-
- In [1]: np.random.seed(1)
-
- In [1]: simX1, simT1 = odeS.simulate_jump(t[1:10], 10, parallel=False, full_output=True)
-
- In [1]: np.random.seed(1)
-
- In [1]: simX2, simT2 = odeS.simulate_jump(t[1:10], 10, parallel=False, full_output=True)
-
- In [1]: sim_diff = [np.linalg.norm(simX[i] - x1) for i, x1 in enumerate(simX1)]
-
- In [1]: sim_diff12 = [np.linalg.norm(simX2[i] - x1) for i, x1 in enumerate(simX1)]
-
- In [1]: print("Difference in simulation: %s" % np.sum(np.abs(sim_diff)))
-
- In [1]: print("Difference in simulation using same seed: %s" % np.sum(np.abs(sim_diff12)))
-
diff --git a/doc/doc_to_sort/transition.rst b/doc/doc_to_sort/transition.rst
deleted file mode 100644
index a36959db..00000000
--- a/doc/doc_to_sort/transition.rst
+++ /dev/null
@@ -1,183 +0,0 @@
-.. _transition:
-
-*****************
-Transition Object
-*****************
-
-The most important part of setting up the model is to correctly define the set of odes, which is based solely on the classes defined in :mod:`transition`. All transitions that get fed into the ode system need to be defined as a transition object, :class:`Transition`. It takes a total of four input arguments
-
-#. The origin state
-#. Equation that describes the process
-#. The type of transition
-#. The destination state
-
-where the first three are mandatory. To demonstrate, we go back to the SIR model defined previously in the section :ref:`sir`. Recall that the set of odes is
-
-.. math::
-
- \frac{\partial S}{\partial t} &= -\beta SI \\
- \frac{\partial I}{\partial t} &= \beta SI - \gamma I \\
- \frac{\partial R}{\partial t} &= \gamma I.
-
-We can simply define the set of odes, as seen previously, via
-
-.. ipython::
-
- In [1]: from pygom import Transition, TransitionType, common_models
-
- In [2]: ode1 = Transition(origin='S', equation='-beta*S*I', transition_type=TransitionType.ODE)
-
- In [3]: ode2 = Transition(origin='I', equation='beta*S*I - gamma*I', transition_type=TransitionType.ODE)
-
- In [4]: ode3 = Transition(origin='R', equation='gamma*I', transition_type=TransitionType.ODE)
-
-Note that we need to state explicitly the type of equation we are inputting, which is simply of type **ODE** in this case. We can confirm this has been entered correctly by putting it into :class:`DeterministicOde`
-
-.. ipython::
-
- In [1]: from pygom import DeterministicOde
-
- In [2]: stateList = ['S', 'I', 'R']
-
- In [3]: paramList = ['beta', 'gamma']
-
- In [4]: model = DeterministicOde(stateList,
- ...: paramList,
- ...: ode=[ode1, ode2, ode3])
-
-and check it
-
-.. ipython::
-
- In [1]: model.get_ode_eqn()
-
-An alternative print function, :func:`print_ode`, is also available, which may be more suitable in other situations. The default prints the formulae in a rendered format, while the other option prints out the latex format, which can be used directly in a latex document. The latter is useful as it saves typing out the formulae twice, once in the code and again in documents.
-
-.. ipython::
-
- In [1]: model.print_ode(False)
-
- In [2]: model.print_ode(True)
-
-Now we are going to show the different ways of defining the same set of odes.
-
-.. _defining-eqn:
-
-Defining the equations
-======================
-
-Recognizing that the set of odes defining the SIR model is the result of two transitions,
-
-.. math::
-
- S \rightarrow I &= \beta SI \\
- I \rightarrow R &= \gamma I
-
-where :math:`S \rightarrow I` denotes a transition from state :math:`S` to state :math:`I`. Therefore, we can simply define our model by these two transitions, but now they need to be input via the ``transition`` argument instead of the ``ode`` argument. Note that we are initializing the model using a different class, because the stochastic implementation has more operations on transitions.
-
-.. ipython::
-
- In [600]: from pygom import SimulateOde
-
- In [601]: t1 = Transition(origin='S', destination='I', equation='beta*S*I', transition_type=TransitionType.T)
-
- In [602]: t2 = Transition(origin='I', destination='R', equation='gamma*I', transition_type=TransitionType.T)
-
- In [603]: modelTrans = SimulateOde(stateList,
- .....: paramList,
- .....: transition=[t1, t2])
-
- In [604]: modelTrans.get_ode_eqn()
-
-We can see that the resulting set of odes is exactly the same, as expected. The transition matrix that defines this process can easily be visualized using graphviz. Because only certain renderers permit the use of sub- and superscripts, operators such as :math:`**` are left as they are in the equations.
-
-.. ipython::
-
- In [1]: import matplotlib.pyplot as plt
-
- In [2]: f = plt.figure()
-
- In [3]: modelTrans.get_transition_matrix()
-
- @savefig sir_transition_graph.png
- In [4]: dot = modelTrans.get_transition_graph()
-
-If we pass the transitions in via the wrong argument, as below (not run), then an error will appear.
-
-.. ipython::
-
- In [1]: # modelTrans = DeterministicOde(stateList, paramList, ode=[t1, t2])
-
-because the :class:`TransitionType` was defined explicitly as a transition instead of an ode. The same can be observed when the wrong :class:`TransitionType` is used for any of the input arguments.
-
-This, though, only encourages us to define the transitions carefully. We can also pretend that the set of odes is in fact just a set of birth processes
-
-.. ipython::
-
- In [619]: birth1 = Transition(origin='S', equation='-beta*S*I', transition_type=TransitionType.B)
-
- In [620]: birth2 = Transition(origin='I', equation='beta*S*I - gamma*I', transition_type=TransitionType.B)
-
- In [621]: birth3 = Transition(origin='R', equation='gamma*I', transition_type=TransitionType.B)
-
- In [622]: modelBirth = DeterministicOde(stateList,
- .....: paramList,
- .....: birth_death=[birth1, birth2, birth3])
-
- In [623]: modelBirth.get_ode_eqn()
-
-which yields the same result. Alternatively, we can use the negative of the equation but set it to be a death process. For example, we multiply the equations for states :math:`S` and :math:`R` by a negative sign and set the transition type to be a death process instead.
-
-.. ipython::
-
- In [624]: death1 = Transition(origin='S', equation='beta*S*I', transition_type=TransitionType.D)
-
- In [625]: birth2 = Transition(origin='I', equation='beta*S*I - gamma*I', transition_type=TransitionType.B)
-
- In [626]: death3 = Transition(origin='R', equation='-gamma*I', transition_type=TransitionType.D)
-
- In [627]: modelBD = DeterministicOde(stateList,
- .....: paramList,
- .....: birth_death=[death1, birth2, death3])
-
- In [628]: modelBD.get_ode_eqn()
-
-
-We can see that all the above ways yield the same set of odes at the end.
-
-Model Addition
-==============
-
-Because we allow the separation of transitions between states and birth/death processes, the birth/death processes can be added later on.
-
-.. ipython::
-
- In [1]: modelBD2 = modelTrans
-
- In [1]: modelBD2.param_list = paramList + ['mu', 'B']
-
- In [1]: birthDeathList = [Transition(origin='S', equation='B', transition_type=TransitionType.B),
- ...: Transition(origin='S', equation='mu*S', transition_type=TransitionType.D),
- ...: Transition(origin='I', equation='mu*I', transition_type=TransitionType.D)]
-
- In [1]: modelBD2.birth_death_list = birthDeathList
-
- In [1]: modelBD2.get_ode_eqn()
-
-So modeling can be done in stages. Start with a standard closed system and extend it with additional flows that interact with the environment.
-
-.. _transition-type:
-
-Transition type
-===============
-
-There are currently four different types of transitions allowed, which are defined in an enum class also located in :mod:`transition`. The four types are B, D, ODE and T, each representing a different type of process, with an explanation given in its corresponding value.
-
-.. ipython::
-
- In [1]: from pygom import transition
-
- In [2]: for i in transition.TransitionType:
- ...: print(str(i) + " = " + i.value)
-
-Each birth process is added to the origin state, while each death process is deducted from the state, i.e. added to the state after multiplying by a negative sign. An ode type is also added to the state, and we forbid the number of input odes from being greater than the number of states input.
diff --git a/doc/doc_to_sort/unroll/unrollBD.rst b/doc/doc_to_sort/unroll/unrollBD.rst
deleted file mode 100644
index b8be3a88..00000000
--- a/doc/doc_to_sort/unroll/unrollBD.rst
+++ /dev/null
@@ -1,66 +0,0 @@
-.. _unrollBD:
-
-ODE With Birth and Death Process
-================================
-
-We follow on from the SIR model of :ref:`unrollSimple` but with additional birth and death processes.
-
-.. math::
-
- \frac{dS}{dt} &= -\beta SI + B - \mu S\\
- \frac{dI}{dt} &= \beta SI - \gamma I - \mu I\\
- \frac{dR}{dt} &= \gamma I.
-
-which consists of two transitions and three birth and death processes
-
-.. graphviz::
-
- digraph SIR_Model {
- rankdir=LR;
- size="8"
- node [shape = circle];
- S -> I [ label = "βSI" ];
- I -> R [ label = "γI" ];
- B [height=0 margin=0 shape=plaintext width=0];
- B -> S;
- "S**2*μ" [height=0 margin=0 shape=plaintext width=0];
- S -> "S**2*μ";
- "I*μ" [height=0 margin=0 shape=plaintext width=0];
- I -> "I*μ";
- }
-
-Let's define this in terms of ODEs, and unroll it back to the individual processes.
-
-.. ipython::
-
- In [1]: from pygom import Transition, TransitionType, SimulateOde, common_models
-
- In [1]: import matplotlib.pyplot as plt
-
- In [1]: stateList = ['S', 'I', 'R']
-
- In [1]: paramList = ['beta', 'gamma', 'B', 'mu']
-
- In [1]: odeList = [
- ...: Transition(origin='S',
- ...: equation='-beta*S*I + B - mu*S',
- ...: transition_type=TransitionType.ODE),
- ...: Transition(origin='I',
- ...: equation='beta*S*I - gamma*I - mu*I',
- ...: transition_type=TransitionType.ODE),
- ...: Transition(origin='R',
- ...: equation='gamma*I',
- ...: transition_type=TransitionType.ODE)
- ...: ]
-
- In [1]: ode = SimulateOde(stateList, paramList, ode=odeList)
-
- In [1]: ode2 = ode.get_unrolled_obj()
-
- In [1]: f = plt.figure()
-
- @savefig sir_unrolled_transition_graph.png
- In [1]: ode2.get_transition_graph()
-
- In [1]: plt.close()
-
diff --git a/doc/doc_to_sort/unroll/unrollHard.rst b/doc/doc_to_sort/unroll/unrollHard.rst
deleted file mode 100644
index 2361cf4d..00000000
--- a/doc/doc_to_sort/unroll/unrollHard.rst
+++ /dev/null
@@ -1,73 +0,0 @@
-.. _unrollHard:
-
-Hard Problem
-============
-
-Now we turn to a harder problem that does not have a one-to-one mapping between all the transitions and the terms in the ODEs. We use the model in :func:`Influenza_SLIARN`, defined by
-
-.. math::
- \frac{dS}{dt} &= -S \beta (I + \delta A) \\
- \frac{dL}{dt} &= S \beta (I + \delta A) - \kappa L \\
- \frac{dI}{dt} &= p \kappa L - \alpha I \\
- \frac{dA}{dt} &= (1 - p) \kappa L - \eta A \\
- \frac{dR}{dt} &= f \alpha I + \eta A \\
- \frac{dN}{dt} &= -(1 - f) \alpha I.
-
-The outflow of state **L**, :math:`\kappa L`, is composed of two transitions, one to **I** and the other to **A**, but the ode of **L** only reflects the total flow going out of the state. The same can be said for state **I**, where the flow :math:`\alpha I` goes to both **R** and **N**. Graphically, it is a rather simple process, as shown below.
-
-.. graphviz::
-
- digraph SLIARD_Model {
- labelloc = "t";
- label = "Original transitions";
- rankdir=LR;
- size="8"
- node [shape = circle];
- S -> L [ label = "-Sβ(I + δA)/N" ];
- L -> I [ label = "κLp" ];
- L -> A [ label = "κL(1-p)" ];
- I -> R [ label = "αIf" ];
- I -> D [ label = "αI(1-f)" ];
- A -> R [ label = "ηA" ];
- }
-
-We slightly change the model by introducing a new state **D** to convert it into a closed system. The combination of states **D** and **N** is a constant, the total population, so we can remove **N**; this new system consists of six transitions. We define them explicitly as ODEs and unroll them into transitions.
-
-.. ipython::
-
- In [1]: from pygom import SimulateOde, Transition, TransitionType
-
- In [1]: stateList = ['S', 'L', 'I', 'A', 'R', 'D']
-
- In [2]: paramList = ['beta', 'p', 'kappa', 'alpha', 'f', 'delta', 'epsilon', 'N']
-
- In [3]: odeList = [
- ...: Transition(origin='S', equation='- beta*S/N*(I + delta*A)', transition_type=TransitionType.ODE),
- ...: Transition(origin='L', equation='beta*S/N*(I + delta*A) - kappa*L', transition_type=TransitionType.ODE),
- ...: Transition(origin='I', equation='p*kappa*L - alpha*I', transition_type=TransitionType.ODE),
- ...: Transition(origin='A', equation='(1 - p)*kappa * L - epsilon*A', transition_type=TransitionType.ODE),
- ...: Transition(origin='R', equation='f*alpha*I + epsilon*A', transition_type=TransitionType.ODE),
- ...: Transition(origin='D', equation='(1 - f)*alpha*I', transition_type=TransitionType.ODE) ]
-
- In [4]: ode = SimulateOde(stateList, paramList, ode=odeList)
-
- In [5]: ode.get_transition_matrix()
-
- In [6]: ode2 = ode.get_unrolled_obj()
-
- In [7]: ode2.get_transition_matrix()
-
- In [8]: ode2.get_ode_eqn()
-
-After unrolling the odes, we have the following transition graph
-
-.. ipython::
-
- @savefig sir_unrolled_transition_graph_hard.png
- In [1]: ode2.get_transition_graph()
-
- In [2]: plt.close()
-
- In [3]: print(sum(ode.get_ode_eqn() - ode2.get_ode_eqn()).simplify()) # difference
-
-which is exactly the same apart from a slightly different arrangement of symbols in some of the equations. The last line, with a value of zero, also confirms the result.
diff --git a/doc/doc_to_sort/unroll/unrollSimple.rst b/doc/doc_to_sort/unroll/unrollSimple.rst
deleted file mode 100644
index 21d3948a..00000000
--- a/doc/doc_to_sort/unroll/unrollSimple.rst
+++ /dev/null
@@ -1,54 +0,0 @@
-.. _unrollSimple:
-
-Simple Problem
-==============
-
-For a simple problem, we consider the SIR model defined by
-
-.. math::
-
- \frac{dS}{dt} &= -\beta SI \\
- \frac{dI}{dt} &= \beta SI - \gamma I \\
- \frac{dR}{dt} &= \gamma I.
-
-which consists of two transitions
-
-.. graphviz::
-
- digraph SIR_Model {
- rankdir=LR;
- size="8"
- node [shape = circle];
- S -> I [ label = "βSI" ];
- I -> R [ label = "γI" ];
- }
-
-Let's define this using the code block below
-
-.. ipython::
-
- In [1]: from pygom import SimulateOde, Transition, TransitionType
-
- In [2]: ode1 = Transition(origin='S', equation='-beta*S*I', transition_type=TransitionType.ODE)
-
- In [3]: ode2 = Transition(origin='I', equation='beta*S*I - gamma*I', transition_type=TransitionType.ODE)
-
- In [4]: ode3 = Transition(origin='R', equation='gamma*I', transition_type=TransitionType.ODE)
-
- In [6]: stateList = ['S', 'I', 'R']
-
- In [7]: paramList = ['beta', 'gamma']
-
- In [8]: ode = SimulateOde(stateList,
- ...: paramList,
- ...: ode=[ode1, ode2, ode3])
-
- In [9]: ode.get_transition_matrix()
-
-and the last line shows that the transition matrix is empty. This is the expected result, because :class:`SimulateOde` was not initialized using transitions. We populate the transition matrix below and demonstrate the difference.
-
-.. ipython::
-
- In [1]: ode = ode.get_unrolled_obj()
-
- In [2]: ode.get_transition_matrix()
diff --git a/doc/doc_to_sort/unrollOde.rst b/doc/doc_to_sort/unrollOde.rst
deleted file mode 100644
index 09b44cef..00000000
--- a/doc/doc_to_sort/unrollOde.rst
+++ /dev/null
@@ -1,13 +0,0 @@
-.. _unrollOde:
-
-****************************
-Convert ODE into transitions
-****************************
-
-As seen previously in :ref:`transition`, we can define the model via the transitions or explicitly as ODEs. There are times when we just want to test out some model from a paper and the only available information is the ODEs themselves. Even though we know that the ODEs come from some underlying transitions, breaking them down can be a time-consuming process. We provide the functionality to do this automatically.
-
-.. toctree::
-
- unroll/unrollSimple.rst
- unroll/unrollBD.rst
- unroll/unrollHard.rst
diff --git a/doc/make.bat b/doc/make.bat
deleted file mode 100644
index 597a35c2..00000000
--- a/doc/make.bat
+++ /dev/null
@@ -1,113 +0,0 @@
-@ECHO OFF
-
-REM Command file for Sphinx documentation
-set SPHINXBUILD=sphinx-build
-set BUILDDIR=_build
-set SPHINXOPTS=
-set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% source
-if NOT "%PAPER%" == "" (
- set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
-)
-
-if "%1" == "" goto help
-
-if "%1" == "help" (
- :help
- echo.Please use `make ^<target^>` where ^<target^> is one of
- echo. html to make standalone HTML files
- echo. dirhtml to make HTML files named index.html in directories
- echo. pickle to make pickle files
- echo. json to make JSON files
- echo. htmlhelp to make HTML files and a HTML help project
- echo. qthelp to make HTML files and a qthelp project
- echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
- echo. changes to make an overview over all changed/added/deprecated items
- echo. linkcheck to check all external links for integrity
- echo. doctest to run all doctests embedded in the documentation if enabled
- goto end
-)
-
-if "%1" == "clean" (
- for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
- del /q /s %BUILDDIR%\*
- goto end
-)
-
-if "%1" == "html" (
- %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
- echo.
- echo.Build finished. The HTML pages are in %BUILDDIR%/html.
- goto end
-)
-
-if "%1" == "dirhtml" (
- %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
- echo.
- echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
- goto end
-)
-
-if "%1" == "pickle" (
- %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
- echo.
- echo.Build finished; now you can process the pickle files.
- goto end
-)
-
-if "%1" == "json" (
- %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
- echo.
- echo.Build finished; now you can process the JSON files.
- goto end
-)
-
-if "%1" == "htmlhelp" (
- %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
- echo.
- echo.Build finished; now you can run HTML Help Workshop with the ^
-.hhp project file in %BUILDDIR%/htmlhelp.
- goto end
-)
-
-if "%1" == "qthelp" (
- %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
- echo.
- echo.Build finished; now you can run "qcollectiongenerator" with the ^
-.qhcp project file in %BUILDDIR%/qthelp, like this:
- echo.^> qcollectiongenerator %BUILDDIR%\qthelp\pyGenericOdeModelDoc.qhcp
- echo.To view the help file:
- echo.^> assistant -collectionFile %BUILDDIR%\qthelp\pyGenericOdeModelDoc.ghc
- goto end
-)
-
-if "%1" == "latex" (
- %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
- echo.
- echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
- goto end
-)
-
-if "%1" == "changes" (
- %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
- echo.
- echo.The overview file is in %BUILDDIR%/changes.
- goto end
-)
-
-if "%1" == "linkcheck" (
- %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
- echo.
- echo.Link check complete; look for any errors in the above output ^
-or in %BUILDDIR%/linkcheck/output.txt.
- goto end
-)
-
-if "%1" == "doctest" (
- %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
- echo.
- echo.Testing of doctests in the sources finished, look at the ^
-results in %BUILDDIR%/doctest/output.txt.
- goto end
-)
-
-:end
diff --git a/doc/requirements.txt b/doc/requirements.txt
deleted file mode 100644
index d6e1e29a..00000000
--- a/doc/requirements.txt
+++ /dev/null
@@ -1,13 +0,0 @@
-dask[complete]>=0.13.0
-graphviz>=0.4.9
-matplotlib>=1.0.0
-numpy>=1.6.0
-pandas>=0.15.0
-python-dateutil>=2.0.0
-scipy>=0.10.0
-sympy>=1.0.0
-numpydoc>=0.6.0
-#sphinx>=1.4.1
-#sphinx_rtd_theme>=0.2.0
-#ipython>=7.1.1
-jupyter-book
diff --git a/doc/source/_static/.gitignore b/doc/source/_static/.gitignore
deleted file mode 100644
index e69de29b..00000000
diff --git a/doc/source/conf.py b/doc/source/conf.py
deleted file mode 100644
index dc804650..00000000
--- a/doc/source/conf.py
+++ /dev/null
@@ -1,264 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# This file is execfile() with the current directory set to its containing dir.
-#
-# Note that not all possible configuration values are present in this
-# autogenerated file.
-#
-# All configuration values have a default; values that are commented out
-# serve to show the default.
-
-import sys
-import os
-import warnings
-
-#slight hack for graphvis on windows to ensure conda path is correct
-#if sys.platform == 'win32':
-# os.environ['PATH'] += os.pathsep + os.environ['CONDA_PREFIX'] + r'\Library\bin\graphviz'
-
-import sphinx
-if sphinx.__version__ < '1.4.1':
- raise RuntimeError("Sphinx 1.4.1 or newer required")
-
-import pygom
-
-needs_sphinx = '1.4.1'
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-sys.path.append(os.path.abspath('.'))
-sys.path.append(os.path.abspath('sphinxext'))
-#sys.path.append(os.path.abspath('../pygom'))
-
-# -- General configuration -----------------------------------------------------
-
-# Add any Sphinx extension module names here, as strings. They can be extensions
-# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = [
- 'sphinx.ext.mathjax', # 'sphinx.ext.imgmath',
- 'sphinx.ext.autodoc',
- 'sphinx.ext.autosummary',
- 'sphinx.ext.doctest',
- 'sphinx.ext.intersphinx',
- 'sphinx.ext.graphviz',# 'matplotlib.sphinxext.only_directives',
- 'matplotlib.sphinxext.plot_directive',
- 'numpydoc',
- 'IPython.sphinxext.ipython_console_highlighting',
- 'IPython.sphinxext.ipython_directive',
- 'nbsphinx'
- ]
-
-# the mapping for code in other packages
-intersphinx_mapping = {'matplotlib': ('http://matplotlib.org/', None),
- 'numpy': ('https://docs.scipy.org/doc/numpy/', None),
- 'python': ('https://docs.python.org/2', None),
- 'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None),
- 'sympy': ('http://docs.sympy.org/latest/', None)}
-
-numpydoc_show_class_members = False
-
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
-
-# The suffix of source filenames.
-source_suffix = '.rst'
-
-# The encoding of source files.
-#source_encoding = 'utf-8'
-
-# The master toctree document.
-master_doc = 'index'
-
-# General information about the project.
-project = 'PyGOM Documentation'
-copyright = '2015-2019, Public Health England'
-
-# The version info for the project you're documenting, acts as replacement for
-# |version| and |release|, also used in various other places throughout the
-# built documents.
-#
-# The short X.Y version.
-version = pygom.__version__
-# The full version, including alpha/beta/rc tags.
-release = pygom.__version__
-
-# The language for content autogenerated by Sphinx. Refer to documentation
-# for a list of supported languages.
-#language = None
-
-# There are two options for replacing |today|: either, you set today to some
-# non-false value, then it is used:
-#today = ''
-# Else, today_fmt is used as the format for a strftime call.
-#today_fmt = '%B %d, %Y'
-
-# List of documents that shouldn't be included in the build.
-#unused_docs = []
-
-# List of directories, relative to source directory, that shouldn't be searched
-# for source files.
-exclude_trees = ['_build']
-
-# The reST default role (used for this markup: `text`) to use for all documents.
-#default_role = None
-
-# If true, '()' will be appended to :func: etc. cross-reference text.
-#add_function_parentheses = True
-
-# If true, the current module name will be prepended to all description
-# unit titles (such as .. function::).
-#add_module_names = True
-
-# If true, sectionauthor and moduleauthor directives will be shown in the
-# output. They are ignored by default.
-#show_authors = False
-
-# The name of the Pygments (syntax highlighting) style to use.
-pygments_style = 'sphinx'
-
-# A list of ignored prefixes for module index sorting.
-#modindex_common_prefix = []
-
-#Set the directory to save figures in
-ipython_savefig_dir = 'savefig'
-# -- Options for HTML output ---------------------------------------------------
-
-# The theme to use for HTML and HTML Help pages. Major themes that come with
-# Sphinx are currently 'default' and 'sphinxdoc'.
-#['alabaster',sphinx_rtd_theme','classic','sphinxdoc','scrolls','agogo',
-# 'traditional','nature','haiku','pyramid','bizstyle']
-on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
-if not on_rtd: # only import and set the theme if we're building docs locally
- import sphinx_rtd_theme
- html_theme = 'sphinx_rtd_theme'
- html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
- exclude_patterns = ['doc_to_sort.*']
-else:
- # RTD will time out if we try to build the whole of the documentation so
- # ignore some of the longer bits and perhaps add them later
- # // TODO: speed up runtime for longer examples for readthedocs
- exclude_patterns = ['common_models/*.rst',
-# 'bvpSimple.rst',
- 'epi.rst',
-# 'estimate1.rst',
- 'estimate2.rst',
- 'gradient.rst',
- 'epijson.rst',
- 'fh.rst',
-# 'getting_started.rst',
- 'initialGuess.rst',
- 'profile.rst',
- 'sir.rst',
-# 'stochastic.rst',
-# 'transition.rst'
- ]
-
-# html_theme = 'sphinx_rtd_theme'
-
-# Theme options are theme-specific and customize the look and feel of a theme
-# further. For a list of options available for each theme, see the
-# documentation.
-#html_theme_options = {}
-
-# Add any paths that contain custom themes here, relative to this directory.
-#html_theme_path = []
-
-# The name for this set of Sphinx documents. If None, it defaults to
-# " v documentation".
-#html_title = None
-
-# A shorter title for the navigation bar. Default is the same as html_title.
-#html_short_title = None
-
-# The name of an image file (relative to this directory) to place at the top
-# of the sidebar.
-#html_logo = None
-
-# The name of an image file (within the static path) to use as favicon of the
-# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
-# pixels large.
-#html_favicon = None
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
-
-# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
-# using the given strftime format.
-#html_last_updated_fmt = '%b %d, %Y'
-
-# If true, SmartyPants will be used to convert quotes and dashes to
-# typographically correct entities.
-#html_use_smartypants = True
-
-# Custom sidebar templates, maps document names to template names.
-#html_sidebars = {}
-
-# Additional templates that should be rendered to pages, maps page names to
-# template names.
-#html_additional_pages = {}
-
-# If false, no module index is generated.
-#html_use_modindex = True
-
-# If false, no index is generated.
-#html_use_index = True
-
-# If true, the index is split into individual pages for each letter.
-#html_split_index = False
-
-# If true, links to the reST sources are added to the pages.
-#html_show_sourcelink = True
-
-# If true, an OpenSearch description file will be output, and all pages will
-# contain a tag referring to it. The value of this option must be the
-# base URL from which the finished HTML is served.
-#html_use_opensearch = ''
-
-# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
-#html_file_suffix = ''
-
-# Output file base name for HTML help builder.
-htmlhelp_basename = 'PyGOM Documentation'
-
-html_add_permalinks = ''
-
-# -- Options for LaTeX output --------------------------------------------------
-
-# The paper size ('letter' or 'a4').
-#latex_paper_size = 'letter'
-
-# The font size ('10pt', '11pt' or '12pt').
-#latex_font_size = '10pt'
-
-# Grouping the document tree into LaTeX files. List of tuples
-# (source start file, target name, title, author, documentclass [howto/manual]).
-latex_documents = [
- ('index', 'PyGOM.tex', 'PyGOM Documentation',
- 'Edwin Tye', 'manual'),
-]
-
-# The name of an image file (relative to this directory) to place at the top of
-# the title page.
-#latex_logo = None
-
-# For "manual" documents, if this is true, then toplevel headings are parts,
-# not chapters.
-#latex_use_parts = False
-
-# Additional stuff for the LaTeX preamble.
-latex_preamble = '\\usepackage{amsmath,amssymb}'
-
-# Documents to append as an appendix to all manuals.
-#latex_appendices = []
-
-# If false, no module index is generated.
-#latex_use_modindex = True
-
-# latex_encodings = 'utf-8'
-warnings.filterwarnings("ignore", category=UserWarning,
- message='Matplotlib is currently using agg, which is a'
- ' non-GUI backend, so cannot show the figure.')
diff --git a/doc/source/getting_started.rst b/doc/source/getting_started.rst
deleted file mode 100644
index cf58249b..00000000
--- a/doc/source/getting_started.rst
+++ /dev/null
@@ -1,87 +0,0 @@
-.. _getting_started:
-
-***************
-Getting started
-***************
-
-.. _package-purpose:
-
-What this package does
-======================
-
-The purpose of this package is to allow the end user to easily define a set of
-ordinary differential equations (ODE) and obtain information about the ODE by
-invoking the the appropriate methods. Here, we define the set of ODE's
-as
-
-.. math::
- \frac{\partial \mathbf{x}}{\partial t} = f(\mathbf{x},\boldsymbol{\theta})
-
-where :math:`\mathbf{x} = \left(x_{1},x_{2},\ldots,x_{n}\right)` is the state
-vector with :math:`d` state and :math:`\boldsymbol{\theta}` the parameters of
-:math:`p` dimension. Currently, this package allows the user to find the
-algebraic expression of the ODE, Jacobian, gradient and forward sensitivity of
-the ODE. A numerical output is given when all the state and parameter values
-are provided. Note that the only important class is :file:`DeterministicOde`
-where all the functionality described previously are exposed.
-
-The current plan is to extend the functionality to include
-
-* Solving the ode analytically when it is linear
-
-* Analysis of the system via eigenvalues during the integration
-
-* Detection of DAE
-
-
-.. _installing-docdir:
-
-Obtaining the package
-=====================
-
-The location of the package is current on GitHub and can be pulled via https
-from::
-
- https://github.com/PublicHealthEngland/pygom.git
-
-The package is currently as follows::
-
- pygom/
- bin/
- doc/
- pygom/
- loss/
- tests/
- model/
- tests/
- sbml_translate/
- utilR/
- LICENSE.txt
- README.rst
- requirements.txt
- setup.py
-
-with files in each of the three main folders not shown. You can install the
-package via command line::
-
- python setup.py install
-
-or locally on a user level::
-
- python setup.py install --user
-
-Please note that there are current redundant file are kept for development
-purposes for the time being.
-
-.. _testing-the-package:
-
-Testing the package
-===================
-
-Testing can be performed prior or after the installation. Some standard test
-files can be found in their respective folder and they can be run in the command
-line::
-
- python setup.py test
-
-which can be performed prior to installing the package if desired.
diff --git a/doc/source/index.rst b/doc/source/index.rst
deleted file mode 100644
index e4625e13..00000000
--- a/doc/source/index.rst
+++ /dev/null
@@ -1,63 +0,0 @@
-#####################################
-Welcome to the documentation of pygom
-#####################################
-
-PyGOM (Python Generic ODE Model) is a Python package that aims to facilitate the application of ordinary differential equations (ODEs) in the real world,
-with a focus in epidemiology.
-This package helps the user define their ODE system in an intuitive manner and provides convenience functions -
-making use of various algebraic and numerical libraries in the backend - that can be used in a straight forward fashion.
-
-This is an open source project hosted on `Github `_.
-
-A manuscript containing a shortened motivation and use is hosted on `arxXiv `_.
-
-# // TODO Insert intro text
-
-##################
-User Documentation
-##################
-
-.. toctree::
- :maxdepth: 5
-
- getting_started.rst
- #sir.rst
- #notebooks/sir.ipynb
- #transition.rst
- #stochastic.rst
- #unrollOde.rst
- #epi.rst
- #epijson.rst
- #bvpSimple.rst
- #gradient.rst
- #fh.rst
- #estimate1.rst
- #estimate2.rst
- #initialGuess.rst
- #profile.rst
- #common_models.rst
-
-##########################
-Code Documentation and FAQ
-##########################
-
-.. toctree::
- :maxdepth: 5
-
- #faq.rst
- #mod/index.rst
-
-##########
-References
-##########
-
-.. toctree::
-
- #ref.rst
-
-##################
-Indices and tables
-##################
-
-* :ref:`genindex`
-* :ref:`modindex`
diff --git a/doc/source/sir.rst b/doc/source/sir.rst
deleted file mode 100644
index 86ed2b22..00000000
--- a/doc/source/sir.rst
+++ /dev/null
@@ -1,281 +0,0 @@
-.. _sir:
-
-*****************************
-Motivating Example: SIR Model
-*****************************
-
-Defining the model
-==================
-
-First, we are going to go through an SIR model to show the functionality of the package. The SIR model is defined by the following equations
-
-.. math::
-
- \frac{dS}{dt} &= -\beta SI \\
- \frac{dI}{dt} &= \beta SI- \gamma I \\
- \frac{dR}{dt} &= \gamma I.
-
-We can set this up as follows
-
-.. ipython::
-
- In [32]: # first we import the classes require to define the transitions
-
- In [33]: from pygom import Transition, TransitionType
-
- In [34]: # define our state
-
- In [35]: stateList = ['S', 'I', 'R']
-
- In [36]: # and the set of parameters, which only has two
-
- In [37]: paramList = ['beta', 'gamma']
-
- In [38]: # then the set of ode
-
- In [38]: odeList = [
- ....: Transition(origin='S', equation='-beta*S*I', transition_type=TransitionType.ODE),
- ....: Transition(origin='I', equation='beta*S*I - gamma*I', transition_type=TransitionType.ODE),
- ....: Transition(origin='R', equation='gamma*I', transition_type=TransitionType.ODE)
- ....: ]
-
-Here, we have invoke a class from :mod:`Transition` to define the transition object. We proceed here and ignore the details for now. The details of defining a transition object will be covered later in :ref:`transition`. Both the set of states and parameters should be defined when constructing the object, even though not explicitly enforced, to help clarify what we are trying to model. Similarly, this holds for the rest, such as the derived parameters and transitions, where we force the end user to input the different type of transition/process via the corret argument. See :ref:`defining-eqn` for an example when the input is wrong.
-
-.. ipython::
-
- In [39]: # now we import the ode module
-
- In [40]: from pygom import DeterministicOde
-
- In [41]: # initialize the model
-
- In [42]: model = DeterministicOde(stateList,
- ....: paramList,
- ....: ode=odeList)
-
-That is all the information required to define a simple SIR model. We can verify the equations by
-
-.. ipython::
-
- In [40]: model.get_ode_eqn()
-
-where we can see the equations corresponding to their respective :math:`S,I` and :math:`R` state. The set of ode is in the standard :math:`S,I,R` sequence because of how the states are defined initially. We can change them around
-
-.. ipython::
-
- In [59]: # now we are going to define in a different order. note that the output ode changed with the input state
-
- In [60]: stateList = ['R', 'S', 'I']
-
- In [61]: model = DeterministicOde(stateList,
- ....: paramList,
- ....: ode=odeList)
-
- In [62]: model.get_ode_eqn()
-
-and find that the set of ode's still comes out in the correct order with respect to how the states are ordered. In addition to showing the ode in English, we can also display it as either symbols or latex code which save some extra typing when porting the equations to a proper document.
-
-.. ipython::
-
- In [1]: model.print_ode()
-
- In [2]: model.print_ode(True)
-
-The SIR model above was defined as a set of explicit ODEs. An alternative way is to define the model using a series of transitions between the states. We have provided the capability to obtain a *best guess* transition matrix when only the ODEs are available. See the section :ref:`unrollOde` for more information, and in particular :ref:`unrollSimple` for the continuing demonstration of the SIR model.
-
-
-Model information
-=================
-
-The most obvious thing information we wish to know about an ode is whether it is linear
-
-.. ipython::
-
- In [65]: model.linear_ode()
-
-which we know is not for an SIR. So we may want to have a look at the Jacobian say, it is as simple as
-
-.. ipython::
-
- In [64]: model.get_jacobian_eqn()
-
-or maybe we want to know the gradient (of the ode)
-
-.. ipython::
-
- In [65]: model.get_grad_eqn()
-
-Invoking the functions that computes :math:`f(x)` (or the derivatives) like below will output an error (not run)
-
-.. ipython::
-
- In [66]: # model.ode()
-
- In [67]: # model.jacobian()
-
-This is because the some of the functions are used to solve the ode numerically and expect input values of both state and time. But just invoking the two methods above without defining the parameter value, such as the second line below, will also throws an error.
-
-.. ipython::
-
- In [77]: initialState = [0, 1, 1.27e-6]
-
- In [78]: # model.ode(state=initialState, t=1)
-
-It is important to note at this point that the numeric values of the states need to be set in the correct order against the list of states, which can be found by
-
-.. ipython::
-
- In [79]: model.state_list
-
-There is currently no mechanism to set the numeric values of the states along with the state. This is because of implementation issue with external package, such as solving an initial value problem.
-
-Initial value problem
-=====================
-
-Setting the parameters will allow us to evaluate
-
-.. ipython::
-
- In [80]: # define the parameters
-
- In [81]: paramEval = [
- ....: ('beta',0.5),
- ....: ('gamma',1.0/3.0)
- ....: ]
-
- In [82]: model.parameters = paramEval
-
- In [83]: model.ode(initialState, 1)
-
-Now we are well equipped with solving an initial value problem, using standard numerical integrator such as :func:`odeint ` from :mod:`scipy.integrate`. We also used :mod:`matplotlib.pyplot` for plotting and :func:`linspace ` to create the time vector.
-
-.. ipython::
-
- In [96]: import scipy.integrate
-
- In [97]: import numpy
-
- In [98]: t = numpy.linspace(0, 150, 100)
-
- In [99]: solution = scipy.integrate.odeint(model.ode, initialState, t)
-
- In [100]: import matplotlib.pyplot as plt
-
- In [101]: plt.figure();
-
- In [102]: plt.plot(t, solution[:,0], label='R');
-
- In [103]: plt.plot(t, solution[:,1], label='S');
-
- In [104]: plt.plot(t, solution[:,2], label='I');
-
- In [105]: plt.xlabel('Time');
-
- In [106]: plt.ylabel('Population proportion');
-
- In [107]: plt.title('Standard SIR model');
-
- In [108]: plt.legend(loc=0);
-
- @savefig sir_plot.png
- In [109]: plt.show();
-
- In [110]: plt.close()
-
-Where a nice standard SIR progression can be observed in the figure above. Alternatively, we can also integrate and plot via the **ode** object which we have initialized.
-
-.. ipython::
-
- In [1]: model.initial_values = (initialState, t[0])
-
- In [2]: model.parameters = paramEval
-
- In [3]: solution = model.integrate(t[1::])
-
- In [4]: model.plot()
-
-The plot is not shown as it is identical to the one above without the axis labels. Obviously, we can solve the ode above using the Jacobian as well. Unfortunately, it does not help because the number of times the Jacobian was evaluated was zero, as expected given that our set of equations are not stiff.
-
-.. ipython::
-
- In [583]: %timeit solution1, output1 = scipy.integrate.odeint(model.ode, initialState, t, full_output=True)
-
- In [584]: %timeit solution2, output2 = scipy.integrate.odeint(model.ode, initialState, t, Dfun=model.jacobian, mu=None, ml=None, full_output=True)
-
- In [584]: %timeit solution3, output3 = model.integrate(t, full_output=True)
-
-It is important to note that we return our Jacobian as a dense square matrix. Hence, the two argument (mu,ml) for the ode solver was set to ``None`` to let it know the output explicitly.
-
-Solving the forward sensitivity equation
-========================================
-
-Likewise, the sensitivity equations are also solved as an initial value problem. Let us redefine the model in the standard SIR order and we solve it with the sensitivity all set at zero, i.e. we do not wish to infer the initial value of the states
-
-.. ipython::
-
- In [452]: stateList = ['S', 'I', 'R']
-
- In [453]: model = DeterministicOde(stateList,
- .....: paramList,
- .....: ode=odeList)
-
- In [454]: initialState = [1, 1.27e-6, 0]
-
- In [455]: paramEval = [
- .....: ('beta', 0.5),
- .....: ('gamma', 1.0/3.0)
- .....: ]
-
- In [456]: model.parameters = paramEval
-
- In [457]: solution = scipy.integrate.odeint(model.ode_and_sensitivity, numpy.append(initialState, numpy.zeros(6)), t)
-
- In [458]: f,axarr = plt.subplots(3,3);
-
- In [459]: # f.text(0.5,0.975,'SIR with forward sensitivity solved via ode',fontsize=16,horizontalalignment='center',verticalalignment='top');
-
- In [460]: axarr[0,0].plot(t, solution[:,0]);
-
- In [461]: axarr[0,0].set_title('S');
-
- In [462]: axarr[0,1].plot(t, solution[:,1]);
-
- In [463]: axarr[0,1].set_title('I');
-
- In [464]: axarr[0,2].plot(t, solution[:,2]);
-
- In [465]: axarr[0,2].set_title('R');
-
- In [466]: axarr[1,0].plot(t, solution[:,3]);
-
- In [467]: axarr[1,0].set_title(r'state S parameter $\beta$');
-
- In [468]: axarr[2,0].plot(t, solution[:,4]);
-
- In [469]: axarr[2,0].set_title(r'state S parameter $\gamma$');
-
- In [470]: axarr[1,1].plot(t, solution[:,5]);
-
- In [471]: axarr[1,1].set_title(r'state I parameter $\beta$');
-
- In [472]: axarr[2,1].plot(t, solution[:,6]);
-
- In [473]: axarr[2,1].set_title(r'state I parameter $\gamma$');
-
- In [474]: axarr[1,2].plot(t, solution[:,7]);
-
- In [475]: axarr[1,2].set_title(r'state R parameter $\beta$');
-
- In [476]: axarr[2,2].plot(t, solution[:,8]);
-
- In [477]: axarr[2,2].set_title(r'state R parameter $\gamma$');
-
- In [478]: plt.tight_layout();
-
- @savefig sir_sensitivity_plot.png
- In [480]: plt.show();
-
- In [481]: plt.close()
-
-This concludes the introductory example and we will be moving on to look at parameter estimation next in :ref:`estimate1` and the most important part in terms of setting up the ode object; defining the equations in various different ways in :ref:`transition`.
-
diff --git a/docs/_config.yml b/docs/_config.yml
index 3b1d7024..84aebe0b 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -14,6 +14,7 @@ only_build_toc_files: true
# this could avoid the issue of execution timing out
# See https://jupyterbook.org/content/execute.html
execute:
+ allow_errors: true
execute_notebooks: cache
timeout: -1
diff --git a/docs/_toc.yml b/docs/_toc.yml
index 08486baa..2ab98058 100644
--- a/docs/_toc.yml
+++ b/docs/_toc.yml
@@ -4,47 +4,65 @@
format: jb-book
root: md/intro
parts:
- - caption: User documentation
+ - caption: Getting started
chapters:
- - file: md/getting_started
- - file: notebooks/sir
- - file: notebooks/transition
- - file: notebooks/stochastic
- - file: md/unrollOde
- sections:
- - file: notebooks/unroll/unrollSimple
- - file: notebooks/unroll/unrollBD
- - file: notebooks/unroll/unrollHard
- - file: notebooks/epi
- - file: notebooks/epijson
+ - file: md/installation
+ - file: md/building_doc
+ - caption: PyGOM workflow
+ chapters:
+ - file: notebooks/model_spec
+ - file: notebooks/insights
+ sections:
+ - file: notebooks/extract_info
+ #- file: notebooks/epi
+ - file: notebooks/unroll/unrollSimple
+ - file: md/solving
+ sections:
+ - file: notebooks/model_params
+ - file: notebooks/model_solver
+ - file: notebooks/time_dependent_params
+ #- file: notebooks/transition
+ # - file: md/unrollOde
+ # sections:
+ # - file: notebooks/unroll/unrollSimple
+ # - file: notebooks/unroll/unrollBD
+ # - file: notebooks/unroll/unrollHard
- file: md/parameter_fitting
sections:
+ - file: notebooks/paramfit/params_via_abc
+ - file: notebooks/paramfit/params_via_optimization
- file: notebooks/paramfit/bvpSimple
- - file: notebooks/paramfit/gradient
- - file: notebooks/paramfit/fh
- - file: notebooks/paramfit/estimate1
- - file: notebooks/paramfit/estimate2
- - file: notebooks/paramfit/initialGuess
- - file: notebooks/paramfit/profile
+ - file: notebooks/epijson
+ # - file: notebooks/paramfit/gradient
+ # - file: notebooks/paramfit/fh
+ # - file: notebooks/paramfit/estimate1
+ # - file: notebooks/paramfit/estimate2
+ # - file: notebooks/paramfit/initialGuess
+ # - file: notebooks/paramfit/profile
+ #- caption: Appendix
+ # chapters:
+ # - file: notebooks/paramfit/gradient
+ # - file: notebooks/paramfit/profile
- caption: Common biological compartmental models
chapters:
- file: md/common_models
sections:
- file: notebooks/common_models/SIS
+ - file: notebooks/common_models/SIR
+ - file: notebooks/common_models/SEIR
- file: notebooks/common_models/SIS_Periodic
- - file: notebooks/common_models/SIR
- file: notebooks/common_models/SIR_Birth_Death
- - file: notebooks/common_models/SEIR
- - file: notebooks/common_models/SEIR_Multiple
- - file: notebooks/common_models/SEIR_Birth_Death
- - file: notebooks/common_models/SEIR_Birth_Death_Periodic
+ - file: notebooks/common_models/SEIR_Multiple
+ - file: notebooks/common_models/SEIR_Birth_Death_Periodic_Waning_Intro
+ #- file: notebooks/common_models/SEIR_Birth_Death
+ #- file: notebooks/common_models/SEIR_Birth_Death_Periodic
- file: notebooks/common_models/Legrand_Ebola_SEIHFR
- file: notebooks/common_models/Lotka_Volterra
- - file: notebooks/common_models/Lotka_Volterra_4State
+ #- file: notebooks/common_models/Lotka_Volterra_4State
- file: notebooks/common_models/FitzHugh
- - file: notebooks/common_models/Lorenz
- - file: notebooks/common_models/vanDelPol
- - file: notebooks/common_models/Robertson
+ #- file: notebooks/common_models/Lorenz
+ #- file: notebooks/common_models/vanDelPol
+ #- file: notebooks/common_models/Robertson
- caption: Frequently asked questions
chapters:
- file: md/faq
diff --git a/docs/bib/ref.bib b/docs/bib/ref.bib
index b7509dd4..86fd6ec2 100644
--- a/docs/bib/ref.bib
+++ b/docs/bib/ref.bib
@@ -60,16 +60,15 @@ @article{FitzHugh1961
volume = {1},
year = {1961},
}
-@inproceedings{Gillespie1977,
- abstract = {There are two formalisms for mathematically describing the time behavior of a spatially homogeneous chemical system: The deterministic approach regards the time evolution as a continuous, wholly predictable process which is governed by a set of coupled, ordinary differential equations (the "reaction-rate equations"); the stochastic approach regards the time evolution as a kind of random-walk process which is governed by a single differential-difference equation (the "master equation"). Fairly simple kinetic theory arguments show that the stochastic formulation of chemical kinetics has a firmer physical basis than the deterministic formulation, but unfortunately the stochastic master equation is often mathematically intractable. There is, however, a way to make exact numerical calculations within the framework of the stochastic formulation without having to deal with the master equation directly. It is a relatively simple digital computer algorithm which uses a rigorously derived Monte Carlo procedure to numerically simulate the time evolution of the given chemical system. Like the master equation, this "stochastic simulation algorithm" correctly accounts for the inherent fluctuations and correlations that are necessarily ignored in the deterministic formulation. In addition, unlike most procedures for numerically solving the deterministic reaction-rate equations, this algorithm never approximates infinitesimal time increments dt by finite time steps Δt. The feasibility and utility of the simulation algorithm are demonstrated by applying it to several well-known model chemical systems, including the Lotka model, the Brusselator, and the Oregonator.},
- author = {Daniel T. Gillespie},
- doi = {10.1021/j100540a008},
- issn = {00223654},
- issue = {25},
- journal = {Journal of Physical Chemistry},
- title = {Exact stochastic simulation of coupled chemical reactions},
- volume = {81},
- year = {1977},
+@article{Gillespie1977,
+ title={Exact stochastic simulation of coupled chemical reactions},
+ author={Gillespie, Daniel T},
+ journal={The Journal of Physical Chemistry},
+ volume={81},
+ number={25},
+ pages={2340--2361},
+ year={1977},
+ publisher={ACS Publications}
}
@article{Girolami2011,
abstract = {The paper proposes Metropolis adjusted Langevin and Hamiltonian Monte Carlo sampling methods defined on the Riemann manifold to resolve the shortcomings of existing Monte Carlo algorithms when sampling from target densities that may be high dimensional and exhibit strong correlations. The methods provide fully automated adaptation mechanisms that circumvent the costly pilot runs that are required to tune proposal densities for Metropolis-Hastings or indeed Hamiltonian Monte Carlo and Metropolis adjusted Langevin algorithms. This allows for highly efficient sampling even in very high dimensions where different scalings may be required for the transient and stationary phases of the Markov chain. The methodology proposed exploits the Riemann geometry of the parameter space of statistical models and thus automatically adapts to the local structure when simulating paths across this manifold, providing highly efficient convergence and exploration of the target density. The performance of these Riemann manifold Monte Carlo methods is rigorously assessed by performing inference on logistic regression models, log-Gaussian Cox point processes, stochastic volatility models and Bayesian estimation of dynamic systems described by non-linear differential equations. Substantial improvements in the time-normalized effective sample size are reported when compared with alternative sampling approaches. MATLAB code that is available from allows replication of all the results reported. © 2011 Royal Statistical Society.},
@@ -155,6 +154,7 @@ @book{Press2007
author = {William H Press and Saul a Teukolsky and William T Vetterling and Brian P Flannery},
issn = {00361445},
journal = {Sample page from NUMBERICAL RECIPES IN C},
+ publisher = {Cambridge University Press},
title = {Numerical Recipes 3rd Edition: The Art of Scientific Computing},
volume = {1},
year = {2007},
@@ -262,4 +262,14 @@ @book{ruby
author = {Flanagan, David and Matsumoto, Yukihiro},
year = {2008},
publisher = {O'Reilly Media}
-}
\ No newline at end of file
+}
+@article{fitzhugh1961impulses,
+ title={Impulses and physiological states in theoretical models of nerve membrane},
+ author={FitzHugh, Richard},
+ journal={Biophysical Journal},
+ volume={1},
+ number={6},
+ pages={445--466},
+ year={1961},
+ publisher={Elsevier}
+}
diff --git a/docs/md/building_doc.md b/docs/md/building_doc.md
new file mode 100644
index 00000000..f37d5d96
--- /dev/null
+++ b/docs/md/building_doc.md
@@ -0,0 +1,22 @@
+# Building the documentation locally
+
+The documentation, which you are currently reading, may be built locally.
+First, install additional packages required specifically for the documentation:
+
+```bash
+pip install -r docs/requirements.txt
+```
+
+Then, build the documentation from command line:
+
+```bash
+jupyter-book build docs
+```
+
+The generated HTML files will be saved in the local copy of your repository under:
+
+ pygom/docs/_build/html
+
+You can view the documentation by opening the index file in your browser of choice:
+
+ pygom/docs/_build/html/index.html
\ No newline at end of file
diff --git a/docs/md/common_models.md b/docs/md/common_models.md
index 08965ee7..f29a1c1e 100644
--- a/docs/md/common_models.md
+++ b/docs/md/common_models.md
@@ -1,41 +1,5 @@
# Pre-defined examples - common epi models
-We have defined a set of models {mod}`common_models`, most of them commonly used in epidemiology. They are there
-as examples and also to save time for users. Most of them are of the
-compartmental type, and we use standard naming conventions i.e. **S** =
-Susceptible, **E** = Exposed, **I** = Infectious, **R** = Recovered.
-
-#TODO is R recovered, removed or dead?
-
-Extra state symbol will be introduced when required.
-
-{doc}`../notebooks/common_models/SIS`
-
-{doc}`../notebooks/common_models/SIS_Periodic`
-
-{doc}`../notebooks/common_models/SIR`
-
-{doc}`../notebooks/common_models/SIR_Birth_Death`
-
-{doc}`../notebooks/common_models/SEIR`
-
-{doc}`../notebooks/common_models/SEIR_Multiple`
-
-{doc}`../notebooks/common_models/SEIR_Birth_Death`
-
-{doc}`../notebooks/common_models/SEIR_Birth_Death_Periodic`
-
-{doc}`../notebooks/common_models/Legrand_Ebola_SEIHFR`
-
-{doc}`../notebooks/common_models/Lotka_Volterra`
-
-{doc}`../notebooks/common_models/Lotka_Volterra_4State`
-
-{doc}`../notebooks/common_models/FitzHugh`
-
-{doc}`../notebooks/common_models/Lorenz`
-
-{doc}`../notebooks/common_models/vanDelPol`
-
-{doc}`../notebooks/common_models/Robertson`
-
+We have defined a set of models in the module {mod}`common_models`.
+Most of these draw from commonly used models in epidemiology and are included primarily to save time for users, but also to serve as examples.
+We also include a few models from outside of epidemiology which are commonly used for tasks such as testing numerical solvers.
\ No newline at end of file
diff --git a/docs/md/faq.md b/docs/md/faq.md
index a22e19d8..f9bb00bd 100644
--- a/docs/md/faq.md
+++ b/docs/md/faq.md
@@ -1,76 +1,60 @@
-# Frequent asked questions {#faq}
+# Frequently asked questions
-## Code runs slowly
+```{warning}
+These FAQs are not particularly up to date and so might not be as frequently asked.
+```
-This is because the package is not optimized for speed. Although the
-some of the main functions are lambdified using
-`sympy`{.interpreted-text role="mod"} or compiled against
-`cython`{.interpreted-text role="mod"} when available, there are many
-more optimization that can be done. One example is the lines:
+## Why does code run slowly?
-in `.DeterministicOde.evalSensitivity`{.interpreted-text role="func"}.
-The first two operations can be inlined into the third and the third
-line itself can be rewritten as:
+This is because the package is not optimized for speed.
+Although some of the main functions are lambdified using {mod}`sympy` or compiled against {mod}`cython` when available, there are many more optimizations that can be done.
-and save the explicit copy operation by `numpy`{.interpreted-text
-role="mod"} when making A. If desired, we could have also made used of
-the `numexpr`{.interpreted-text role="mod"} package that provides
-further speed up on elementwise operations in place of numpy.
+
-## Why not compile the numeric computation form sympy against Theano
+## Why not compile the numeric computation from sympy against Theano?
-Setup of the package has been simplified as much as possible. If you
-look closely enough, you will realize that the current code generation
-only uses `cython`{.interpreted-text role="mod"} and not
-`f2py`{.interpreted-text role="mod"}. This is because we are not
-prepared to do all the system checks, i.e. does a fortran compiler
-exist, is gcc installed, was python built as a shared library etc. We
-are very much aware of the benefit, especially considering the
-possibility of GPU computation in `theano`{.interpreted-text
-role="mod"}.
+Setup of the package has been simplified as much as possible.
+If you look closely enough, you will realize that the current code generation only uses {mod}`cython` and not {mod}`f2py`.
+This is because we are not prepared to do all the system checks, i.e. does a fortran compiler exist, is gcc installed, was python built as a shared library etc.
+We are very much aware of the benefit, especially considering the possibility of GPU computation in {mod}`theano`.
## Why not use mpmath library throughout?
-This is because we have a fair number of operations that depends on
-`scipy`{.interpreted-text role="mod"}. Obviously, we can solve ode using
-`mpmath`{.interpreted-text role="mod"} and do standard linear algebra.
-Unfortunately, optimization and statistics packages and routine are
-mostly based on `numpy`{.interpreted-text role="mod"}.
+This is because we have a fair number of operations that depend on {mod}`scipy`.
+Obviously, we can solve ODEs using {mod}`mpmath` and do standard linear algebra.
+Unfortunately, optimization and statistics packages and routines are mostly based on {mod}`numpy`.
-## Computing the gradient using `.SquareLoss`{.interpreted-text role="class"} is slow
+## Why is computing the gradient using {class}`.SquareLoss` slow?
-It will always be slow on the first operation. This is due to the design
-where the initialization of the class is fast and only find derivative
-information/compile function during runtime. After the first
-calculation, things should be significantly faster.
+It will always be slow on the first operation.
+This is by design: initialization of the class is kept fast, and derivative information is only found (and functions compiled) at runtime. After the first calculation, things should be significantly faster.
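+
+As a rough illustration only (a minimal sketch assuming the {mod}`common_models` SIR model and some synthetic observations; exact timings will depend on your machine), the first and second gradient evaluations can be compared directly:
+
+```python
+import numpy as np
+import timeit
+from pygom import SquareLoss, common_models
+
+# a simple SIR model and some synthetic "observations" of the I and R states
+ode = common_models.SIR({'beta': 0.5, 'gamma': 1.0/3.0})
+x0 = [1, 1.27e-6, 0]
+t = np.linspace(0, 150, 100)
+ode.initial_values = (x0, t[0])
+solution = ode.integrate(t[1:])
+y = solution[1:, 1:3]
+
+theta = [0.4, 0.3]  # initial guess for beta and gamma
+obj = SquareLoss(theta, ode, x0, t[0], t[1:], y, ['I', 'R'])
+
+# first call: derivative information is found and functions are compiled
+print(timeit.timeit(lambda: obj.gradient(theta), number=1))
+# second call: the compiled functions are reused, so this should be much faster
+print(timeit.timeit(lambda: obj.gradient(theta), number=1))
+```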
-**Why some of my code is not a fortran object?**
+## Why is some of my code not a fortran object?
-When we detec either a $\exp$ or a $\log$ in the equations, we
-automatically force the compile to use mpmath to ensure that we obtain
-the highest precision. To turn this on/off will be considered as a
-feature in the future.
+The following answer may be intended for a different question:
-## Can you not convert a non-autonumous system to an autonomous system for me automatically
+When we detect either a $\exp$ or a $\log$ in the equations, we automatically force the compile to use {mod}`mpmath` to ensure that we obtain
+the highest precision.
+To turn this on/off will be considered as a feature in the future.
-Although we can do that, it is not, and will not be implemented. This is
-to ensure that the end user such as yourself are fully aware of the
-equations being defined.
+## Can you not convert a non-autonomous system to an autonomous system for me automatically?
-## Getting the sensitivities from `.SquareLoss`{.interpreted-text role="class"} did not get a speed up when I used a restricted set of parameters
+Although we can do that, it is not, and will not be implemented.
+This is to ensure that the end user is fully aware of the equations being defined.
-This is because we currently evaluate the full set of sensitivities
-before extracting them out. Speeding this up for a restrictive set is
-being considered. A main reason that stopped us from implementing is
-that we find the symbolic gradient of the ode before compiling it. Which
-means that one function call to the compiled file will return the full
-set of sensitivities and we would only be extracting the appropriate
-elements from the matrix. This only amounts to a small speed up. The
-best method would be to compile only the necessary elements of the
-gradient matrix, but this would require much more work both within the
-code, and later on when variables are being added/deleted as all these
-compilation are perfromed in runtime.
+## Getting the sensitivities from {class}`.SquareLoss` did not get a speed up when I used a restricted set of parameters
-## Why do not have the option to obtain gradient via complex differencing
+This is because we currently evaluate the full set of sensitivities before extracting the requested subset.
+Speeding this up for a restricted set is being considered.
+The main reason we have not implemented it is that we find the symbolic gradient of the ODE before compiling it.
+This means that a single call to the compiled function returns the full set of sensitivities, from which we simply extract the appropriate elements of the matrix, so the saving would be small.
+The best method would be to compile only the necessary elements of the gradient matrix, but this would require much more work both within the
+code, and later on when variables are added or deleted, as all these compilations are performed at runtime.
-It is currently not implemented. Feature under consideration.
+## Why is there no option to obtain the gradient via complex differencing?
+
+It is currently not implemented. Feature under consideration.
\ No newline at end of file
diff --git a/docs/md/getting_started.md b/docs/md/getting_started.md
deleted file mode 100644
index 561ec3e1..00000000
--- a/docs/md/getting_started.md
+++ /dev/null
@@ -1,93 +0,0 @@
-# Getting started
-
-## What does this package do?
-
-The purpose of this package is to allow the end user to easily define a
-set of ordinary differential equations (ODEs) and obtain information
-about the ODEs by invoking the the appropriate methods. Here, we define
-the set of ODEs as
-
-$$\frac{d \mathbf{x}}{d t} = f(\mathbf{x},\boldsymbol{\theta})$$
-
-where $\mathbf{x} = \left(x_{1},x_{2},\ldots,x_{n}\right)$ is the state
-vector with $d$ state and $\boldsymbol{\theta}$ the parameters of $p$
-dimension. Currently, this package allows the user to find the algebraic
-expression of the ODE, Jacobian, gradient and forward sensitivity of the
-ODE. A numerical output is given when all the state and parameter values
-are provided. Note that the only important class is
-{class}`.DeterministicOde` where all the
-functionality described previously are exposed.
-
-Plans for further development can be found, and proposed, on the repository's [issue board](https://github.com/ukhsa-collaboration/pygom/issues).
-
-## Installing the package
-
-PyGOM can be downloaded from the GitHub repository.
-
-https://github.com/PublicHealthEngland/pygom.git
-
-You will need to create an environment, for example using conda.
-
- conda env create -f conda-env.yml
-
-Alternatively, add dependencies to your own environment.
-
- pip install -r requirements.txt
-
-If you are working on a Windows machine you will also need to install:
-- [Graphviz](https://graphviz.org/)
-- [Visual C++](https://support.microsoft.com/en-us/topic/the-latest-supported-visual-c-downloads-2647da03-1eea-4433-9aff-95f26a218cc0)
-- [Visual C++ Build Tools](https://go.microsoft.com/fwlink/?LinkId=691126)
-
-You can install the package via command line:
-
- python setup.py install
-
-or locally on a user level:
-
- python setup.py install --user
-
-```{note}
-The latest fully reviewed version of PyGOM will be on master branch. We recommend that users install the version from this branch.
-```
-
-Alternatively the latest release can be installed from [PyPI](https://pypi.org/project/pygom/):
-
- pip install pygom
-
-Please note that there are some redundant files that are being kept for
-development purposes.
-
-## Testing the package
-
-Test files can be run from the command line prior to or after installation.
-
- python setup.py test
-
-## Building the documentation locally
-
-Install additional packages:
-
- pip install -r docs/requirements.txt
-
-Build the documentation:
-
- jupyter-book build docs/
-
-The html files will be saved in the local copy of your repository under:
-
- pygom/docs/_build/html
-
-
-## Using this documentation
-This documentation is built using [JupyterBook](https://jupyterbook.org/en/stable/intro.html). To use the contents of a notebook as a starting point for trialing or developing your own models and analyses, you can download any of the examples within this documentation by using the download icon on the desired page (located at the top right).
-
-![download file](../images/download.png)
-
-## Contributing to PyGOM
-
-Please see the [contribution guidance](../../CONTRIBUTING.md) which outlines:
-- required information for raising issues;
-- the process by which code contributions should be incorporated;
-- what is required by pull requests to PyGOM, including how to add to the documentation;
-- how we will acknowledge your contributions.
\ No newline at end of file
diff --git a/docs/md/installation.md b/docs/md/installation.md
new file mode 100644
index 00000000..694438ba
--- /dev/null
+++ b/docs/md/installation.md
@@ -0,0 +1,76 @@
+# Installation
+
+Installation instructions may be found on the [GitHub project README](https://github.com/ukhsa-collaboration/pygom/), but we include them here also.
+
+## From source
+
+Source code for PyGOM can be downloaded from the GitHub repository: https://github.com/ukhsa-collaboration/pygom
+
+```bash
+git clone https://github.com/ukhsa-collaboration/pygom.git
+```
+
+Please be aware that there may be redundant files within the package as it is under active development.
+
+```{note}
+The latest fully reviewed version of PyGOM will be on the master branch and we recommend that users install the version from there.
+```
+
+Check out the relevant branch for installation, for example using Git Bash:
+
+```bash
+git checkout relevant-branch-name
+```
+
+Package dependencies can be found in the file `requirements.txt`.
+An easy way to install these is to create a new [conda](https://conda.io/docs) environment via:
+
+```bash
+conda env create -f conda-env.yml
+```
+
+which you should ensure is active for the installation process using:
+
+```bash
+conda activate pygom
+```
+
+Alternatively, you may add dependencies to your own environment.
+
+```bash
+pip install -r requirements.txt
+```
+
+The final prerequisite, if you are working on a Windows machine, is that you will also need to install:
+- [Graphviz](https://graphviz.org/)
+- Microsoft Visual C++ 14.0 or greater, which you can get with [Microsoft C++ Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/)
+
+You should now be able to install the PyGOM package via the command line:
+
+```bash
+python setup.py install
+```
+
+If you anticipate making your own frequent changes to the PyGOM source files, it might be more convenient to install in develop mode instead:
+
+```bash
+python setup.py develop
+```
+
+## From PyPI
+
+Alternatively, the latest release can be installed from [PyPI](https://pypi.org/project/pygom/):
+
+```bash
+pip install pygom
+```
+
+## Testing the package
+
+Test files should then be run from the command line to check that the installation has completed successfully:
+
+```bash
+python setup.py test
+```
+
+This can take a few minutes to complete.
diff --git a/docs/md/intro.md b/docs/md/intro.md
index 3f273853..ebbbb55c 100644
--- a/docs/md/intro.md
+++ b/docs/md/intro.md
@@ -1,17 +1,33 @@
# Welcome to the documentation for PyGOM
-PyGOM (Python Generic ODE Model) is a Python package that aims to facilitate the application of ordinary differential equations (ODEs) in the real world,
-with a focus in epidemiology.
-This package helps the user define their ODE system in an intuitive manner and provides convenience functions -
-making use of various algebraic and numerical libraries in the backend - that can be used in a straight forward fashion.
+## What does this package do?
-This is an open source project hosted on [Github](https://github.com/PublicHealthEngland/pygom).
+PyGOM (Python Generic ODE Model) is a Python package which provides a simple interface for users to construct Ordinary Differential Equation (ODE) models, with a focus on compartmental models and epidemiology.
+This is backed by a comprehensive and easy-to-use toolbox implementing functions to perform common operations such as parameter estimation and solving for the deterministic or stochastic time evolution.
+The package source is freely available (hosted on [GitHub](https://github.com/ukhsa-collaboration/pygom)) and organized in a way that permits easy extension. With both the algebraic and numeric calculations performed automatically (but still accessible), the end user is freed to focus on model development.
-A manuscript containing a shortened motivation and use is hosted on [arxXiv](https://arxiv.org/abs/1803.06934).
+## What is new in this release?
-#TODO insert intro text
+The main objective of the current release (0.1.8) is to provide more comprehensive documentation on how to use PyGOM.
+The code underlying PyGOM's functionality is largely unchanged since the previous release, barring a few minor bug fixes.
+The only significant changes which previous users should be aware of are:
+- A move away from the {class}`DeterministicOde` class for deterministic simulations and instead employing {class}`SimulateOde` as our do-all class for deterministic or stochastic simulations as well as parameter fitting.
+- Running simulations with random parameters does not require a special simulation function. Instead, PyGOM now recognises the parameter types handed to it (fixed or random) and acts accordingly. This means that stochastic simulations can now be performed with random parameters.
+Both these changes are outlined in more detail in the {doc}`Producing forecasts ` section.
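+
+As a brief sketch of the new pattern (for illustration only; the simple SIR model here mirrors the examples used throughout this documentation), a model is now constructed and solved with {class}`SimulateOde`:
+
+```python
+import numpy as np
+from pygom import SimulateOde, Transition, TransitionType
+
+# define a simple SIR model exactly as before, but hand it to SimulateOde
+# rather than DeterministicOde
+odeList = [
+    Transition(origin='S', equation='-beta*S*I', transition_type=TransitionType.ODE),
+    Transition(origin='I', equation='beta*S*I - gamma*I', transition_type=TransitionType.ODE),
+    Transition(origin='R', equation='gamma*I', transition_type=TransitionType.ODE)
+]
+
+model = SimulateOde(['S', 'I', 'R'], ['beta', 'gamma'], ode=odeList)
+model.parameters = [('beta', 0.5), ('gamma', 1.0/3.0)]
+
+t = np.linspace(0, 150, 100)
+model.initial_values = ([1, 1.27e-6, 0], t[0])
+
+# deterministic solution, just as DeterministicOde previously provided
+solution = model.integrate(t[1:])
+```
+
+Random parameters and stochastic simulations use this same object, as described in the chapter linked above.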
-```{tableofcontents}
-```
+## Using this documentation
+This documentation is built using [JupyterBook](https://jupyterbook.org/en/stable/intro.html).
+Instructions on how to build the documentation locally and where to find it can be found {doc}`here `.
+To use the contents of a notebook as a starting point for trialling or developing your own models and analyses, you can download any of the examples within this documentation by using the download icon on the desired page (located at the top right).
+
+![download file](../images/download.png)
+
+## Contributing to PyGOM
+
+Please see the [contribution guidance](https://github.com/ukhsa-collaboration/pygom/blob/master/CONTRIBUTING.md) which outlines:
+- Required information for raising issues
+- The process by which code contributions should be incorporated
+- What is required by pull requests to PyGOM, including how to add to the documentation
+- How we will acknowledge your contributions
diff --git a/docs/md/parameter_fitting.md b/docs/md/parameter_fitting.md
index 4b252c67..52b6d038 100644
--- a/docs/md/parameter_fitting.md
+++ b/docs/md/parameter_fitting.md
@@ -1,17 +1,6 @@
# Parameter fitting
-The following notebooks will demonstrate how to use the parameter fitting options within PyGOM.
-
-{doc}`../notebooks/paramfit/bvpSimple`
-
-{doc}`../notebooks/paramfit/gradient`
-
-{doc}`../notebooks/paramfit/fh`
-
-{doc}`../notebooks/paramfit/estimate1`
-
-{doc}`../notebooks/paramfit/estimate2`
-
-{doc}`../notebooks/paramfit/initialGuess`
-
-{doc}`../notebooks/paramfit/profile`
\ No newline at end of file
+As well as producing forecasts, another key activity in infectious disease modelling is inference of epidemic parameters from case data.
+In this chapter we outline how PyGOM may be used to assist these endeavours.
+In {doc}`the first section <../notebooks/paramfit/params_via_abc>`, we present a more up-to-date method which uses Approximate Bayesian Computation (ABC), and {doc}`then <../notebooks/paramfit/params_via_optimization>` a more classical approach via Maximum Likelihood Estimation (MLE).
+We also demonstrate PyGOM's ability to solve the less epidemiologically related task of {doc}`boundary value problems <../notebooks/paramfit/bvpSimple>`.
\ No newline at end of file
diff --git a/docs/md/solving.md b/docs/md/solving.md
new file mode 100644
index 00000000..8ad3ed93
--- /dev/null
+++ b/docs/md/solving.md
@@ -0,0 +1,10 @@
+# Producing forecasts
+
+An exercise central to the study of infectious diseases (and indeed ODE models in general) is performing simulations to understand the likely evolution of the system in time.
+PyGOM allows the user to easily obtain numerical solutions for both the deterministic and stochastic time evolution of their model.
+Furthermore, users may specify model parameters to take either fixed values or to be drawn randomly from a probability distribution.
+
+In this chapter, we will use an SIR model as our example system to introduce
+
+- How to prescribe parameters in {doc}`Parameterisation <../notebooks/model_params>`
+- How to obtain solutions and process the model output in {doc}`Finding ODE solutions <../notebooks/model_solver>`
\ No newline at end of file
diff --git a/docs/md/unrollOde.md b/docs/md/unrollOde.md
index 5b53d60c..19b2a56a 100644
--- a/docs/md/unrollOde.md
+++ b/docs/md/unrollOde.md
@@ -1,16 +1,11 @@
# Converting equations into transitions
-As seen previously in {doc}`transition`, we can
-define the model via the transitions or explicitly as ODEs. There are
-times when we all just want to test out some model in a paper and the
-only available information are the ODEs themselves. Even though we know
-that the ODEs come from some underlying transitions, breaking them down
-can be a time consuming process. We provide the functionalities to do
-this automatically.
-
+As seen previously in {doc}`transition`, we can define a model via transitions or explicitly as ODEs.
+There may be times when importing a model from elsewhere that the only available information is the ODEs themselves.
+If it is known that the ODEs come from some underlying transitions, we provide the functionality to recover those transitions automatically.
+Of course, some interpretation is involved in this process.
+Here we demonstrate usage of this feature via examples of increasing complexity (a short sketch of the basic call pattern is also given after the linked examples below):
{doc}`../notebooks/unroll/unrollSimple`
-
{doc}`../notebooks/unroll/unrollBD`
-
{doc}`../notebooks/unroll/unrollHard`
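+
+As a minimal sketch of the basic call pattern (the method names here are an assumption based on earlier versions of this interface rather than a definitive recipe; the linked notebooks are authoritative), an ODE-only model can be converted as follows:
+
+```python
+from pygom import SimulateOde, Transition, TransitionType
+
+# an SIR model supplied purely as ODEs, with no transition information
+odeList = [
+    Transition(origin='S', equation='-beta*S*I', transition_type=TransitionType.ODE),
+    Transition(origin='I', equation='beta*S*I - gamma*I', transition_type=TransitionType.ODE),
+    Transition(origin='R', equation='gamma*I', transition_type=TransitionType.ODE)
+]
+model = SimulateOde(['S', 'I', 'R'], ['beta', 'gamma'], ode=odeList)
+
+# ask PyGOM for its best-guess set of underlying transitions
+model = model.get_unrolled_obj()
+print(model.get_transition_matrix())
+```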
diff --git a/docs/notebooks/common_models/FitzHugh.ipynb b/docs/notebooks/common_models/FitzHugh.ipynb
index 9d694ea3..1e9fdc8a 100644
--- a/docs/notebooks/common_models/FitzHugh.ipynb
+++ b/docs/notebooks/common_models/FitzHugh.ipynb
@@ -6,19 +6,19 @@
"source": [
"# FitzHugh\n",
"\n",
- "{func}`.FitzHugh` - the {cite:t}`FitzHugh1961` model without external stimulus\n",
+ "{func}`.FitzHugh` - the {cite:t}`FitzHugh1961` model without external stimulus.\n",
"\n",
- "This is a commonly used model when developing new methodology\n",
- "with regards to ODEs, see {cite:p}`Ramsay2007` and {cite}`Girolami2011`.\n",
- "\n",
- "#TODO why common model?\n",
+ "The FitzHugh model is commonly used to test ODE software {cite:p}`Ramsay2007` {cite}`Girolami2011`, the model itself describes the excitation state of a neuron membrane as an excitation spike passes. PyGOM also includes other functions which are commonly used to test numerical integrators such as:\n",
+ "{func}`.vanDerPol` - the Van der Pol oscillator {cite}`vanderPol1926` and\n",
+ "{func}`.Robertson` - the Robertson reaction {cite}`Robertson1966`.\n",
+ "The FitzHugh model equations are as follows:\n",
"\n",
"$$\\begin{aligned}\n",
- "\\frac{dV}{dt} &= c ( V - \\frac{V^{3}}{3} + R) \\\\\n",
- "\\frac{dR}{dt} &= -\\frac{1}{c}(V - a + bR).\n",
+ "\\frac{\\mathrm{d} V}{\\mathrm{d} t} &= c ( V - \\frac{V^{3}}{3} + R) \\\\\n",
+ "\\frac{\\mathrm{d} R}{\\mathrm{d} t} &= -\\frac{1}{c}(V - a + bR).\n",
"\\end{aligned}$$\n",
"\n",
- "An example of using this model follows.\n"
+ "We solve for the deterministic time evolution of the system:"
]
},
{
@@ -28,41 +28,31 @@
"metadata": {},
"outputs": [],
"source": [
- "import numpy\n",
- "\n",
+ "import numpy as np\n",
"from pygom import common_models\n",
- "\n",
"import matplotlib.pyplot as plt\n",
"\n",
"ode = common_models.FitzHugh({'a':0.2, 'b':0.2, 'c':3.0})\n",
"\n",
- "t = numpy.linspace(0, 20, 101)\n",
- "\n",
+ "t = np.linspace(0, 20, 101)\n",
"x0 = [1.0, -1.0]\n",
- "\n",
"ode.initial_values = (x0, t[0])\n",
"\n",
- "solution = ode.integrate(t[1::])"
+ "solution = ode.solve_determ(t[1::])"
]
},
{
- "cell_type": "code",
- "execution_count": null,
- "id": "ccee969d",
- "metadata": {
- "tags": [
- "hide-input"
- ]
- },
- "outputs": [],
+ "cell_type": "markdown",
+ "id": "a9061aff",
+ "metadata": {},
"source": [
- "ode.plot()"
+ "Plotting the function reveals frequent sharp transitions, which makes it an appropriate system to test ODE solving methods."
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "98a5e32e",
+ "id": "ccee969d",
"metadata": {
"tags": [
"hide-input"
@@ -70,10 +60,7 @@
},
"outputs": [],
"source": [
- "fig = plt.figure()\n",
- "\n",
- "plt.plot(solution[:,0], solution[:,1], 'b')\n",
- "plt.show()\n"
+ "ode.plot()"
]
}
],
diff --git a/docs/notebooks/common_models/Legrand_Ebola_SEIHFR.ipynb b/docs/notebooks/common_models/Legrand_Ebola_SEIHFR.ipynb
index 7affc240..82cbc0db 100644
--- a/docs/notebooks/common_models/Legrand_Ebola_SEIHFR.ipynb
+++ b/docs/notebooks/common_models/Legrand_Ebola_SEIHFR.ipynb
@@ -4,13 +4,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# `Legrand_Ebola_SEIHFR`\n",
+ "# Legrand Ebola SEIHFR\n",
"\n",
"{func}`Legrand_Ebola_SEIHFR`\n",
"\n",
- "A commonly used model in the literature to capture the dynamics of Ebola outbreaks is the\n",
- "SEIHFR model proposed by {cite}`Legrand2007`. There are two extra\n",
- "compartments on top of the SEIR: $H$ for hospitializations and\n",
+ "A commonly used model in the literature to capture the dynamics of Ebola outbreaks is the SEIHFR model proposed by Legrand _et al_ {cite}`legrand2007utd`.\n",
+ "There are two extra compartments on top of the SEIR: $H$ for hospitializations and\n",
"$F$ for funerals. A total of ten parameters (with some describing the\n",
"inverse) are required for the model.\n",
"\n",
@@ -33,7 +32,7 @@
"$\\omega$'s, i.e. $\\omega_{i} = \\gamma_{i}^{-1}$ for $i \\in \\{I,D,H,F\\}$.\n",
"We also used $\\alpha^{-1}$ in our model instead of $\\alpha$ so that\n",
"reading the parameters directly gives a more intuitive meaning. There\n",
- "arw five additional parameters that is derived. The two derived case\n",
+ "are five additional parameters that is derived. The two derived case\n",
"fatality ratio as\n",
"\n",
"$$\\begin{aligned}\n",
@@ -69,8 +68,7 @@
"\n",
"$$\\beta_{F}(t) = \\beta_{F} \\left(1 - \\frac{1}{1 + \\exp(-\\kappa (t - c))} \\right)$$\n",
"\n",
- "A brief example is given here with a slightly more in depth\n",
- "example in {doc}`estimate2`.\n",
+ "A brief example is given here:\n",
"\n"
]
},
@@ -89,8 +87,18 @@
"\n",
"t = numpy.linspace(1, 25, 100)\n",
"\n",
- "ode = common_models.Legrand_Ebola_SEIHFR([('beta_I',0.588), ('beta_H',0.794), ('beta_F',7.653), ('omega_I',10.0/7.0), ('omega_D',9.6/7.0),\n",
- "('omega_H',5.0/7.0), ('omega_F',2.0/7.0), ('alphaInv',7.0/7.0), ('delta',0.81), ('theta',0.80), ('kappa',300.0), ('interventionTime',7.0)])\n",
+ "ode = common_models.Legrand_Ebola_SEIHFR([('beta_I',0.588),\n",
+ " ('beta_H',0.794),\n",
+ " ('beta_F',7.653),\n",
+ " ('omega_I',10.0/7.0),\n",
+ " ('omega_D',9.6/7.0),\n",
+ " ('omega_H',5.0/7.0),\n",
+ " ('omega_F',2.0/7.0),\n",
+ " ('alphaInv',7.0/7.0),\n",
+ " ('delta',0.81),\n",
+ " ('theta',0.80),\n",
+ " ('kappa',300.0),\n",
+ " ('interventionTime',7.0)])\n",
"\n",
"ode.initial_values = (x0, t[0])\n",
"\n",
@@ -108,7 +116,7 @@
"```{note}\n",
"We have standardized the states so that the number of\n",
"susceptible is 1 and equal to the whole population, i.e. $N$ does not\n",
- "exist in our set of ODEs as defined in {mod}`common_models`.\n",
+ "exist in our set of ODEs.\n",
"```"
]
}
diff --git a/docs/notebooks/common_models/Lotka_Volterra.ipynb b/docs/notebooks/common_models/Lotka_Volterra.ipynb
index ddf146b7..d38a2228 100644
--- a/docs/notebooks/common_models/Lotka_Volterra.ipynb
+++ b/docs/notebooks/common_models/Lotka_Volterra.ipynb
@@ -6,14 +6,16 @@
"source": [
"# Lotka Volterra\n",
"\n",
- "{func}`.Lotka_Volterra` - the standard predator and prey model with two states and four parameters {cite}`Lotka1920`\n",
+ "The model {func}`.Lotka_Volterra` is a basic predator and prey model {cite}`Lotka1920`.\n",
+ "This is more commonly expressed in terms of predator and prey population area densities, $x$ and $y$ respectively, though we define the model in terms of absolute numbers, $X$ and $Y$, in a given area, $A$.\n",
+ "This decision to define in terms of population numbers, rather than densities, permits us to perform stochastic simulations.\n",
"\n",
"$$\\begin{aligned}\n",
- "\\frac{dx}{dt} &= \\alpha x - cxy \\\\\n",
- "\\frac{dy}{dt} &= -\\delta y + \\gamma xy\n",
+ "\\frac{\\mathrm{d} X}{\\mathrm{d} t} &= \\alpha X - \\frac{\\beta X Y}{A} \\\\\n",
+ "\\frac{\\mathrm{d} Y}{\\mathrm{d} t} &= -\\gamma Y + \\frac{\\delta X Y}{A}\n",
"\\end{aligned}$$\n",
"\n",
- "with both birth and death processes."
+ "We first solve this model for the deterministic case:"
]
},
{
@@ -24,128 +26,117 @@
"outputs": [],
"source": [
"from pygom import common_models\n",
- "\n",
- "import numpy\n",
- "\n",
+ "import numpy as np\n",
"import matplotlib.pyplot as plt\n",
+ "import math\n",
"\n",
- "x0 = [2.0, 6.0]\n",
+ "# population density of predators and prey per square m\n",
+ "x0 = [1, 0.5]\n",
"\n",
- "ode = common_models.Lotka_Volterra({'alpha':1, 'delta':3, 'c':2, 'gamma':6})\n",
+ "# total area we wish to consider\n",
+ "area=200\n",
"\n",
- "ode.initial_values = (x0, 0)\n",
+ "# total animal populations\n",
+ "x0 = [x * area for x in x0]\n",
"\n",
- "t = numpy.linspace(0.1, 100, 10000)\n",
+ "ode = common_models.Lotka_Volterra({'alpha':0.1,\n",
+ " 'beta':0.2,\n",
+ " 'gamma':0.3,\n",
+ " 'delta':0.25,\n",
+ " 'A':area})\n",
"\n",
- "solution = ode.integrate(t)\n",
+ "tmax=200 # maximum time over which to run solver\n",
+ "dt=0.1 # timestep\n",
+ "n_timestep=math.ceil(tmax/dt) # number of iterations\n",
+ "t = np.linspace(0, tmax, n_timestep) # times at which solution will be evaluated\n",
"\n",
- "ode.plot()\n"
+ "ode.initial_values = (x0, t[0])\n",
+ "\n",
+ "solution = ode.solve_determ(t[1::])"
]
},
{
"cell_type": "markdown",
- "id": "1943441d",
+ "id": "4d951b55",
"metadata": {},
"source": [
- "Then we can generate the graph at [Wolfram\n",
- "Alpha](http://www.wolframalpha.com/input/?i=lotka-volterra+equations)\n",
- "with varying initial conditions.\n"
+ "We see that the predator and prey populations show periodic behaviour with a phase shift between them."
]
},
{
- "cell_type": "markdown",
- "id": "b5c82937",
- "metadata": {},
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "004de679",
+ "metadata": {
+ "tags": [
+ "hide-input"
+ ]
+ },
+ "outputs": [],
"source": [
- "x1List = numpy.linspace(0.2, 2.0, 5)\n",
+ "f, ax = plt.subplots(figsize=(10, 2))\n",
"\n",
- "x2List = numpy.linspace(0.6, 6.0, 5)\n",
- "\n",
- "fig = plt.figure()\n"
+ "ax.set_xlabel(\"Time\")\n",
+ "ax.set_ylabel(\"Population number\")\n",
+ "ax.plot(t, solution[:,0], label=\"prey\")\n",
+ "ax.plot(t, solution[:,1], label=\"predator\")\n",
+ "ax.legend(loc=\"upper right\")\n",
+ "plt.show()"
]
},
{
"cell_type": "markdown",
- "id": "5a492117",
+ "id": "5918419c",
"metadata": {},
"source": [
- "\n",
- "solutionList = list()\n",
- "\n"
+ "We can also see how the system evolves stochastically"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "f2a1d859",
+ "id": "9858cacf",
"metadata": {},
"outputs": [],
"source": [
- "ode = common_models.Lotka_Volterra({'alpha':1, 'delta':3, 'c':2, 'gamma':6})\n",
- "\n",
- "for i in range(len(x1List)): \n",
- " ode.initial_values = ([x1List[i], x2List[i]], 0)\n",
+ "np.random.seed(1)\n",
"\n",
- "solutionList += [ode.integrate(t)]\n",
+ "n_sim=1\n",
+ "solution, simT = ode.solve_stochast(t, n_sim, full_output=True)\n",
"\n",
- "for i in range(len(x1List)):\n",
- " plt.plot(solutionList[i][100::,0], solutionList[i][100::,1], 'b')\n",
+ "f, ax = plt.subplots(figsize=(10, 2))\n",
"\n",
- "plt.xlabel('x')\n",
- "\n",
- "plt.ylabel('y')\n",
+ "y=np.dstack(solution)\n",
"\n",
+ "ax.set_xlabel(\"Time\")\n",
+ "ax.set_ylabel(\"Population number\")\n",
+ "ax.plot(t, y[:,0], label=\"prey\")\n",
+ "ax.plot(t, y[:,1], label=\"predator\")\n",
+ "ax.legend(loc=\"upper right\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
- "id": "c628f283",
+ "id": "39ce9a40",
"metadata": {},
"source": [
- "We also know that the system has the critical points at\n",
- "$x = \\delta / \\gamma$ and $y=\\alpha / c$. If we changes the parameters\n",
- "in such a way that the ration between $x$ and $y$ remains the same, then\n",
- "we get a figure as below.\n"
+ "This appears to be unstable, since the populations undergo increasingly extreme peaks and troughs.\n",
+ "This can be confirmed by examining a phase diagram, whereby the trajectory in state space spirals outwards."
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "837940dd",
+ "id": "b7717689",
"metadata": {},
"outputs": [],
"source": [
- "cList = numpy.linspace(0.1, 2.0, 5)\n",
- "\n",
- "gammaList = numpy.linspace(0.6, 6.0, 5)\n",
- "\n",
- "fig = plt.figure()\n",
- "\n",
- "for i in range(len(x1List)): \n",
- " ode = common_models.Lotka_Volterra({'alpha':1, 'delta':3, 'c':cList[i], 'gamma':gammaList[i]})\n",
- "\n",
- "ode.initial_values = (x0, 0) \n",
- "solutionList += [ode.integrate(t)]\n",
- "\n",
- "for i in range(len(cList)):\n",
- " plt.plot(solutionList[i][100::,0], solutionList[i][100::,1])\n",
- "\n",
- "plt.xlabel('x')\n",
- "\n",
- "plt.ylabel('y')\n",
- "\n",
- "plt.show()\n"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "f05ce4a3",
- "metadata": {},
- "source": [
- "\n",
- "\n",
- "where all the cycles goes through the same points."
+ "f, ax = plt.subplots(figsize=(10, 6))\n",
+ "ax.plot(y[:,0], y[:,1])\n",
+ "ax.set_xlabel(\"Prey population\")\n",
+ "ax.set_ylabel(\"Predator population\")\n",
+ "plt.show()"
]
}
],
@@ -156,8 +147,16 @@
"name": "python3"
},
"language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
"name": "python",
- "version": "3.9.15"
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.19"
},
"vscode": {
"interpreter": {
diff --git a/docs/notebooks/common_models/SEIR.ipynb b/docs/notebooks/common_models/SEIR.ipynb
index 3875aa0b..f01627de 100644
--- a/docs/notebooks/common_models/SEIR.ipynb
+++ b/docs/notebooks/common_models/SEIR.ipynb
@@ -7,49 +7,140 @@
"# SEIR\n",
"{func}`.SEIR`\n",
"\n",
- "A natural extension to the SIR is the SEIR model. An extra parameter\n",
- "$\\alpha$, which is the inverse of the incubation period is introduced.\n",
+ "A Susceptible-Exposed-Infectious-Recovered (SEIR) model is a more realistic extension of the standard SIR model in which individuals do not become instantly infectious upon exposure, but undergo an incubation period, the timescale of which is governed by the parameter, $\\alpha$:\n",
"\n",
"$$\\begin{aligned}\n",
- "\\frac{dS}{dt} &= -\\beta SI \\\\\n",
- "\\end{aligned}$$$$\\begin{aligned}\n",
- "\\frac{dE}{dt} &= \\beta SI - \\alpha E \\\\\n",
- "\\end{aligned}$$$$\\begin{aligned}\n",
- "\\frac{dI}{dt} &= \\alpha E - \\gamma I \\\\\n",
- "\\end{aligned}$$$$\\frac{dR}{dt} &= \\gamma I$$\n",
- "\n",
- "We use the parameters from {cite:t}`Aron1984` here to generate our plots,\n",
- "which does not yield a *nice* and *sensible* epidemic curve as the birth\n",
- "and death processes are missing.\n",
- "\n"
+ "\\frac{\\mathrm{d}S}{\\mathrm{d}t} &= - \\frac{\\beta SI}{N} \\\\\n",
+ "\\frac{\\mathrm{d}E}{\\mathrm{d}t} &= \\frac{\\beta SI}{N} - \\alpha E \\\\\n",
+ "\\frac{\\mathrm{d}I}{\\mathrm{d}t} &= \\alpha E - \\gamma I \\\\\n",
+ "\\frac{\\mathrm{d}R}{\\mathrm{d}t} &= \\gamma I\n",
+ "\\end{aligned}$$\n",
+ "\n",
+ "We use the flu-like parameters of the SIR model demonstration with an incubation period of 2 days."
]
},
{
- "cell_type": "markdown",
+ "cell_type": "code",
+ "execution_count": null,
"id": "bd15619d",
"metadata": {},
+ "outputs": [],
"source": [
"from pygom import common_models\n",
+ "import matplotlib.pyplot as plt\n",
+ "import numpy as np\n",
+ "import random\n",
+ "import math\n",
+ "\n",
+ "#####################\n",
+ "# Set up PyGOM object\n",
+ "#####################\n",
+ "\n",
+ "# Parameters\n",
+ "n_pop=1e4\n",
+ "gamma=1/4\n",
+ "alpha=1/2\n",
+ "R0=1.3\n",
+ "beta=R0*gamma\n",
"\n",
- "import numpy\n",
+ "ode = common_models.SEIR({'beta':beta, 'gamma':gamma, 'alpha':alpha, 'N':n_pop})\n",
"\n",
- "ode = common_models.SEIR({'beta':1800, 'gamma':100, 'alpha':35.84})\n",
+ "# Time range and increments\n",
+ "tmax=365 # maximum time over which to run solver\n",
+ "dt=0.1 # timestep\n",
+ "n_timestep=math.ceil(tmax/dt) # number of iterations\n",
+ "t = np.linspace(0, tmax, n_timestep) # times at which solution will be evaluated\n",
"\n",
- "t = numpy.linspace(0, 50, 1001)\n",
+ "# Initial conditions\n",
+ "i0=1\n",
+ "x0=[n_pop-i0, 0, i0, 0]\n",
+ "ode.initial_values = (x0, t[0])\n",
"\n",
- "Ix0 = [0.0658, 0.0007, 0.0002, 0.0]\n",
+ "# Deterministic evolution\n",
+ "solution=ode.solve_determ(t[1::])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "abe9b989",
+ "metadata": {},
+ "source": [
+ "We also run an SIR model with the same parameters to compare the outputs"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "282ebfbd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ode = common_models.SIR({'beta':beta, 'gamma':gamma, 'N':n_pop})\n",
"\n",
+ "x0=[n_pop-i0, i0, 0]\n",
"ode.initial_values = (x0, t[0])\n",
"\n",
- "solution = ode.integrate(t[1::])\n",
+ "solution2=ode.solve_determ(t[1::])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2fab3cb6",
+ "metadata": {},
+ "source": [
+ "We see that the SEIR model changes the profile of the epidemic as compared with an SIR model, but the overall final sizes are the same."
+ ]
+ },
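+ {
+ "cell_type": "markdown",
+ "id": "1a2b3c4d",
+ "metadata": {},
+ "source": [
+ "This is expected from the classical final size relation, which can be derived from the ODEs above and involves only $R_0$, not the incubation rate $\\alpha$:\n",
+ "\n",
+ "$$\\ln \\left( \\frac{S_0}{S_\\infty} \\right) = R_0 \\left( 1 - \\frac{S_\\infty}{N} \\right)$$\n",
+ "\n",
+ "so adding a latent compartment delays the epidemic but does not change its final size."
+ ]
+ },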
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0403f7c2",
+ "metadata": {
+ "tags": [
+ "hide-input"
+ ]
+ },
+ "outputs": [],
+ "source": [
+ "# Plot\n",
+ "\n",
+ "f, axarr = plt.subplots(1,4, layout='constrained', figsize=(10, 4))\n",
"\n",
- "ode.plot()"
+ "# Plot colours\n",
+ "colours=[\"C1\", \"C3\", \"C0\", \"C2\"]\n",
+ "stateList=[\"S\", \"E\", \"I\", \"R\"]\n",
+ "\n",
+ "for i in range(0, 4):\n",
+ " axarr[i].plot(t, solution[:,i], color=colours[i])\n",
+ " if i in [0,2,3]:\n",
+ " if i in [2,3]:\n",
+ " axarr[i].plot(t, solution2[:,i-1], color=colours[i], linestyle=\"dashed\")\n",
+ " else:\n",
+ " axarr[i].plot(t, solution2[:,i], color=colours[i], linestyle=\"dashed\")\n",
+ " axarr[i].set_ylabel(stateList[i], rotation=0)\n",
+ " axarr[i].set_xlabel('Time')\n",
+ "\n",
+ "plt.show()"
]
}
],
"metadata": {
+ "kernelspec": {
+ "display_name": "pygom_development",
+ "language": "python",
+ "name": "python3"
+ },
"language_info": {
- "name": "python"
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.19"
}
},
"nbformat": 4,
diff --git a/docs/notebooks/common_models/SEIR_Birth_Death_Periodic_Waning_Intro.ipynb b/docs/notebooks/common_models/SEIR_Birth_Death_Periodic_Waning_Intro.ipynb
new file mode 100644
index 00000000..eaf90f5a
--- /dev/null
+++ b/docs/notebooks/common_models/SEIR_Birth_Death_Periodic_Waning_Intro.ipynb
@@ -0,0 +1,127 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "e5455073",
+ "metadata": {},
+ "source": [
+ "# SEIR, birth, death, periodic, waning and introductions\n",
+ "{func}`.SEIR_Birth_Death_Periodic_Waning_Intro`\n",
+ "\n",
+ "This model includes relatively more detail than the other pre-defined models provided and may serve as a template for more complex models.\n",
+ "\n",
+ "In addition to the processes of births, deaths and seasonal driving, we have included (i) immune waning, which transitions recovered individuals back to susceptible at a rate $w$ and (ii) an external force of infection, which allows individuals to be infected from outside the population (analogous to case importation) at a rate $\\epsilon$.\n",
+ "\n",
+ "$$\\begin{aligned}\n",
+ "\\frac{\\mathrm{d}S}{\\mathrm{d}t} &= - \\frac{\\beta(t) SI}{N} + w R + \\mu N - \\epsilon S - \\mu S\\\\\n",
+ "\\frac{\\mathrm{d}E}{\\mathrm{d}t} &= \\frac{\\beta(t) SI}{N} + \\epsilon S - \\alpha E - \\mu E \\\\\n",
+ "\\frac{\\mathrm{d}I}{\\mathrm{d}t} &= \\alpha E - \\gamma I - \\mu I \\\\\n",
+ "\\frac{\\mathrm{d}R}{\\mathrm{d}t} &= \\gamma I - w R - \\mu R \\\\\n",
+ "\\beta(t) &= \\beta_0 \\left(1+\\delta \\cos \\left(\\frac{2 \\pi t}{P} \\right) \\right)\n",
+ "\\end{aligned}$$\n",
+ "\n",
+ "We solve this set of equations deterministically:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 36,
+ "id": "e7321259",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from pygom import common_models\n",
+ "import matplotlib.pyplot as plt\n",
+ "import numpy as np\n",
+ "import math\n",
+ "\n",
+ "# Set up PyGOM object\n",
+ "n_pop=1e5\n",
+ "mu=0.01/365\n",
+ "alpha=1/2\n",
+ "gamma=1/4\n",
+ "epsilon=100/(365*n_pop) # approximately 100*n_sus*365/(365*n_pop)=100*frac_sus~30 infections from external sources per year\n",
+ "w=1/(2*365) # waning rate, immunity lasts ~ 2 years.\n",
+ "beta0=1\n",
+ "delta=0.2\n",
+ "period=365\n",
+ "\n",
+ "ode = common_models.SEIR_Birth_Death_Periodic_Waning_Intro({'mu':mu,\n",
+ " 'alpha':alpha,\n",
+ " 'gamma':gamma,\n",
+ " 'epsilon':epsilon,\n",
+ " 'w':w,\n",
+ " 'beta0':beta0,\n",
+ " 'delta':delta,\n",
+ " 'period':period,\n",
+ " 'N':n_pop})\n",
+ "\n",
+ "# Time range and increments\n",
+ "tmax=365*20 # maximum time over which to run solver\n",
+ "dt=0.1 # timestep\n",
+ "n_timestep=math.ceil(tmax/dt) # number of iterations\n",
+ "t = np.linspace(0, tmax, n_timestep) # times at which solution will be evaluated\n",
+ "\n",
+ "# Initial conditions\n",
+ "x0 = [n_pop, 0, 0, 0, t[0]]\n",
+ "\n",
+ "ode.initial_values = (x0, t[0])\n",
+ "\n",
+ "solution=ode.solve_determ(t[1::])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5eefb897",
+ "metadata": {},
+ "source": [
+ "Plotting the infection prevalence reveals that the system eventually reaches a state of annual epidemics."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "30368b0d",
+ "metadata": {
+ "tags": [
+ "hide-input"
+ ]
+ },
+ "outputs": [],
+ "source": [
+ "f, ax = plt.subplots(figsize=(10, 2))\n",
+ "\n",
+ "ax.set_xlabel(\"Time\")\n",
+ "ax.set_ylabel(\"Infection prevalence\")\n",
+ "ax.plot(t[30000:]/365, solution[30000:,2])\n",
+ "plt.show()"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3.9.15 ('sphinx-doc')",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.19"
+ },
+ "vscode": {
+ "interpreter": {
+ "hash": "4dc1e323c80fe09539c74ad5c5a7c7d8d9ff99e04f7b3dbd3680daf878629d6e"
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/docs/notebooks/common_models/SEIR_Multiple.ipynb b/docs/notebooks/common_models/SEIR_Multiple.ipynb
index b9ff28df..ced9c1d6 100644
--- a/docs/notebooks/common_models/SEIR_Multiple.ipynb
+++ b/docs/notebooks/common_models/SEIR_Multiple.ipynb
@@ -10,10 +10,10 @@
"Multiple SEIR coupled together, without any birth death process.\n",
"\n",
"$$\\begin{aligned}\n",
- "\\frac{dS_{i}}{dt} &= dN_{i} - dS_{i} - \\lambda_{i}S_{i} \\\\\n",
- "\\frac{dE_{i}}{dt} &= \\lambda_{i}S_{i} - (d+\\epsilon)E_{i} \\\\\n",
- "\\frac{dI_{i}}{dt} &= \\epsilon E_{i} - (d+\\gamma) I_{i} \\\\\n",
- "\\frac{dR_{i}}{dt} &= \\gamma I_{i} - dR_{i}\n",
+ "\\frac{\\mathrm{d} S_{i}}{\\mathrm{d} t} &= dN_{i} - dS_{i} - \\lambda_{i}S_{i} \\\\\n",
+ "\\frac{\\mathrm{d} E_{i}}{\\mathrm{d} t} &= \\lambda_{i}S_{i} - (d+\\epsilon)E_{i} \\\\\n",
+ "\\frac{\\mathrm{d} I_{i}}{\\mathrm{d} t} &= \\epsilon E_{i} - (d+\\gamma) I_{i} \\\\\n",
+ "\\frac{\\mathrm{d} R_{i}}{\\mathrm{d}t} &= \\gamma I_{i} - dR_{i}\n",
"\\end{aligned}$$\n",
"\n",
"where\n",
diff --git a/docs/notebooks/common_models/SIR.ipynb b/docs/notebooks/common_models/SIR.ipynb
index 73dceb7e..6d98dfd9 100644
--- a/docs/notebooks/common_models/SIR.ipynb
+++ b/docs/notebooks/common_models/SIR.ipynb
@@ -8,19 +8,15 @@
"\n",
"{func}`.SIR`\n",
"\n",
- "A standard SIR model defined by the following equations.\n",
+ "The standard Susceptible-Infected-Recovered (SIR) model, which features heavily throughout this documentation, is defined by the following equations:\n",
"\n",
"$$\\begin{aligned}\n",
- "\\frac{dS}{dt} &= -\\beta SI \\\\\n",
- "\\frac{dI}{dt} &= \\beta SI - \\gamma I \\\\\n",
- "\\frac{dR}{dt} &= \\gamma I\n",
+ "\\frac{\\mathrm{d}S}{\\mathrm{d}t} &= - \\frac{\\beta SI}{N} \\\\\n",
+ "\\frac{\\mathrm{d}I}{\\mathrm{d}t} &= \\frac{\\beta SI}{N} - \\gamma I \\\\\n",
+ "\\frac{\\mathrm{d}R}{\\mathrm{d}t} &= \\gamma I\n",
"\\end{aligned}$$\n",
"\n",
- "Note that the examples and parameters are taken from {cite:t}`Brauer2008`,\n",
- "namely Figure 1.4. Hence, the first example below may not appear to make\n",
- "much sense.\n",
- "\n",
- "#TODO don't understand\n"
+ "We solve deterministically for flu-like parameters:"
]
},
{
@@ -31,22 +27,36 @@
"outputs": [],
"source": [
"from pygom import common_models\n",
- "\n",
- "import numpy\n",
- "\n",
- "ode = common_models.SIR({'beta':3.6, 'gamma':0.2})\n",
- "\n",
- "t = numpy.linspace(0, 730, 1001)\n",
- "\n",
- "N = 7781984.0\n",
- "\n",
- "x0 = [1.0, 10.0/N, 0.0]\n",
- "\n",
+ "import matplotlib.pyplot as plt\n",
+ "import numpy as np\n",
+ "import random\n",
+ "import math\n",
+ "\n",
+ "#####################\n",
+ "# Set up PyGOM object\n",
+ "#####################\n",
+ "\n",
+ "# Parameters\n",
+ "n_pop=1e4\n",
+ "gamma=1/4\n",
+ "R0=1.3\n",
+ "beta=R0*gamma\n",
+ "\n",
+ "ode = common_models.SIR({'beta':beta, 'gamma':gamma, 'N':n_pop})\n",
+ "\n",
+ "# Time range and increments\n",
+ "tmax=365 # maximum time over which to run solver\n",
+ "dt=0.1 # timestep\n",
+ "n_timestep=math.ceil(tmax/dt) # number of iterations\n",
+ "t = np.linspace(0, tmax, n_timestep) # times at which solution will be evaluated\n",
+ "\n",
+ "# Initial conditions\n",
+ "i0=1\n",
+ "x0=[n_pop-i0, i0, 0]\n",
"ode.initial_values = (x0, t[0])\n",
"\n",
- "solution = ode.integrate(t[1::])\n",
- "\n",
- "ode.plot()\n"
+ "# Deterministic evolution\n",
+ "solution=ode.solve_determ(t[1::])"
]
},
{
@@ -54,25 +64,32 @@
"id": "2b5252ff",
"metadata": {},
"source": [
- "\n",
- "Now we have the more sensible plot, where the initial susceptible population is\n",
- "only a fraction of 1.\n"
+ "Plotting the result recovers the familiar epidemic trajectory:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bd78eb6c",
- "metadata": {},
+ "metadata": {
+ "tags": [
+ "hide-input"
+ ]
+ },
"outputs": [],
"source": [
- "x0 = [0.065, 123*(5.0/30.0)/N, 0.0]\n",
+ "f, axarr = plt.subplots(1,3, layout='constrained', figsize=(10, 4))\n",
"\n",
- "ode.initial_values = (x0, t[0])\n",
+ "# Plot colours\n",
+ "colours=[\"C1\", \"C0\", \"C2\"]\n",
+ "stateList=[\"S\", \"I\", \"R\"]\n",
"\n",
- "solution = ode.integrate(t[1::])\n",
+ "for i in range(0, 3):\n",
+ " axarr[i].plot(t, solution[:,i], color=colours[i])\n",
+ " axarr[i].set_ylabel(stateList[i], rotation=0)\n",
+ " axarr[i].set_xlabel('Time')\n",
"\n",
- "ode.plot()"
+ "plt.show()"
]
}
],
@@ -83,8 +100,16 @@
"name": "python3"
},
"language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
"name": "python",
- "version": "3.9.15"
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.19"
},
"vscode": {
"interpreter": {
diff --git a/docs/notebooks/common_models/SIR_Birth_Death.ipynb b/docs/notebooks/common_models/SIR_Birth_Death.ipynb
index e93b3599..b0123894 100644
--- a/docs/notebooks/common_models/SIR_Birth_Death.ipynb
+++ b/docs/notebooks/common_models/SIR_Birth_Death.ipynb
@@ -7,16 +7,19 @@
"# SIR, birth and death \n",
"{func}`.SIR_Birth_Death`\n",
"\n",
- "Next, we look at an SIR model with birth and death processes, where populations are added (birth) or removed (death).\n",
+ "Here we consider an SIR model in which individuals may be removed by death from each compartment at a uniform rate per person, $\\gamma$.\n",
+ "The population is replenished via births into the susceptible compartment at the same rate, thus conserving the total population by design.\n",
+ "For deterministic evolution, the population size remains constant whereas for stochastic evolution, the size fluctuates around this value.\n",
+ "The equations are as follows:\n",
"\n",
"$$\\begin{aligned}\n",
- "\\frac{dS}{dt} &= B -\\beta SI - \\mu S \\\\\n",
- "\\frac{dI}{dt} &= \\beta SI - \\gamma I - \\mu I \\\\\n",
- "\\frac{dR}{dt} &= \\gamma I\n",
+ "\\frac{\\mathrm{d}S}{\\mathrm{d}t} &= \\mu N - \\frac{\\beta SI}{N} - \\mu S \\\\\n",
+ "\\frac{\\mathrm{d}I}{\\mathrm{d}t} &= \\frac{\\beta SI}{N} - \\gamma I - \\mu I \\\\\n",
+ "\\frac{\\mathrm{d}R}{\\mathrm{d}t} &= \\gamma I - \\mu R\n",
"\\end{aligned}$$\n",
"\n",
- "Continuing from the example above, but now with a much longer time\n",
- "frame. Note that the birth and death rate are the same to maintain a constant population.\n"
+ "As an example, we study stochastic evolution of this system with measles-like parameters in 3 differently sized populations.\n",
+ "This provides a demonstration of threshold population sizes in order to support endemic circulation of certain pathogens."
]
},
{
@@ -27,30 +30,134 @@
"outputs": [],
"source": [
"from pygom import common_models\n",
+ "import matplotlib.pyplot as plt\n",
+ "import numpy as np\n",
+ "import math\n",
"\n",
- "import numpy\n",
+ "#####################\n",
+ "# Set up PyGOM object\n",
+ "#####################\n",
"\n",
- "B = 126372.0/365.0\n",
+ "# Parameters\n",
+ "n_pop=1e4\n",
+ "mu=0.01/365 # birth/death rate 1% per year\n",
+ "gamma=1/20 \n",
+ "R0=15\n",
+ "beta=R0*gamma\n",
"\n",
- "N = 7781984.0\n",
+ "ode = common_models.SIR_Birth_Death({'beta':beta, 'gamma':gamma, 'mu':mu, 'N':n_pop})\n",
"\n",
- "ode = common_models.SIR_Birth_Death({'beta':3.6, 'gamma':0.2, 'B':B/N, 'mu':B/N})\n",
+ "# Time range and increments\n",
+ "tmax=365*10 # maximum time over which to run solver\n",
+ "dt=1 # timestep\n",
+ "n_timestep=math.ceil(tmax/dt) # number of iterations\n",
+ "t = np.linspace(0, tmax, n_timestep) # times at which solution will be evaluated\n",
"\n",
- "t = numpy.linspace(0, 35*365, 10001)\n",
+ "# Initial conditions (endemic equilibrium derived from stationary point)\n",
+ "def sir_bd_endemic_eq(mu, beta, gamma, n_pop):\n",
+ " s0=math.floor((gamma+mu)*n_pop/beta)\n",
+ " i0=math.floor(mu*(n_pop-s0)*n_pop/(beta*s0))\n",
+ " r0=n_pop-(s0+i0)\n",
+ " return [s0, i0, r0]\n",
"\n",
- "x0 = [0.065, 123.0*(5.0/30.0)/N, 0.0]\n",
+ "x0=sir_bd_endemic_eq(mu, beta, gamma, n_pop)\n",
+ "ode.initial_values = (x0, t[0])\n",
+ "\n",
+ "##########\n",
+ "# Simulate\n",
+ "##########\n",
+ "n_sim=10\n",
+ "np.random.seed(1)\n",
+ "\n",
+ "solution, simT = ode.solve_stochast(t, n_sim, full_output=True)\n",
+ "y=np.dstack(solution)\n",
"\n",
+ "############################\n",
+ "# try larger population size\n",
+ "############################\n",
+ "n_pop=1e5\n",
+ "ode = common_models.SIR_Birth_Death({'beta':beta, 'gamma':gamma, 'mu':mu, 'N':n_pop}) # update parameter\n",
+ "x0=sir_bd_endemic_eq(mu, beta, gamma, n_pop) # recalculate IC's\n",
"ode.initial_values = (x0, t[0])\n",
+ "solution_2, simT_2 = ode.solve_stochast(t, n_sim, full_output=True) # simulate\n",
+ "y_2=np.dstack(solution_2)\n",
"\n",
- "solution = ode.integrate(t[1::])\n",
+ "#################################\n",
+ "# try even larger population size\n",
+ "#################################\n",
+ "n_pop=1e6\n",
+ "ode = common_models.SIR_Birth_Death({'beta':beta, 'gamma':gamma, 'mu':mu, 'N':n_pop}) # update parameter\n",
+ "x0=sir_bd_endemic_eq(mu, beta, gamma, n_pop) # recalculate IC's\n",
+ "ode.initial_values = (x0, t[0])\n",
+ "solution_3, simT_3 = ode.solve_stochast(t, n_sim, full_output=True) # simulate\n",
+ "y_3=np.dstack(solution_3)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bbc9ac4f",
+ "metadata": {},
+ "source": [
+ "Plotting the results, we see that for populations of sizes 10,000 and 100,000, the infected population is critically close to zero, such that stochastic fluctuations eventually lead to disease extinction.\n",
+ "This is of course signified by the infected class reaching zero, but also by the recovered and susceptible classes undergoing stable linear growth due to population turnover.\n",
+ "When the population size is 1,000,000, we see that the infected subset, of typical size 500, is able to persist for the full 10 years of the simulation."
+ ]
+ },
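+ {
+ "cell_type": "markdown",
+ "id": "2b3c4d5e",
+ "metadata": {},
+ "source": [
+ "As a rough check, the endemic equilibrium used for the initial conditions gives $I^* = \\frac{\\mu N (N - S^*)}{\\beta S^*} \\approx 5 \\times 10^{-4} N$ for these parameters, i.e. roughly 5, 50 and 500 infected individuals for the three population sizes, which explains why only the largest population sustains transmission."
+ ]
+ },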
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0f5389b0",
+ "metadata": {
+ "tags": [
+ "hide-input"
+ ]
+ },
+ "outputs": [],
+ "source": [
+ "f, axarr = plt.subplots(3,3, layout='constrained', figsize=(10, 5))\n",
+ "\n",
+ "for i in range(0,3):\n",
+ " # Plot individual trajectories\n",
+ " for j in range(0, n_sim):\n",
+ " axarr[0][i].plot(t/365, y[:,i,j], alpha=0.4, color=\"C0\")\n",
+ " axarr[1][i].plot(t/365, y_2[:,i,j], alpha=0.4, color=\"C1\")\n",
+ " axarr[2][i].plot(t/365, y_3[:,i,j], alpha=0.4, color=\"C2\")\n",
+ "\n",
+ "# Add titles\n",
+ "stateList = ['S', 'I', 'R']\n",
+ "for idx, state in enumerate(stateList):\n",
+ " axarr[0][idx].set_ylabel(state, rotation=0)\n",
+ " axarr[1][idx].set_ylabel(state, rotation=0)\n",
+ " axarr[2][idx].set_ylabel(state, rotation=0)\n",
+ " axarr[0][idx].set_xlabel('Time (years)')\n",
+ " axarr[1][idx].set_xlabel('Time (years)')\n",
+ " axarr[2][idx].set_xlabel('Time (years)')\n",
"\n",
- "ode.plot()"
+ "axarr[0][1].set_title(\"Population size = 10,000\")\n",
+ "axarr[1][1].set_title(\"Population size = 100,000\")\n",
+ "axarr[2][1].set_title(\"Population size = 1,000,000\")\n",
+ "\n",
+ "plt.show()"
]
}
],
"metadata": {
+ "kernelspec": {
+ "display_name": "pygom_development",
+ "language": "python",
+ "name": "python3"
+ },
"language_info": {
- "name": "python"
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.19"
}
},
"nbformat": 4,
diff --git a/docs/notebooks/common_models/SIS.ipynb b/docs/notebooks/common_models/SIS.ipynb
index cb09e523..cd9237ee 100644
--- a/docs/notebooks/common_models/SIS.ipynb
+++ b/docs/notebooks/common_models/SIS.ipynb
@@ -7,17 +7,14 @@
"# SIS\n",
"{func}`.SIS`\n",
"\n",
- "A standard SIS model without the total population $N$. We assume here\n",
- "that $S + I = N$ so we can always normalize to 1. The state\n",
- "$S$ is not required for understanding the model because it is a\n",
- "deterministic function of state $I$.\n",
+ "Perhaps the simplest epidemic model is a Susceptible-Infected-Susceptible (SIS) system, in which susceptible individuals may be infected and then do not have any immunity upon recovery.\n",
"\n",
"$$\\begin{aligned}\n",
- "\\frac{dS}{dt} &= -\\beta S I + \\gamma I \\\\\n",
- "\\frac{dI}{dt} &= \\beta S I - \\gamma I.\n",
+ "\\frac{\\mathrm{d}S}{\\mathrm{d}t} &= -\\frac{\\beta S I}{N} + \\gamma I \\\\\n",
+ "\\frac{\\mathrm{d}I}{\\mathrm{d}t} &= \\frac{\\beta S I}{N} - \\gamma I.\n",
"\\end{aligned}$$\n",
"\n",
- "An example of an implementation is given below.\n"
+ "We see how this evolves deterministically:"
]
},
{
@@ -28,28 +25,80 @@
"outputs": [],
"source": [
"from pygom import common_models\n",
- "\n",
"import matplotlib.pyplot as plt\n",
+ "import numpy as np\n",
+ "import math\n",
"\n",
- "import numpy\n",
+ "# Set up PyGOM object\n",
+ "n_pop=1e4\n",
"\n",
- "ode = common_models.SIS({'beta':0.5,'gamma':0.2})\n",
+ "ode = common_models.SIS({'beta':0.5, 'gamma':0.2, 'N':n_pop})\n",
"\n",
- "t = numpy.linspace(0, 20, 101)\n",
+ "# Initial conditions\n",
+ "i0=10\n",
+ "x0 = [n_pop-i0, i0]\n",
"\n",
- "x0 = [1.0, 0.1]\n",
+ "# Time range and increments\n",
+ "tmax=50 # maximum time over which to run solver\n",
+ "dt=0.1 # timestep\n",
+ "n_timestep=math.ceil(tmax/dt) # number of iterations\n",
+ "t = np.linspace(0, tmax, n_timestep) # times at which solution will be evaluated\n",
"\n",
"ode.initial_values = (x0, t[0])\n",
+ "solution=ode.solve_determ(t[1::])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b272c27d",
+ "metadata": {},
+ "source": [
+ "After sufficiently long time, the system reaches an equilibrium state:"
+ ]
+ },
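+ {
+ "cell_type": "markdown",
+ "id": "3c4d5e6f",
+ "metadata": {},
+ "source": [
+ "Setting $\\frac{\\mathrm{d}I}{\\mathrm{d}t}=0$ in the equations above gives the endemic equilibrium $S^* = \\frac{\\gamma N}{\\beta}$ and $I^* = N \\left( 1 - \\frac{\\gamma}{\\beta} \\right)$, which for the parameters used here is $S^* = 4000$ and $I^* = 6000$, as the plot below confirms."
+ ]
+ },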
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2bce7cd0",
+ "metadata": {
+ "tags": [
+ "hide-input"
+ ]
+ },
+ "outputs": [],
+ "source": [
+ "f, axarr = plt.subplots(1,2, layout='constrained', figsize=(10, 4))\n",
"\n",
- "solution = ode.integrate(t[1::])\n",
+ "# Plot colours\n",
+ "colours=[\"C1\", \"C0\"]\n",
+ "stateList=[\"S\", \"I\"]\n",
"\n",
- "ode.plot()\n"
+ "for i in range(0, 2):\n",
+ " axarr[i].plot(t, solution[:,i], color=colours[i])\n",
+ " axarr[i].set_ylabel(stateList[i], rotation=0)\n",
+ " axarr[i].set_xlabel('Time')\n",
+ "\n",
+ "plt.show()"
]
}
],
"metadata": {
+ "kernelspec": {
+ "display_name": "pygom_development",
+ "language": "python",
+ "name": "python3"
+ },
"language_info": {
- "name": "python"
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.19"
}
},
"nbformat": 4,
diff --git a/docs/notebooks/common_models/SIS_Periodic.ipynb b/docs/notebooks/common_models/SIS_Periodic.ipynb
index c19fe77d..622cfc6b 100644
--- a/docs/notebooks/common_models/SIS_Periodic.ipynb
+++ b/docs/notebooks/common_models/SIS_Periodic.ipynb
@@ -7,17 +7,30 @@
"# SIS, periodic\n",
"{func}`.SIS_Periodic`\n",
"\n",
- "Now we look at an extension of the SIS model by incorporating a periodic\n",
- "contact rate. Note how our equation is defined by a single ODE for state\n",
- "**I**.\n",
+ "This is an extension of the SIS model which incorporates a periodic infection rate, $\\beta(t)$.\n",
+ "This could be used to mimic seasonal variation in infectivity due to yearly contact rate patterns or climate drivers, for example.\n",
+ "We define $\\beta(t)$ as follows:\n",
"\n",
- "$$\\frac{dI}{dt} = (\\beta(t)N - \\alpha) I - \\beta(t)I^{2}$$\n",
+ "$$\\begin{aligned}\n",
+ "\\beta(t) &= \\beta_0 \\left(1+\\delta \\cos \\left(\\frac{2 \\pi t}{P} \\right) \\right)\n",
+ "\\end{aligned}$$\n",
"\n",
- "where $\\beta(t) = 2 - 1.8 \\cos(5t)$. As the name suggests, it achieves a\n",
- "(stable) periodic solution. Note how the plots have two sub-graphs,\n",
- "where $\\tau$ is in fact our time component which we have taken out of\n",
- "the original equation when converting it to a autonomous system.\n",
- "\n"
+ "where $\\beta_0$ is the baseline infection rate, $\\delta$ is the magnitude of oscillations from the baseline ($-1<\\delta<1$ so that $\\beta>0$) and $P$ is the period of oscillations.\n",
+ "\n",
+ "Also, note how we can use $I+S=N$ to eliminate the equation for $S$:\n",
+ "\n",
+ "$$\\begin{aligned}\n",
+ "\\frac{\\mathrm{d}I}{\\mathrm{d}t} &= (\\beta(t)N - \\alpha) I - \\beta(t)I^{2} \\\\\n",
+ "\\end{aligned}$$\n",
+ "\n",
+ "In Heathcote's classical model, $\\gamma=1$ and:\n",
+ "\n",
+ "$$\\begin{aligned}\n",
+ "\\beta(t) &= 2 - 1.8 \\cos(5t) \\\\\n",
+ "&= 2\\left(1 - 0.9 \\cos \\left( \\frac{2 \\pi t}{ \\frac{2 \\pi}{5} } \\right) \\right)\n",
+ "\\end{aligned}$$\n",
+ "\n",
+ "so that $\\beta_0=2$, $\\delta=0.9$ and $P=\\frac{2 \\pi}{5}$."
]
},
{
@@ -28,22 +41,54 @@
"outputs": [],
"source": [
"from pygom import common_models\n",
- "\n",
"import matplotlib.pyplot as plt\n",
+ "import numpy as np\n",
+ "import math\n",
"\n",
- "import numpy\n",
+ "# Set up PyGOM object\n",
+ "n_pop=1e4\n",
"\n",
- "ode = common_models.SIS_Periodic({'alpha':1.0})\n",
+ "ode = common_models.SIS_Periodic({'gamma':1, 'beta0':2, 'delta':0.9, 'period':(2*math.pi/5), 'N':n_pop})\n",
"\n",
- "t = numpy.linspace(0, 10, 101)\n",
+ "# Time range and increments\n",
+ "tmax=10 # maximum time over which to run solver\n",
+ "dt=0.01 # timestep\n",
+ "n_timestep=math.ceil(tmax/dt) # number of iterations\n",
+ "t = np.linspace(0, tmax, n_timestep) # times at which solution will be evaluated\n",
"\n",
- "x0 = [0.1,0.]\n",
+ "# Initial conditions\n",
+ "i0=0.1*n_pop\n",
+ "x0 = [i0, t[0]]\n",
"\n",
"ode.initial_values = (x0, t[0])\n",
+ "solution=ode.solve_determ(t[1::])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5eefb897",
+ "metadata": {},
+ "source": [
+ "We plot the infected trajectory which shows periodic evolution."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "510dd216",
+ "metadata": {
+ "tags": [
+ "hide-input"
+ ]
+ },
+ "outputs": [],
+ "source": [
+ "f, ax = plt.subplots(figsize=(10, 4))\n",
"\n",
- "solution = ode.integrate(t[1::])\n",
- "\n",
- "ode.plot()"
+ "ax.set_xlabel(\"Time\")\n",
+ "ax.set_ylabel(\"I\", rotation=0)\n",
+ "ax.plot(t, solution[:,0])\n",
+ "plt.show()"
]
}
],
@@ -54,8 +99,16 @@
"name": "python3"
},
"language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
"name": "python",
- "version": "3.9.15"
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.19"
},
"vscode": {
"interpreter": {
diff --git a/docs/notebooks/epi.ipynb b/docs/notebooks/epi.ipynb
index 8b47d848..9e57058c 100644
--- a/docs/notebooks/epi.ipynb
+++ b/docs/notebooks/epi.ipynb
@@ -6,15 +6,9 @@
"source": [
"# Epidemic Analysis\n",
"\n",
- "A common application of ODEs is in the field\n",
- "of epidemiology modeling, where compartmental models are\n",
- "used to describe disease progression through a population. \n",
"We demonstrate some of the simpler algebraic analysis that you may wish to undertake on a compartmental model.\n",
"\n",
- "We revisit the SIR model with birth and death\n",
- "processes, which is an extension of the one in {doc}`sir`. \n",
- "\n",
- "First, we initialize the model, this time by importing it from {mod}`.common_models`, rather than constructing it ourselves."
+ "First, we initialize an SIR model, this time by importing it from {mod}`.common_models`, rather than constructing it ourselves:"
]
},
{
@@ -22,20 +16,45 @@
"execution_count": 1,
"id": "8c84ea26",
"metadata": {},
+ "outputs": [],
+ "source": [
+ "from pygom import common_models\n",
+ "\n",
+ "ode = common_models.SIR_Birth_Death()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8432a422",
+ "metadata": {},
+ "source": [
+ "We can verify"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "f7610d25",
+ "metadata": {},
"outputs": [
{
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Matrix([[B - I*S*beta - S*mu], [I*S*beta - I*gamma - I*mu], [I*gamma]])\n"
- ]
+ "data": {
+ "text/latex": [
+ "$\\displaystyle \\left[\\begin{matrix}B - I S \\beta - S \\mu\\\\I S \\beta - I \\gamma - I \\mu\\\\I \\gamma\\end{matrix}\\right]$"
+ ],
+ "text/plain": [
+ "Matrix([\n",
+ "[ B - I*S*beta - S*mu],\n",
+ "[I*S*beta - I*gamma - I*mu],\n",
+ "[ I*gamma]])"
+ ]
+ },
+ "execution_count": 2,
+ "metadata": {},
+ "output_type": "execute_result"
}
],
"source": [
- "from pygom import common_models\n",
- "\n",
- "ode = common_models.SIR_Birth_Death()\n",
- "\n",
"ode.get_ode_eqn()"
]
},
@@ -44,18 +63,9 @@
"id": "ea8c4b15",
"metadata": {},
"source": [
- "\n",
"## Obtaining the reproduction number (R0)\n",
"\n",
- "The reproduction number, also known as the $R_{0}$, is the single most\n",
- "powerful piece and reduced piece of information available from an epidemiological\n",
- "compartmental model. This value represents the number of the disease-naive population who can be infected by a single member of the infectious population. When the parameter values are known, $R_{0}$ provides a single number which can then lead to an interpretation of the system, where $R_{0} = 1$ defines the tipping point of an outbreak. An $R_{0}$ value of\n",
- "more than one signifies growth of cases (a potential outbreak), and an $R_{0}$ of less than one\n",
- "indicates that the disease will stop spreading naturally.\n",
- "\n",
- "#TODO reference\n",
- "\n",
- "To obtain the $R_{0}$, we need have to tell the {func}`.R0` function which states\n",
+ "To obtain $R_{0}$, we need have to tell the {func}`.R0` function which states\n",
"represent the *disease state*, which in this case is the state **I**.\n",
"\n",
"#TODO is this the disease state, or the infectious state?"
@@ -105,19 +115,15 @@
},
{
"cell_type": "code",
- "execution_count": 1,
+ "execution_count": 4,
"id": "717c6868",
"metadata": {},
"outputs": [
{
- "ename": "NameError",
- "evalue": "name 'disease_progression_matrices' is not defined",
- "output_type": "error",
- "traceback": [
- "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
- "\u001b[1;31mNameError\u001b[0m Traceback (most recent call last)",
- "Cell \u001b[1;32mIn[1], line 1\u001b[0m\n\u001b[1;32m----> 1\u001b[0m F, V \u001b[39m=\u001b[39m disease_progression_matrices(ode, \u001b[39m'\u001b[39m\u001b[39mI\u001b[39m\u001b[39m'\u001b[39m)\n\u001b[0;32m 3\u001b[0m e \u001b[39m=\u001b[39m R0_from_matrix(F, V)\n\u001b[0;32m 5\u001b[0m \u001b[39mprint\u001b[39m(e)\n",
- "\u001b[1;31mNameError\u001b[0m: name 'disease_progression_matrices' is not defined"
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[S*beta/(gamma + mu)]\n"
]
}
],
@@ -126,7 +132,7 @@
"\n",
"e = R0_from_matrix(F, V)\n",
"\n",
- "print(e)\n"
+ "print(e)"
]
},
{
@@ -174,7 +180,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.9.15"
+ "version": "3.9.19"
},
"vscode": {
"interpreter": {
diff --git a/docs/notebooks/epijson.ipynb b/docs/notebooks/epijson.ipynb
index 882abe8b..73ddd29e 100644
--- a/docs/notebooks/epijson.ipynb
+++ b/docs/notebooks/epijson.ipynb
@@ -6,20 +6,10 @@
"source": [
"# Reading and using EpiJSON data\n",
"\n",
- "Epidemiology data is complicated due to the many different stages a\n",
- "patient can go through and whether a modeling technique is applicable\n",
- "depends heavily on the recording of data. [EpiJSON](https://github.com/Hackout2/EpiJSON) is a framework which\n",
- "tries to captures all the information in a JSON format {cite}`Finnie2016`.\n",
- "\n",
- "PyGOM provides the functionality to process EpiJSON data. Due to\n",
- "the nature of this package, modeling of ODEs, data files are processed with this in mind. The output is therefore in the cumulative form as\n",
- "default, shown below, in a [`pandas.DataFrame`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) format. \n",
- "\n",
- "#TODO unsure what this means\n",
- "\n",
- "\n",
- "The input can be\n",
- "in a string format, a file or already a `dict`."
+ "[EpiJSON](https://github.com/Hackout2/EpiJSON) is a framework which tries to capture epidemiological information in a JSON format {cite}`Finnie2016`.\n",
+ "PyGOM provides the functionality to process EpiJSON data with a view to preparing it for its various modelling features previously discussed in this guide.\n",
+ "The input can be in a string format, a file or already a `dict`.\n",
+ "The output is in the cumulative form as default, shown below, in a [`pandas.DataFrame`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) format. "
]
},
{
@@ -30,13 +20,10 @@
"outputs": [],
"source": [
"from pygom.loss.read_epijson import epijson_to_data_frame\n",
- "\n",
"import pkgutil\n",
"\n",
"data = pkgutil.get_data('pygom', 'data/eg1.json')\n",
- "\n",
"df = epijson_to_data_frame(data)\n",
- "\n",
"print(df)"
]
},
@@ -62,7 +49,7 @@
"\n",
"from pygom.loss.epijson_loss import EpijsonLoss\n",
"\n",
- "ode = common_models.SIR([0.5, 0.3])\n",
+ "ode = common_models.SIR_norm([0.5, 0.3])\n",
"\n",
"obj = EpijsonLoss([0.005, 0.03], ode, data, 'Death', 'R', [300, 2, 0])\n",
"\n",
@@ -84,18 +71,14 @@
"id": "a5ac54c8",
"metadata": {},
"source": [
- "Given an initialized object, all the operations are inherited from\n",
- "{class}`.BaseLoss`. We demonstrated above how to calculate the cost\n",
- "and the rest will not be shown for brevity. The data frame is stored\n",
- "inside of the loss object and can be retrieved for inspection at any\n",
- "time point.\n",
+ "Given an initialized object, all the operations are inherited from {class}`.BaseLoss`.\n",
+ "We demonstrated above how to calculate the cost and the rest will not be shown for brevity.\n",
+ "The data frame is stored inside of the loss object and can be retrieved for inspection at any time point.\n",
"\n",
"```{note}\n",
- "Initial values for the states are required,\n",
- "but the time is not. When the time is not supplied, then the first time\n",
- "point in the data will be treated as $t0$. The input Death indicates which column of the data is used\n",
- "and $R$ the corresponding state the data belongs to.\n",
+ "Initial values for the states are required, but the time is not.\n",
+ "When the time is not supplied, then the first time point in the data will be treated as $t0$.\n",
+ "The input Death indicates which column of the data is used and $R$ the corresponding state the data belongs to.\n",
"```"
]
},
diff --git a/docs/notebooks/extract_info.ipynb b/docs/notebooks/extract_info.ipynb
new file mode 100644
index 00000000..84875e30
--- /dev/null
+++ b/docs/notebooks/extract_info.ipynb
@@ -0,0 +1,229 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "e69f436b",
+ "metadata": {},
+ "source": [
+ "# Extracting model information\n",
+ "\n",
+ "In the study of ODE systems, there are many calculations which are frequently performed and PyGOM has some functionality to provide assistance.\n",
+ "We will again use the SIR model as our example system, but this time we will make use of the PyGOM `common_models` module, where many predefined models are stored.\n",
+ "This means we avoid having to build the model from scratch again, saving time and lines of code.\n",
+ "Here we initialise a `SIR` model:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "d499587e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from pygom import common_models\n",
+ "ode = common_models.SIR()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5defac16",
+ "metadata": {},
+ "source": [
+ "## Verification\n",
+ "\n",
+ "As seen previously, the {func}`.get_ode_eqn` function allows us to verify that our ODE equations are as we'd expect:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5a2be3c1",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ode.get_ode_eqn()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d9f4249d",
+ "metadata": {},
+ "source": [
+ "```{tip}\n",
+ "In addition to showing the Python equation form of the ODEs, we can also display them as either symbols or latex code, which can save some extra typing when porting the equations to another document.\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2089ef15",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ode.print_ode()\n",
+ "ode.print_ode(True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d9d71015",
+ "metadata": {},
+ "source": [
+ "We can check the model definition in terms of a transition matrix:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0ea90388",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ode.get_transition_matrix()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9c7b5c60",
+ "metadata": {},
+ "source": [
+ "where only the upper off diagonal triangle is necessary to fully define the system.\n",
+ "\n",
+ "We can even inspect the transitions graphically:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e04194ea",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ode.get_transition_graph();"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f77cfefa",
+ "metadata": {},
+ "source": [
+ "## Algebraic insights\n",
+ "\n",
+ "We briefly outline some of the algebraic results which can be quickly accessed by PyGOM. Firstly, we can check if our system is linear:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1d55ef11",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ode.linear_ode()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c3fa0e62",
+ "metadata": {},
+ "source": [
+ "For stability analysis and speeding up numerical integrators, it may be useful to know the Jacobian, Hessian (where three 2D arrays are returned, rather than one 3D array) or gradient which PyGOM has functions for respectively:\n",
+ "\n",
+ "```{warning}\n",
+ "In different contexts it can be useful to know the derivatives with respect to the state variables or the parameters. Make sure you know which one you require and check that the PyGOM function you are using provides it.\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "9c6ec971",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ode.get_jacobian_eqn()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0a8b4ff6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ode.get_hessian_eqn()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5e1c7dfb",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ode.get_grad_eqn()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3f572e7c",
+ "metadata": {},
+ "source": [
+ "## Epidemiology specific insights\n",
+ "\n",
+ "Under development are functions to obtain numeric and algebraic expressions for the basic reproduction number, $R_0$.\n",
+ "Currently, these can be obtained in two steps, first by finding the next generation matrix and then calculating $R_0$ from this, assuming in the initial conditions that $S(0)=N$.\n",
+ "We must specify which state represents the *infectious state*, which in this case is the state **I**."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "d099d92b",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[S*beta/(N*gamma)]\n"
+ ]
+ }
+ ],
+ "source": [
+ "from pygom.model.epi_analysis import *\n",
+ "\n",
+ "F, V = disease_progression_matrices(ode, 'I')\n",
+ "e = R0_from_matrix(F, V)\n",
+ "print(e)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3.9.15 ('sphinx-doc')",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.19"
+ },
+ "vscode": {
+ "interpreter": {
+ "hash": "4dc1e323c80fe09539c74ad5c5a7c7d8d9ff99e04f7b3dbd3680daf878629d6e"
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/docs/notebooks/insights.ipynb b/docs/notebooks/insights.ipynb
new file mode 100644
index 00000000..e9a43f3b
--- /dev/null
+++ b/docs/notebooks/insights.ipynb
@@ -0,0 +1,41 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "e395ad9b",
+ "metadata": {},
+ "source": [
+ "# ODE Insights\n",
+ "\n",
+ "Now that our SIR model is encapsulated in the {class}`.SimulateOde` class, it is ready to be studied using PyGOM's various functionalities.\n",
+ "Before moving on to more complex methods such as parameter fitting and simulation, we can take advantage of several useful features of PyGOM which provide us with more analytical insights - the sort we might commonly find ourselves calculating, requiring pen and paper to do so."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3.9.15 ('sphinx-doc')",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.19"
+ },
+ "vscode": {
+ "interpreter": {
+ "hash": "4dc1e323c80fe09539c74ad5c5a7c7d8d9ff99e04f7b3dbd3680daf878629d6e"
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/docs/notebooks/model_params.ipynb b/docs/notebooks/model_params.ipynb
new file mode 100644
index 00000000..7ddbbf50
--- /dev/null
+++ b/docs/notebooks/model_params.ipynb
@@ -0,0 +1,152 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "e395ad9b",
+ "metadata": {},
+ "source": [
+ "# Parameterisation\n",
+ "\n",
+ "Until now, we have only dealt with parameters when it was necessary to inform PyGOM which of our symbols refer to states and which to parameters.\n",
+ "However, before PyGOM can find numerical solutions to the equations, it must be fed numerical parameter values.\n",
+ "PyGOM's ODE solvers accept parameters in two forms: fixed, where they remain constant, or random, where they are drawn from a given distribution.\n",
+ "We demonstrate these features on our model system, the SIR compartmental model.\n",
+ "We start, as always, by encapsulating our ODE system in a PyGOM object, in this case loading a previously defined model."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1679a48a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from pygom import common_models\n",
+ "ode = common_models.SIR()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "404cea05",
+ "metadata": {},
+ "source": [
+ "## Fixed parameters\n",
+ "\n",
+ "Defining fixed parameters for $\\beta$, $\\gamma$ and $N$ is simply done via a list of tuples as follows:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "cc1bd57c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "fixed_param_set=[('beta', 0.3), ('gamma', 0.25), ('N', 1e4)]\n",
+ "ode.parameters=fixed_param_set"
+ ]
+ },
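+ {
+ "cell_type": "markdown",
+ "id": "4d5e6f70",
+ "metadata": {},
+ "source": [
+ "We can confirm that the parameters have been set by querying the `parameters` attribute, in the same way as in the model solving section:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5e6f7081",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ode.parameters"
+ ]
+ },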
+ {
+ "cell_type": "markdown",
+ "id": "b7f1018f",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "## Random parameters\n",
+ "\n",
+ "Instead, imagine that we have some prior uncertainty on the values of our model parameters.\n",
+ "We may wish to reflect this by running model simulations over a variety of parameter values drawn randomly from a probability distribution.\n",
+ "A suitable choice of distribution for $\\gamma$ and $\\beta$ is a gamma distribution, since it ensures that both parameters are positive as required.\n",
+ "In this example, we'll keep the total population, $N$, fixed, showing that a mixture of parameter types (fixed and random) is possible.\n",
+ "\n",
+ "To define our random distributions, we make use of the familiar syntax from [R](http://www.r-project.org/).\n",
+ "Slightly cumbersomely, we have to define it via a tuple, where the first item is the function handle (name) and the second the parameters. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "52734403",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from pygom.utilR import rgamma\n",
+ "random_param_set = dict() # container for random param set\n",
+ "random_param_set['gamma'] = (rgamma,{'shape':100, 'rate':400})\n",
+ "random_param_set['beta'] = (rgamma,{'shape':100, 'rate':333.33})\n",
+ "random_param_set['N'] = 1e4"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a4c4ca97",
+ "metadata": {},
+ "source": [
+ "The values of the shape and rate parameters mean that $\\gamma$ and $\\beta$ have means of 0.25 and 0.3 and standard deviations of 0.025 and 0.03 respectively.\n",
+ "When changing parameters, it is a good idea to define a new {class}`.SimulateOde` object, since there may be some calculated variables leftover from the previous parameter set.\n",
+ "We do not need to inform PyGOM that the parameters are random and define them in the same way as before:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "24447ca7",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ode = common_models.SIR()\n",
+ "ode.parameters=random_param_set"
+ ]
+ },
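+ {
+ "cell_type": "markdown",
+ "id": "6f708192",
+ "metadata": {},
+ "source": [
+ "As a quick sanity check of the distributions chosen above (a sketch using NumPy's gamma sampler directly, independent of PyGOM's `rgamma`), we can draw samples and confirm the quoted means and standard deviations:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "708192a3",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import numpy as np\n",
+ "\n",
+ "rng = np.random.default_rng(1)\n",
+ "\n",
+ "# NumPy parameterises the gamma distribution by shape and scale, where scale = 1/rate\n",
+ "gamma_samples = rng.gamma(shape=100, scale=1/400, size=100_000)\n",
+ "beta_samples = rng.gamma(shape=100, scale=1/333.33, size=100_000)\n",
+ "\n",
+ "print(gamma_samples.mean(), gamma_samples.std())  # approximately 0.25 and 0.025\n",
+ "print(beta_samples.mean(), beta_samples.std())    # approximately 0.30 and 0.03"
+ ]
+ },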
+ {
+ "cell_type": "markdown",
+ "id": "b05c629f",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3.9.15 ('sphinx-doc')",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.19"
+ },
+ "vscode": {
+ "interpreter": {
+ "hash": "4dc1e323c80fe09539c74ad5c5a7c7d8d9ff99e04f7b3dbd3680daf878629d6e"
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/docs/notebooks/model_solve.ipynb b/docs/notebooks/model_solve.ipynb
new file mode 100644
index 00000000..7ad3babf
--- /dev/null
+++ b/docs/notebooks/model_solve.ipynb
@@ -0,0 +1,516 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "efbf9af8",
+ "metadata": {
+ "tags": [
+ "remove-input"
+ ]
+ },
+ "outputs": [],
+ "source": [
+ "# Reload all previous stuff, not sure how to do this without redoing everything...\n",
+ "stateList = ['S', 'I', 'R']\n",
+ "paramList = ['beta', 'gamma']\n",
+ "from pygom import Transition, TransitionType\n",
+ "odeList = [\n",
+ " Transition(origin='S', equation='-beta*S*I', transition_type=TransitionType.ODE),\n",
+ " Transition(origin='I',equation='beta*S*I - gamma*I', transition_type=TransitionType.ODE),\n",
+ " Transition(origin='R', equation='gamma*I', transition_type=TransitionType.ODE) \n",
+ "]\n",
+ "transList = [\n",
+ " Transition(origin='S', destination='I', equation='beta*S*I', transition_type=TransitionType.T),\n",
+ " Transition(origin='I', destination='R', equation='gamma*I', transition_type=TransitionType.T)\n",
+ "]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "133de424",
+ "metadata": {},
+ "source": [
+ "# Solving the model\n",
+ "\n",
+ "We will now find deterministic solutions to the SIR model.\n",
+ "First we must import the relevant class"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "06b092c7",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from pygom import DeterministicOde"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "46f1fc79",
+ "metadata": {},
+ "source": [
+ "Now we initialize the class, which constructs our ODE system from all the information we have provided.\n",
+ "For now, let's use both approaches:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "d2d00708",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "model = DeterministicOde(stateList, paramList, ode=odeList)\n",
+ "model2 = DeterministicOde(stateList, paramList, transition=transList)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "db3a166f",
+ "metadata": {},
+ "source": [
+ "We can verify the model equations are what we'd expect by using the `get_ode_eqn()` function."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "39171530",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(model.get_ode_eqn())\n",
+ "print(model2.get_ode_eqn())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bc3f2c5c",
+ "metadata": {},
+ "source": [
+ "where we can see that building the model via equations or transitions results in the same equations corresponding to their respective $S$, $I$ and $R$ state.\n",
+ "From now on, we proceed with just `model`, safe in the knowledge that they are the same."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0d06bdf9",
+ "metadata": {},
+ "source": [
+ "```{tip}\n",
+ "In addition to showing the equation form of the ODEs, we can also display them as either symbols or latex code, which can save some extra typing when porting the equations to another document.\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "83ea4dcb",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "model.print_ode()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "6c05b408",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "model.print_ode(True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3e7b28f3",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bdcecb64",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7b7f8220",
+ "metadata": {},
+ "source": [
+ "## Initial value problem\n",
+ "\n",
+ "We can calculate the time evolution of the system given the values of the initial conditions and parameters."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f6f00b34",
+ "metadata": {},
+ "source": [
+ "1. Define the model parameters. We can call `parameters` to check what is required"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c163aa2f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "model.parameters"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9f56e201",
+ "metadata": {},
+ "source": [
+ "we then pass them to the class via a list of tuples"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "696476fb",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "paramEval = [('beta',0.5), ('gamma',1.0/3.0)]\n",
+ "model.parameters = paramEval"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f1a505b5",
+ "metadata": {},
+ "source": [
+ "and can verify that this was successful"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a94733b2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "model.parameters"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e80ac7f8",
+ "metadata": {},
+ "source": [
+ "2. Provide initial conditions for the states."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2b45e43c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "i0=1e-6\n",
+ "initialState = [1-i0, i0, 0]\n",
+ "\n",
+ "model.ode(state=initialState, t=1)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "22c4063f",
+ "metadata": {},
+ "source": [
+ "```{note}\n",
+ "Fractional SIR models are subject to the constraint $S(t)+I(t)+R(t)=1$. It is up to the user to ensure that the initial conditions adhere to any constraints.\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "105524d4",
+ "metadata": {},
+ "source": [
+ "\n",
+ "3. Implement an ODE solver.\n",
+ "\n",
+ "We are well equipped to solve an initial value problem, using the standard numerical integrator such as `odeint ` from `scipy.integrate`. We also used `matplotlib.pyplot` for plotting and `linspace ` to create the time vector."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "6c9e662c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import scipy.integrate\n",
+ "import numpy\n",
+ "\n",
+ "t = numpy.linspace(0, 150, 100)\n",
+ "\n",
+ "solution = scipy.integrate.odeint(model.ode, initialState, t)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "75ccb76e",
+ "metadata": {},
+ "source": [
+ "We can plot our solution to observe a standard SIR shape."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5badfc50",
+ "metadata": {
+ "tags": [
+ "hide-input"
+ ]
+ },
+ "outputs": [],
+ "source": [
+ "import matplotlib.pyplot as plt\n",
+ "\n",
+ "plt.figure()\n",
+ "plt.plot(t, solution[:,0], label='S')\n",
+ "plt.plot(t, solution[:,1], label='I')\n",
+ "plt.plot(t, solution[:,2], label='R')\n",
+ "plt.xlabel('Time')\n",
+ "plt.ylabel('Population proportion')\n",
+ "plt.title('Standard SIR model')\n",
+ "plt.legend(loc=0)\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2caa261e",
+ "metadata": {},
+ "source": [
+ "Alternatively, we can integrate and plot via the **ode** object which we initialized earlier."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b71d2931",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "model.initial_values = (initialState, t[0])\n",
+ "\n",
+ "model.parameters = paramEval\n",
+ "\n",
+ "solution = model.integrate(t[1::])\n",
+ "\n",
+ "model.plot()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "58b41bf9",
+ "metadata": {},
+ "source": [
+ "We could solve the ODEs above using the Jacobian as well. Unfortunately, it does not help because the number of times the Jacobian was evaluated was zero, as expected given that our set of equations are not stiff."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e887ac3e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#TODO what does this show?\n",
+ "%timeit solution1, output1 = scipy.integrate.odeint(model.ode, initialState, t, full_output=True)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5d3c7ddd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "\n",
+ "%timeit solution2, output2 = scipy.integrate.odeint(model.ode, initialState, t, Dfun=model.jacobian, mu=None, ml=None, full_output=True)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2de91b9e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "\n",
+ "%timeit solution3, output3 = model.integrate(t, full_output=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f4707964",
+ "metadata": {},
+ "source": [
+ "It is important to note that we return our Jacobian as a dense square matrix. Hence, the two argument (mu,ml) for the ODE solver was set to `None` to let it know the output explicitly."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a0e384cc",
+ "metadata": {},
+ "source": [
+ "## Solving the forward sensitivity equation\n",
+ "\n",
+ "The sensitivity equations are also solved as an initial value problem. Let us redefine the model in the standard SIR order and we solve it with the sensitivity all set at zero, i.e. we do not wish to infer the initial value of the states."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5b637cee",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "stateList = ['S', 'I', 'R']\n",
+ "\n",
+ "model = DeterministicOde(stateList, paramList, ode=odeList)\n",
+ "\n",
+ "initialState = [1, 1.27e-6, 0]\n",
+ "\n",
+ "paramEval = [('beta', 0.5), ('gamma', 1.0/3.0)]\n",
+ "\n",
+ "model.parameters = paramEval\n",
+ "\n",
+ "solution = scipy.integrate.odeint(model.ode_and_sensitivity, numpy.append(initialState, numpy.zeros(6)), t)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "8b63ba62",
+ "metadata": {
+ "tags": [
+ "hide-output"
+ ]
+ },
+ "outputs": [],
+ "source": [
+ "{\n",
+ " \"tags\": [\n",
+ " \"hide-input\",\n",
+ " ]\n",
+ "}\n",
+ "f,axarr = plt.subplots(3,3);\n",
+ "\n",
+ "f.text(0.5,0.975,'SIR with forward sensitivity solved via ode',fontsize=16,horizontalalignment='center',verticalalignment='top')\n",
+ "\n",
+ "axarr[0,0].plot(t, solution[:,0])\n",
+ "\n",
+ "axarr[0,0].set_title('S')\n",
+ "\n",
+ "axarr[0,1].plot(t, solution[:,1])\n",
+ "\n",
+ "axarr[0,1].set_title('I')\n",
+ "\n",
+ "axarr[0,2].plot(t, solution[:,2]);\n",
+ "\n",
+ "axarr[0,2].set_title('R')\n",
+ "\n",
+ "axarr[1,0].plot(t, solution[:,3])\n",
+ "\n",
+ "axarr[1,0].set_title(r'state S parameter $beta$')\n",
+ "\n",
+ "axarr[2,0].plot(t, solution[:,4])\n",
+ "\n",
+ "axarr[2,0].set_title(r'state S parameter $gamma$')\n",
+ "\n",
+ "axarr[1,1].plot(t, solution[:,5])\n",
+ "\n",
+ "axarr[1,1].set_title(r'state I parameter $beta$')\n",
+ "\n",
+ "axarr[2,1].plot(t, solution[:,6])\n",
+ "\n",
+ "axarr[2,1].set_title(r'state I parameter $gamma$')\n",
+ "\n",
+ "axarr[1,2].plot(t, solution[:,7])\n",
+ "\n",
+ "axarr[1,2].set_title(r'state R parameter $beta$')\n",
+ "\n",
+ "axarr[2,2].plot(t, solution[:,8])\n",
+ "\n",
+ "axarr[2,2].set_title(r'state R parameter $gamma$')\n",
+ "\n",
+ "plt.tight_layout()\n",
+ "\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2f64d869",
+ "metadata": {},
+ "source": [
+ "This concludes the introductory example and we will be moving on to look at parameter estimation next in {doc}`estimate1` and the most important part in terms of setting up the ODE object; defining the equations in various different ways in {doc}`transition`."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3.9.15 ('sphinx-doc')",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.15"
+ },
+ "vscode": {
+ "interpreter": {
+ "hash": "4dc1e323c80fe09539c74ad5c5a7c7d8d9ff99e04f7b3dbd3680daf878629d6e"
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/docs/notebooks/model_solver.ipynb b/docs/notebooks/model_solver.ipynb
new file mode 100644
index 00000000..2c018cd2
--- /dev/null
+++ b/docs/notebooks/model_solver.ipynb
@@ -0,0 +1,827 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Finding ODE solutions\n",
+ "\n",
+ "PyGOM allows the user to evaluate both the **deterministic** and **stochastic** time evolution of their ODE system using the class methods {func}`solve_determ` and {func}`solve_stochast` respectively.\n",
+ "These methods work with both fixed and random parameters as introduced in the {doc}`previous section <../notebooks/model_params>`.\n",
+ "\n",
+ "We begin by defining the series of ODEs and parameters which define our SIR system.\n",
+ "This we do from scratch rather than loading in a previously defined model in order to present a more comprehensive example of the workflow."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "1679a48a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from pygom import SimulateOde, Transition, TransitionType\n",
+ "import matplotlib.pyplot as plt\n",
+ "import numpy as np\n",
+ "import random\n",
+ "\n",
+ "###################\n",
+ "# ODE specification\n",
+ "###################\n",
+ "\n",
+ "# Define SIR model\n",
+ "stateList = ['S', 'I', 'R']\n",
+ "paramList = ['beta', 'gamma', 'N']\n",
+ "transitionList = [Transition(origin='S', destination='I', equation='beta*S*I/N', transition_type=TransitionType.T),\n",
+ " Transition(origin='I', destination='R', equation='gamma*I', transition_type=TransitionType.T)]\n",
+ "\n",
+ "n_pop=1e4 # Total population is fixed\n",
+ "\n",
+ "############\n",
+ "# Parameters\n",
+ "############\n",
+ "\n",
+ "beta_mn=0.35 # Infectivity, beta. Gives the actual value for fixed params and mean for random distribution.\n",
+ "gamma_mn=0.25 # Recovery rate, gamma.\n",
+ "\n",
+ "#######\n",
+ "# Fixed\n",
+ "#######\n",
+ "fixed_param_set=[('beta', beta_mn), ('gamma', gamma_mn), ('N', n_pop)]\n",
+ "\n",
+ "########\n",
+ "# Random\n",
+ "########\n",
+ "\n",
+ "# Recovery rate, gamma\n",
+ "gamma_var=(gamma_mn/10)**2 # Set the standard deviation to be 1/10th of the mean value\n",
+ "gamma_shape=(gamma_mn**2)/gamma_var\n",
+ "gamma_rate=gamma_mn/gamma_var\n",
+ "\n",
+ "# Infectivity parameter, beta\n",
+ "beta_var=(beta_mn/10)**2 # Set the standard deviation to be 1/10th of the mean value\n",
+ "beta_shape=(beta_mn**2)/beta_var\n",
+ "beta_rate=beta_mn/beta_var\n",
+ "\n",
+ "from pygom.utilR import rgamma\n",
+ "random_param_set = dict() # container for random param set\n",
+ "random_param_set['gamma'] = (rgamma,{'shape':gamma_shape, 'rate':gamma_rate})\n",
+ "random_param_set['beta'] = (rgamma,{'shape':beta_shape, 'rate':beta_rate})\n",
+ "random_param_set['N'] = n_pop"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "fa186a81",
+ "metadata": {},
+ "source": [
+ "Since this notebook will involve stochastic processes, we set the random number generator seed to make outputs reproducible."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "5dc7996d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "np.random.seed(1)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e976c853",
+ "metadata": {},
+ "source": [
+ "In order to determine the time evolution of the system, we must supply initial conditions as well as the desired time points for the numerical solver.\n",
+ "Timesteps should be sufficiently short to reduce numerical integration errors, but not too short such that the computational time costs become too large."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "9e31cc4f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import math\n",
+ "\n",
+ "# Initial conditions\n",
+ "i0=10\n",
+ "x0 = [n_pop-i0, i0, 0]\n",
+ "\n",
+ "# Time range and increments\n",
+ "tmax=200 # maximum time over which to run solver\n",
+ "dt=0.1 # timestep\n",
+ "n_timestep=math.ceil(tmax/dt) # number of iterations\n",
+ "t = np.linspace(0, tmax, n_timestep) # times at which solution will be evaluated"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ecd39e70",
+ "metadata": {},
+ "source": [
+ "## Deterministic evolution\n",
+ "\n",
+ "To solve for the deterministic time evolution of the system, PyGOM uses {func}`scipy.integrate.odeint` which is wrapped by the member function {func}`solve_determ`.\n",
+ "We begin with the simple (and likely familiar) case of fixed parameters.\n",
+ "\n",
+ "### Fixed parameters\n",
+ "\n",
+ "First, we initialise a {class}`SimulateOde` object with our fixed parameters, `fixed_param_set`:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "657a7646",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Set up pygom object (D_F suffix implies Deterministic_Fixed)\n",
+ "ode_D_F = SimulateOde(stateList, paramList, transition=transitionList)\n",
+ "ode_D_F.initial_values = (x0, t[0]) # (initial state conditions, initial timepoint)\n",
+ "ode_D_F.parameters=fixed_param_set"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "801dbfd6",
+ "metadata": {},
+ "source": [
+ "The solution is then found via `solve_determ`, specifying the required time steps (not including the initial one)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "9f612095",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "solution_D_F = ode_D_F.solve_determ(t[1::])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a4d12509",
+ "metadata": {},
+ "source": [
+ "Plotting the output yields the familiar result, where infecteds initially increase in number exponentially until critical depletion of susceptibles results in epidemic decline."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2668b8ad",
+ "metadata": {
+ "tags": [
+ "hide-input"
+ ]
+ },
+ "outputs": [],
+ "source": [
+ "f, axarr = plt.subplots(1,3, layout='constrained', figsize=(10, 2.5))\n",
+ "\n",
+ "# Plot colours\n",
+ "colours=[\"C1\", \"C0\", \"C2\"]\n",
+ "\n",
+ "for i in range(0, 3):\n",
+ " axarr[i].plot(t, solution_D_F[:,i], color=colours[i])\n",
+ "\n",
+ "for idx, state in enumerate(stateList):\n",
+ " axarr[idx].set_ylabel(state, rotation=0)\n",
+ " axarr[idx].set_xlabel('Time')\n",
+ "\n",
+ "axarr[1].set_title(\"Deterministic simulation with fixed parameters\")\n",
+ "\n",
+ "plt.show()\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0e516f09",
+ "metadata": {},
+ "source": [
+ "### Random parameters\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bb53913b",
+ "metadata": {},
+ "source": [
+ "We now solve the same system, but for 1000 repetitions using randomly drawn parameters for each simulation.\n",
+ "This time we initialise the parameters with`random_param_set`, but still use {func}`solve_determ` to find solutions as before"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e7f1ee87",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Set up pygom object\n",
+ "ode_D_R = SimulateOde(stateList, paramList, transition=transitionList)\n",
+ "ode_D_R.initial_values = (x0, t[0])\n",
+ "ode_D_R.parameters=random_param_set\n",
+ "\n",
+ "n_param_draws=1000 # number of parameters to draw\n",
+ "Ymean, solution_D_R = ode_D_R.solve_determ(t[1::], n_param_draws, full_output=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2221479d",
+ "metadata": {},
+ "source": [
+ "```{note}\n",
+ "A message may be printed above where PyGOM is trying to connect to an\n",
+ "mpi backend, as our module has the capability to compute in parallel\n",
+ "using the IPython.\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9bb7c942",
+ "metadata": {},
+ "source": [
+ "Here we visualise the output in 2 ways, first by viewing 50 randomly selected trajectories and secondly by viewing the confidence intervals (here 95% and 50%) and median calculated over the full 1000 solutions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "8849566b",
+ "metadata": {
+ "tags": [
+ "hide-input"
+ ]
+ },
+ "outputs": [],
+ "source": [
+ "y_D_R=np.dstack(solution_D_R) # unpack the data\n",
+ "\n",
+ "#########################\n",
+ "# Individual trajectories\n",
+ "#########################\n",
+ "\n",
+ "# Select 50 simulations to plot\n",
+ "i_rand=random.sample(range(n_param_draws), 50)\n",
+ "\n",
+ "######################\n",
+ "# Confidence intervals\n",
+ "######################\n",
+ "\n",
+ "# Calculate 95%, 50% CIs and median.\n",
+ "y_D_R_lolo=np.percentile(y_D_R, 2.5, axis=2)\n",
+ "y_D_R_lo=np.percentile(y_D_R, 25, axis=2)\n",
+ "y_D_R_hi=np.percentile(y_D_R, 75, axis=2)\n",
+ "y_D_R_hihi=np.percentile(y_D_R, 97.5, axis=2)\n",
+ "y_D_R_md=np.percentile(y_D_R, 50, axis=2)\n",
+ "\n",
+ "f, axarr = plt.subplots(2,3, layout='constrained', figsize=(10, 5))\n",
+ "\n",
+ "# Plot colours\n",
+ "colours=[\"C1\", \"C0\", \"C2\"]\n",
+ "\n",
+ "for i in range(0,3):\n",
+ " # Plot individual trajectories\n",
+ " for j in i_rand:\n",
+ " axarr[0][i].plot(t, y_D_R[:,i,j], color=colours[i], alpha=0.2)\n",
+ "\n",
+ " # Plot CI's\n",
+ " axarr[1][i].fill_between(t, y_D_R_lolo[:,i], y_D_R_hihi[:,i], alpha=0.2, facecolor=colours[i])\n",
+ " axarr[1][i].fill_between(t, y_D_R_lo[:,i], y_D_R_hi[:,i], alpha=0.4, facecolor=colours[i])\n",
+ " axarr[1][i].plot(t, y_D_R_md[:,i], color=colours[i])\n",
+ "\n",
+ "# Add titles\n",
+ "for idx, state in enumerate(stateList):\n",
+ " axarr[0][idx].set_ylabel(state, rotation=0)\n",
+ " axarr[1][idx].set_ylabel(state, rotation=0)\n",
+ " axarr[0][idx].set_xlabel('Time')\n",
+ " axarr[1][idx].set_xlabel('Time')\n",
+ "\n",
+ "axarr[0][1].set_title(\"50 deterministic simulations, each with randomly drawn parameters\")\n",
+ "axarr[1][1].set_title(\"Median (line), 50% CI (dark shaded) and 95% CI (light shaded) over 1000 simulations\")\n",
+ "\n",
+ "plt.show()\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "39d66f12",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2f01a3d9",
+ "metadata": {},
+ "source": [
+ "## Stochastic evolution\n",
+ "\n",
+ "The approximation that numbers of individuals in each state may be treated as a continuum break down when their sizes are small.\n",
+ "In this regime, transitions between states do not represent continuous flows but are instead stochastic events that occur at rates governed by the current state of the system. The simplifying assumption that waiting times for these events to occur are exponentially distributed (memoryless), allows for quicker evaluation of the dynamics.\n",
+ "\n",
+ "Two common algorithms have been implemented for use during simulation; the reaction method {cite}`Gillespie1977` and the $\\tau$-Leap method\n",
+ "{cite}`Cao2006`.\n",
+ "The two change interactively depending on the size of the states.\n",
+ "\n",
+ "### Fixed parameters\n",
+ "\n",
+ "As previously, we define a model and pass our fixed parameters `fixed_param_set`.\n",
+ "However, this time we employ the function `solve_stochast` to allow for stochastic time evolution:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "118a869b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Set up pygom object\n",
+ "ode_S_F = SimulateOde(stateList, paramList, transition=transitionList)\n",
+ "ode_S_F.initial_values = (x0, t[0])\n",
+ "\n",
+ "n_sim=1000 # number of simulations\n",
+ "ode_S_F.parameters = fixed_param_set\n",
+ "solution_S_F, simT = ode_S_F.solve_stochast(t, n_sim, full_output=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "920d3a0b",
+ "metadata": {},
+ "source": [
+ "Before we inspect the epidemic time series, we plot the distribution of final epidemic sizes."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "a3698023",
+ "metadata": {
+ "tags": [
+ "hide-input"
+ ]
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ ""
+ ]
+ },
+ "execution_count": 5,
+ "metadata": {},
+ "output_type": "execute_result"
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAArkAAAGHCAYAAAC0xkr0AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjguNCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8fJSN1AAAACXBIWXMAAA9hAAAPYQGoP6dpAAA2DElEQVR4nO3de1yUdf7//+cIOILCeGYgUbDwiKdETbTQTNI0LWvX8mzazXO6uWuamWi7UH7K7JNpq5toa0q7m7ZutiaeqF20zEMes/rmsSTMEMwMFN6/P/wxH0egAIEZLx/32+263Zr39b6u6zXvYeLpm/dcYzPGGAEAAAAWUsXTBQAAAADljZALAAAAyyHkAgAAwHIIuQAAALAcQi4AAAAsh5ALAAAAyyHkAgAAwHIIuQAAALAcQi4AAAAsh5ALWNzy5ctls9mK3bZt21bqc27btq3Ex44YMULh4eGlvkZ5iI+Pl81mK7fzvf3222rZsqX8/f1ls9m0d+/ecr9GcWw2m+Lj46/7PKV57SpaUbWU5efl22+/VXx8vPbu3Vuq44q6ls1m08SJE0t1nl+zaNEiLV++vFD7sWPHZLPZitwH4Pr5eroAAJUjKSlJzZo1K9TeokWLUp/r9ttv1/bt28t0bGUaPXq0evXqVS7nOnPmjIYOHapevXpp0aJFstvtatKkSbleozJ4+2s3a9YsTZ48uVTHfPvtt5ozZ47Cw8PVtm3bCr1WWSxatEh169bViBEj3NpDQkK0fft23XrrrRVeA3AzIuQCN4moqChFR0eXy7mCgoJ0xx13lMu5KlKDBg3UoEGDcjnXF198oUuXLmnIkCGKjY11tQcEBJTbNSqDt792lRH4fvrpJwUEBHg8XNrtdq9+LYAbHcsVALgU/Kn2z3/+s5o0aSK73a4WLVooOTnZrV9xf/Jevny5mjZtKrvdrubNm+vNN98s8jq5ubn64x//qGbNmslut6tevXoaOXKkzpw549YvPDxcffv21Xvvvad27drJ399fzZs313vvvee6XvPmzVW9enV17NhRn376qdvxxS0lWLVqlTp37qwaNWqoRo0aatu2rd54441ix2XEiBHq2rWrJGngwIGy2Wzq1q1bsdcoqHvDhg26/fbb5e/vr2bNmmnZsmVu/c6cOaPx48erRYsWqlGjhurXr6+7775bH330UbG1/JrFixerTZs2qlGjhgIDA9WsWTM9/fTTrv3XvnYFfzIvbrvapk2b1KNHDwUFBSkgIEBdunTR5s2bS1TX559/rl69eikgIEB169bV2LFjdf78+UL9ilpC8Pe//12dOnWSw+FQQECAGjdurMcee8z1fDp06CBJGjlypKvugqUdI0aMUI0aNbR//37FxcUpMDBQPXr0KPZaBX7tPVDcz1bB8qBjx45JuvKzcPDgQaWmprpqK7hmccsV/vOf/6hHjx4KDAxUQECAYmJitH79+iKvs3XrVo0bN05169ZVnTp1NGDAAH377bdFPifgZsNMLnCTyMvL0+XLl93abDabfHx83NrWrVunrVu3au7cuapevboWLVqkRx99VL6+vnr44YeLPf/y5cs1cuRI9e/fXy+99JKysrIUHx+vnJwcVanyf/+ezs/PV//+/fXRRx9p2rRpiomJ0fHjxzV79mx169ZNn376qfz9/V39P/vsM82YMUMzZ86Uw+HQnDlzNGDAAM2YMUObN29WQkKCbDabnnrqKfXt21dHjx51O/5azz77rJ577jkNGDBAU6dOlcPh0IEDB3T8+PFij5k1a5Y6duyoCRMmKCEhQd27d1dQUFCx/Qvqnjp1qqZPn67g4GD95S9/0ahRo3TbbbfprrvukiT98MMPkqTZs2fL6XTqxx9/1Nq1a9WtWzdt3rzZFaRLKjk5WePHj9ekSZP04osvqkqVKvrqq6906NChYo8p+JP51c6cOaMhQ4bolltucbWtXLlSw4YNU//+/bVixQr5+fnpz3/+s+6991598MEHruBYlO+++06xsbHy8/PTokWLFBwcrLfeeqtEa1+3b9+ugQMHauDAgYqPj1e1atV0/PhxbdmyRdKV5RdJSUkaOXKknnnmGfXp00eS3GbXc3Nz1a9fP40ZM0bTp08v9D64VlnfA0VZu3atHn74YTkcDi1atEjSlRnc4qSmpqpnz55q3bq13njjDdntdi1atEj333+/Vq9erYEDB7r1Hz16tPr06aNVq1bp5MmT+sMf/qAhQ4a4xge4qRkAlpaUlGQkFbn5+Pi49ZVk/P39TXp6uqvt8uXLplmzZua2225ztW3dutVIMlu3bjXGGJOXl2dCQ0PN7bffbvLz8139jh07Zvz8/EyjRo1cbatXrzaSzDvvvON27Z07dxpJZtGiRa62Ro0aGX9/f3Pq1ClX2969e40kExISYi5cuOBqf/fdd40ks27dOlfb7NmzzdX/m/v666+Nj4+PGTx4cEmHr9Bz/vvf/+7Wfu01CuquVq2aOX78uKvt4sWLpnbt2mbMmDHFXuPy5cvm0qVLpkePHubBBx902yfJzJ49+xdrnDhxoqlZs2aJnkfBa3etCxcumI4dO5qQkBBz7NgxV1vt2rXN/fff79Y3Ly/PtGnTxnTs2PEXr/nUU08Zm81m9u7d69bes2fPQrUMHz7c7eflxRdfNJLMuXPnij1/wc9OUlJSoX3Dhw83ksyyZcuK3Hf1tYwp+XugqNfdmP97vx09etTV1rJlSxMbG1uo79GjRwvVfccdd5j69eub8+fPu10/KirKNGjQwPX+KrjO+PHj3c45b948I8mcPn260PWAmw3LFYCbxJtvvqmdO3e6bR9//HGhfj169FBwcLDrsY+PjwYOHKivvvpKp06dKvLcR44c0bfffqtBgwa5/Qm3UaNGiomJcev73nvvqWbNmrr//vt1+fJl19a2bVs5nc5CSyDatm3rNqPYvHlzSVK3bt0UEBBQqP2XZmRTUlKUl5enCRMmFNunvLRt21YNGzZ0Pa5WrZqaNGlSqL7XX39dt99+u6pVqyZfX1/5+flp8+bNOnz4cKmv2bFjR507d06PPvqo/vnPf+r7778v1fF5eXkaOHCgDh8+rPfff1+NGjWSJKWlpemHH37Q8OHD3V6z/Px89erVSzt37tSFCxeKPe/WrVvVsmVLtWnTxq190KBBv1pTwVKE3/72t/rb3/6mb775plTPqcBDDz1U4r5leQ+UhwsXLujjjz/Www8/rBo1arhdf+jQoTp16pSOHDnidky/fv3cHrdu3VrSL78PgJsFIRe4STRv3lzR0dFuW/v27Qv1czqdxbadPXu2yHMXtP/SsQW+++47nTt3TlWrVpWfn5/blp6eXiiY1a5d2+1x1apVf7H9559/LrJGSa41v5XxQbE6deoUarPb7bp48aLr8fz58zVu3Dh16tRJ77zzjnbs2KGdO3eqV69ebv1KaujQoVq2bJmOHz+uhx56SPXr11enTp2UkpJSouPHjh2rDRs26B//+IfbXQq+++47SdLDDz9c6DV74YUXZIxxLb0oytmzZ0v0s1GUu+6
6S++++64uX76sYcOGqUGDBoqKitLq1atL9JykKx8O/LXlJb9W16+9B8pDZmamjDEKCQkptC80NLTI61/7c1awFKIsPz+A1bAmF4Cb9PT0YtuKCm5Xt//SsQUKPiCzYcOGIs8VGBhYqnpLo169epKkU6dOKSwsrMKuU1IrV65Ut27dtHjxYrf2oj6QVVIjR47UyJEjdeHCBX344YeaPXu2+vbtqy+++MI1M1uU+Ph4/eUvf1FSUpLi4uLc9tWtW1eS9OqrrxZ7N4CrZz6vVadOnRL9bBSnf//+6t+/v3JycrRjxw4lJiZq0KBBCg8PV+fOnX/1+NLex7gk74Fq1apJknJyctzW2JZ29vxqtWrVUpUqVXT69OlC+wo+TFbwWgD4dczkAnCzefNm18yddOVP2G+//bZuvfXWYmdAmzZtqpCQEK1evVrGGFf78ePHlZaW5ta3b9++Onv2rPLy8grNLEdHR6tp06YV88QkxcXFycfHp1Co9BSbzVboQ0j79u0r9EGwsqhevbp69+6tmTNnKjc3VwcPHiy27xtvvKE5c+Zo7ty5he7lKkldunRRzZo1dejQoSJfs+joaNdMelG6d++ugwcP6rPPPnNrX7VqVamek91uV2xsrF544QVJ0p49e1ztUvnNXpbkPVBwh4R9+/a5Hfuvf/2ryLpLUlv16tXVqVMnrVmzxq1/fn6+Vq5cqQYNGqhJkyZleUrATYmZXOAmceDAgSI/VX7rrbe6ZjilKzNFd999t2bNmuX6ZPnnn39e6BZKV6tSpYqee+45jR49Wg8++KAef/xxnTt3TvHx8YX+9PvII4/orbfe0n333afJkyerY8eO8vPz06lTp7R161b1799fDz74YPk98auEh4fr6aef1nPPPaeLFy/q0UcflcPh0KFDh/T9999rzpw5FXLd4vTt21fPPfecZs+erdjYWB05ckRz585VRETEr94BoCiPP/64/P391aVLF4WEhCg9PV2JiYlyOByuta3X2r59u8aOHasuXbqoZ8+e2rFjh9v+O+64QzVq1NCrr76q4cOH64cfftDDDz+s+vXr68yZM/rss8905syZX/yHw5QpU7Rs2TL16dNHf/zjH113V/j8889/9Tk9++yzOnXqlHr06KEGDRro3LlzeuWVV+Tn5+e6X/Gtt94qf39/vfXWW2revLlq1Kih0NBQ15/4S6sk74H77rtPtWvX1qhRozR37lz5+vpq+fLlOnnyZKHztWrVSsnJyXr77bfVuHFjVatWTa1atSry2omJierZs6e6d++u3//+96pataoWLVqkAwcOaPXq1ZXy7XqAVRBygZvEyJEji2xfunSpRo8e7Xrcr18/tWzZUs8884xOnDihW2+9VW+99VahWxdda9SoUZKkF154QQMGDHAFytTUVLcPk/n4+GjdunV65ZVX9Ne//lWJiYny9fVVgwYNFBsbW+wv//Iyd+5cRUZG6tVXX9XgwYPl6+uryMhIPfHEExV63aLMnDlTP/30k9544w3NmzdPLVq00Ouvv661a9eW6Wt377zzTi1fvlx/+9vflJmZqbp166pr165688033f4hc7UjR47o8uXL+u9//1vkn/4LZuaHDBmihg0bat68eRozZozOnz+v+vXrq23btkXO/l7N6XQqNTVVkydP1rhx4xQQEKAHH3xQCxcuVP/+/X/x2E6dOunTTz/VU089pTNnzqhmzZqKjo7Wli1b1LJlS0lX1twuW7ZMc+bMUVxcnC5duqTZs2eX+WuQS/IeCAoK0oYNGzRlyhQNGTJENWvW1OjRo9W7d2+395MkzZkzR6dPn9bjjz+u8+fPq1GjRq776F4rNjZWW7Zs0ezZszVixAjl5+erTZs2Wrdunfr27Vum5wPcrGzm6r8tArip2Ww2TZgwQQsXLvR0KQAAXBfW5AIAAMByCLkAAACwHNbkAnBh9RIAwCqYyQUAAIDlEHIBAABgOYRcAAAAWA5rcnXl22S+/fZbBQYGcqNtAAAAL2SM0fnz5xUaGqoqVX59npaQqyvfCe4N32MPAACAX3by5Mliv2b+aoRcSYGBgZKuDFpQUJCHqwEAAMC1srOzFRYW5sptv4aQK7mWKAQFBRFyAQAAvFhJl5bywTMAAABYDiEXAAAAlkPIBQAAgOUQcgEAAGA5hFwAAABYDiEXAAAAlkPIBQAAgOUQcgEAAGA5hFwAAABYDiEXAAAAluPRkPvhhx/q/vvvV2hoqGw2m9599123/cYYxcfHKzQ0VP7+/urWrZsOHjzo1icnJ0eTJk1S3bp1Vb16dfXr10+nTp2qxGcBAAAAb+PryYtfuHBBbdq00ciRI/XQQw8V2j9v3jzNnz9fy5cvV5MmTfTHP/5RPXv21JEjRxQYGChJmjJliv71r38pOTlZderU0dSpU9W3b1/t2rVLPj4+lf2UAACAhYVPX19k+7Hn+1RyJfg1Hg25vXv3Vu/evYvcZ4zRggULNHPmTA0YMECStGLFCgUHB2vVqlUaM2aMsrKy9MYbb+ivf/2r7rnnHknSypUrFRYWpk2bNunee++ttOcCAAAA7+G1a3KPHj2q9PR0xcXFudrsdrtiY2OVlpYmSdq1a5cuXbrk1ic0NFRRUVGuPkXJyclRdna22wYAAADr8NqQm56eLkkKDg52aw8ODnbtS09PV9WqVVWrVq1i+xQlMTFRDofDtYWFhZVz9QAAAPAkrw25BWw2m9tjY0yhtmv9Wp8ZM2YoKyvLtZ08ebJcagUAAIB38NqQ63Q6JanQjGxGRoZrdtfpdCo3N1eZmZnF9imK3W5XUFCQ2wYAAADr8NqQGxERIafTqZSUFFdbbm6uUlNTFRMTI0lq3769/Pz83PqcPn1aBw4ccPUBAADAzcejd1f48ccf9dVXX7keHz16VHv37lXt2rXVsGFDTZkyRQkJCYqMjFRkZKQSEhIUEBCgQYMGSZIcDodGjRqlqVOnqk6dOqpdu7Z+//vfq1WrVq67LQAAAODm49GQ++mnn6p79+6ux08++aQkafjw4Vq+fLmmTZumixcvavz48crMzFSnTp20ceNG1z1yJenll1+Wr6+vfvvb3+rixYvq0aOHli9fzj1yAQAAbmI2Y4zxdBGelp2dLYfDoaysLNbnAgCAYvFlEJ5T2rzmtWtyAQAAgLIi5AIAAMByCLkAAACwHEIuAAAALIeQCwAAAMsh5AIAAMByCLkAAACwHEIuAAAALMej33gGAADgjYr70gfcOJjJBQAAgOUQcgEAAGA5hFwAAABYDiEXAAAAlkPIBQAAgOUQcgEAAGA5hFwAAABYDiEXAAAAlkPIBQAAgOUQcgEAAGA5fK0vAAC4qfEVvtbETC4AAAAsh5ALAAAAy2G5AgAAwHUqasnDsef7eKASFGAmFwAAAJZDyAUAAIDlEHIBAABgOYRcAAAAWA4hFwAAAJZDyAUAAIDlEHIBAABgOYRcAAAAWA4hFwAAAJZDyAUAAIDlEHIBAABgOYRcAAAAWA4hFwAAAJZDyAUAAIDlEHIBAABgOYRcAAAAWA4hFwAAAJZDyAUAAIDlEH
IBAABgOYRcAAAAWA4hFwAAAJZDyAUAAIDlEHIBAABgOb6eLgAAAKAyhE9f7+kSUIm8eib38uXLeuaZZxQRESF/f381btxYc+fOVX5+vquPMUbx8fEKDQ2Vv7+/unXrpoMHD3qwagAAAHiaV4fcF154Qa+//roWLlyow4cPa968efqf//kfvfrqq64+8+bN0/z587Vw4ULt3LlTTqdTPXv21Pnz5z1YOQAAADzJq0Pu9u3b1b9/f/Xp00fh4eF6+OGHFRcXp08//VTSlVncBQsWaObMmRowYICioqK0YsUK/fTTT1q1apWHqwcAAICneHXI7dq1qzZv3qwvvvhCkvTZZ5/pP//5j+677z5J0tGjR5Wenq64uDjXMXa7XbGxsUpLSyv2vDk5OcrOznbbAAAAYB1e/cGzp556SllZWWrWrJl8fHyUl5enP/3pT3r00UclSenp6ZKk4OBgt+OCg4N1/PjxYs+bmJioOXPmVFzhAAAA8Civnsl9++23tXLlSq1atUq7d+/WihUr9OKLL2rFihVu/Ww2m9tjY0yhtqvNmDFDWVlZru3kyZMVUj8AAAA8w6tncv/whz9o+vTpeuSRRyRJrVq10vHjx5WYmKjhw4fL6XRKujKjGxIS4jouIyOj0Ozu1ex2u+x2e8UWDwAAAI/x6pncn376SVWquJfo4+PjuoVYRESEnE6nUlJSXPtzc3OVmpqqmJiYSq0VAAAA3sOrZ3Lvv/9+/elPf1LDhg3VsmVL7dmzR/Pnz9djjz0m6coyhSlTpighIUGRkZGKjIxUQkKCAgICNGjQIA9XDwAAAE/x6pD76quvatasWRo/frwyMjIUGhqqMWPG6Nlnn3X1mTZtmi5evKjx48crMzNTnTp10saNGxUYGOjBygEAAOBJNmOM8XQRnpadnS2Hw6GsrCwFBQV5uhwAAFABvOVrfY8938fTJdyQSpvXvHpNLgAAAFAWhFwAAABYDiEXAAAAlkPIBQAAgOUQcgEAAGA5hFwAAABYDiEXAAAAlkPIBQAAgOUQcgEAAGA5hFwAAABYDiEXAAAAlkPIBQAAgOUQcgEAAGA5hFwAAABYDiEXAAAAlkPIBQAAgOUQcgEAAGA5hFwAAABYDiEXAAAAlkPIBQAAgOUQcgEAAGA5hFwAAABYDiEXAAAAlkPIBQAAgOUQcgEAAGA5hFwAAABYDiEXAAAAlkPIBQAAgOX4eroAAACAm0n49PVFth97vk8lV2JtzOQCAADAcgi5AAAAsBxCLgAAACyHkAsAAADLIeQCAADAcgi5AAAAsBxCLgAAACyHkAsAAADLIeQCAADAcgi5AAAAsBxCLgAAACyHkAsAAADLIeQCAADAcgi5AAAAsBxCLgAAACyHkAsAAADLIeQCAADAcnw9XQAAAEB5C5++3tMlwMOYyQUAAIDleH3I/eabbzRkyBDVqVNHAQEBatu2rXbt2uXab4xRfHy8QkND5e/vr27duungwYMerBgAAACe5tUhNzMzU126dJGfn5/+/e9/69ChQ3rppZdUs2ZNV5958+Zp/vz5WrhwoXbu3Cmn06mePXvq/PnzniscAAAAHuXVa3JfeOEFhYWFKSkpydUWHh7u+m9jjBYsWKCZM2dqwIABkqQVK1YoODhYq1at0pgxY4o8b05OjnJyclyPs7OzK+YJAAAAwCO8eiZ33bp1io6O1m9+8xvVr19f7dq109KlS137jx49qvT0dMXFxbna7Ha7YmNjlZaWVux5ExMT5XA4XFtYWFiFPg8AAABULq8OuV9//bUWL16syMhIffDBBxo7dqyeeOIJvfnmm5Kk9PR0SVJwcLDbccHBwa59RZkxY4aysrJc28mTJyvuSQAAAKDSefVyhfz8fEVHRyshIUGS1K5dOx08eFCLFy/WsGHDXP1sNpvbccaYQm1Xs9vtstvtFVM0AAAAPK5MM7mNGzfW2bNnC7WfO3dOjRs3vu6iCoSEhKhFixZubc2bN9eJEyckSU6nU5IKzdpmZGQUmt0FAADAzaNMIffYsWPKy8sr1J6Tk6Nvvvnmuosq0KVLFx05csSt7YsvvlCjRo0kSREREXI6nUpJSXHtz83NVWpqqmJiYsqtDgAAANxYSrVcYd26da7//uCDD+RwOFyP8/LytHnzZre7H1yv3/3ud4qJiVFCQoJ++9vf6pNPPtGSJUu0ZMkSSVeWKUyZMkUJCQmKjIxUZGSkEhISFBAQoEGDBpVbHQAAALixlCrkPvDAA5KuhMvhw4e77fPz81N4eLheeumlciuuQ4cOWrt2rWbMmKG5c+cqIiJCCxYs0ODBg119pk2bposXL2r8+PHKzMxUp06dtHHjRgUGBpZbHQAAwDvx9b0ojs0YY0p7UEREhHbu3Km6detWRE2VLjs7Ww6HQ1lZWQoKCvJ0OQAAoISsFHKPPd/H0yV4tdLmtTLdXeHo0aNlOQwAAACoFGW+hdjmzZu1efNmZWRkKD8/323fsmXLrrswAAAAoKzKFHLnzJmjuXPnKjo6WiEhIb94T1oAAACgspUp5L7++utavny5hg4dWt71AAAAANetTPfJzc3N5T60AAAA8FplCrmjR4/WqlWryrsWAAAAoFyUabnCzz//rCVLlmjTpk1q3bq1/Pz83PbPnz+/XIoDAAAAyqJMIXffvn1q27atJOnAgQNu+/gQGgAAADytTCF369at5V0HAAAAUG7KtCYXAAAA8GZlmsnt3r37Ly5L2LJlS5kLAgAAAK5XmUJuwXrcApcuXdLevXt14MABDR8+vDzqAgAAAMqsTCH35ZdfLrI9Pj5eP/7443UVBAAAAFyvcl2TO2TIEC1btqw8TwkAAACUWrmG3O3bt6tatWrleUoAAACg1Mq0XGHAgAFuj40xOn36tD799FPNmjWrXAoDAAAAyqpMIdfhcLg9rlKlipo2baq5c+cqLi6uXAoDAAAAyqpMITcpKam86wAAAADKTZlCboFdu3bp8OHDstlsatGihdq1a1dedQEAAABlVqaQm5GRoUceeUTbtm1TzZo1ZYxRVlaWunfvruTkZNWrV6+86wQAAABKrEx3V5g0aZKys7N18OBB/fDDD8rMzNSBAweUnZ2tJ554orxrBAAAAEqlTDO5GzZs0KZNm9S8eXNXW4sWLfTaa6/xwTMAAAB4XJlmcvPz8+Xn51eo3c/PT/n5+dddFAAAAHA9yhRy7777bk2ePFnffvutq+2bb77R7373O/Xo0aPcigMAAADKokzLFRYuXKj+/fsrPDxcYWFhstlsOnHihFq1aqWVK1eWd40AAACWFz59faG2Y8/38UAl1lCmkBsWFqbdu3crJSVFn3/+uYwxatGihe65557yrg8AAAAotVItV9iyZYtatGih7OxsSVLPnj01adIkPfHEE+rQoYNatmypjz76qEIKBQAAAEqqVDO5CxYs0OOPP66goKBC+xwOh8aMGaP58+frzjvvLLcCAQAAivpTPvBLSjWT+9lnn6lXr17F7o+Li9OuXbuuuygAAADgepQq5H733XdF3jqsgK+vr86cOXPdRQEAAADXo1Qh9
5ZbbtH+/fuL3b9v3z6FhIRcd1EAAADA9ShVyL3vvvv07LPP6ueffy607+LFi5o9e7b69u1bbsUBAAAAZVGqD54988wzWrNmjZo0aaKJEyeqadOmstlsOnz4sF577TXl5eVp5syZFVUrAAAAUCKlCrnBwcFKS0vTuHHjNGPGDBljJEk2m0333nuvFi1apODg4AopFAAAACipUn8ZRKNGjfT+++8rMzNTX331lYwxioyMVK1atSqiPgAAAKDUyvSNZ5JUq1YtdejQoTxrAQAAAMpFqT54BgAAANwICLkAAACwHEIuAAAALIeQCwAAAMsh5AIAAMByCLkAAACwHEIuAAAALIeQCwAAAMsh5AIAAMByCLkAAACwHEIuAAAALOeGCrmJiYmy2WyaMmWKq80Yo/j4eIWGhsrf31/dunXTwYMHPVckAAAAPO6GCbk7d+7UkiVL1Lp1a7f2efPmaf78+Vq4cKF27twpp9Opnj176vz58x6qFAAAAJ52Q4TcH3/8UYMHD9bSpUtVq1YtV7sxRgsWLNDMmTM1YMAARUVFacWKFfrpp5+0atUqD1YMAAAAT7ohQu6ECRPUp08f3XPPPW7tR48eVXp6uuLi4lxtdrtdsbGxSktLK/Z8OTk5ys7OdtsAAABgHb6eLuDXJCcna/fu3dq5c2ehfenp6ZKk4OBgt/bg4GAdP3682HMmJiZqzpw55VsoAAAoF+HT13u6BFiAV8/knjx5UpMnT9bKlStVrVq1YvvZbDa3x8aYQm1XmzFjhrKyslzbyZMny61mAAAAeJ5Xz+Tu2rVLGRkZat++vastLy9PH374oRYuXKgjR45IujKjGxIS4uqTkZFRaHb3ana7XXa7veIKBwAAgEd59Uxujx49tH//fu3du9e1RUdHa/Dgwdq7d68aN24sp9OplJQU1zG5ublKTU1VTEyMBysHAACAJ3n1TG5gYKCioqLc2qpXr646deq42qdMmaKEhARFRkYqMjJSCQkJCggI0KBBgzxRMgAAALyAV4fckpg2bZouXryo8ePHKzMzU506ddLGjRsVGBjo6dIAAADgITZjjPF0EZ6WnZ0th8OhrKwsBQUFebocAABuatxd4f8ce76Pp0vwGqXNa169JhcAAAAoC0IuAAAALIeQCwAAAMsh5AIAAMByCLkAAACwHEIuAAAALIeQCwAAAMsh5AIAAMByCLkAAACwHEIuAAAALIeQCwAAAMsh5AIAAMByCLkAAACwHEIuAAAALIeQCwAAAMsh5AIAAMByCLkAAACwHEIuAAAALIeQCwAAAMsh5AIAAMByCLkAAACwHEIuAAAALIeQCwAAAMsh5AIAAMByCLkAAACwHEIuAAAALIeQCwAAAMvx9XQBAAAAKFr49PVFth97vk8lV3LjYSYXAAAAlkPIBQAAgOUQcgEAAGA5hFwAAABYDiEXAAAAlsPdFTyET0sCAABUHGZyAQAAYDmEXAAAAFgOIRcAAACWQ8gFAACA5RByAQAAYDmEXAAAAFgOIRcAAACWQ8gFAACA5RByAQAAYDmEXAAAAFgOIRcAAACW4+vpAgAAwM0pfPp6T5cAC2MmFwAAAJbj1SE3MTFRHTp0UGBgoOrXr68HHnhAR44ccetjjFF8fLxCQ0Pl7++vbt266eDBgx6qGAAAAN7Aq0NuamqqJkyYoB07diglJUWXL19WXFycLly44Oozb948zZ8/XwsXLtTOnTvldDrVs2dPnT9/3oOVAwAAwJO8ek3uhg0b3B4nJSWpfv362rVrl+666y4ZY7RgwQLNnDlTAwYMkCStWLFCwcHBWrVqlcaMGeOJsgEAAOBhXj2Te62srCxJUu3atSVJR48eVXp6uuLi4lx97Ha7YmNjlZaWVux5cnJylJ2d7bYBAADAOm6YkGuM0ZNPPqmuXbsqKipKkpSeni5JCg4OdusbHBzs2leUxMREORwO1xYWFlZxhQMAAKDS3TAhd+LEidq3b59Wr15daJ/NZnN7bIwp1Ha1GTNmKCsry7WdPHmy3OsFAACA53j1mtwCkyZN0rp16/Thhx+qQYMGrnan0ynpyoxuSEiIqz0jI6PQ7O7V7Ha77HZ7xRUMAAAAj/LqmVxjjCZOnKg1a9Zoy5YtioiIcNsfEREhp9OplJQUV1tubq5SU1MVExNT2eUCAADAS3j1TO6ECRO0atUq/fOf/1RgYKBrna3D4ZC/v79sNpumTJmihIQERUZGKjIyUgkJCQoICNCgQYM8XD0AAAA8xatD7uLFiyVJ3bp1c2tPSkrSiBEjJEnTpk3TxYsXNX78eGVmZqpTp07auHGjAgMDK7laAAAAeAuvDrnGmF/tY7PZFB8fr/j4+IovCAAAADcEr16TCwAAAJQFIRcAAACWQ8gFAACA5RByAQAAYDmEXAAAAFgOIRcAAACWQ8gFAACA5RByAQAAYDmEXAAAAFgOIRcAAACWQ8gFAACA5RByAQAAYDmEXAAAAFgOIRcAAACWQ8gFAACA5RByAQAAYDm+ni4AAABYW/j09Z4uwXKKG9Njz/ep5Eq8FzO5AAAAsBxCLgAAACyHkAsAAADLIeQCAADAcgi5AAAAsBxCLgAAACyHkAsAAADLIeQCAADAcgi5AAAAsBxCLgAAACyHr/UFAAClxlf1eqeiXpeb9at+mckFAACA5RByAQAAYDmEXAAAAFgOIRcAAACWQ8gFAACA5RByAQAAYDmEXAAAAFgOIRcAAACWQ8gFAACA5RByAQAAYDmEXAAAAFgOIRcAAACWQ8gFAACA5RByAQAAYDm+ni4AAAB4r/Dp6z1dAipIUa/tsef7eKCSisFMLgAAACyHkAsAAADLYbmCl7H6nw4AAEDlKo8lJ8Wdw5szCjO5AAAAsBzLhNxFixYpIiJC1apVU/v27fXRRx95uiQAAAB4iCWWK7z99tuaMmWKFi1apC5duujPf/6zevfurUOHDqlhw4aeLg8AAI9hGRxKw0p307DETO78+fM1atQojR49Ws2bN9eCBQsUFhamxYsXe7o0AAAAeMANP5Obm5urXbt2afr06W7tcXFxSktLK/KYnJwc5eTkuB5nZWVJkrKzsyuu0Gvk5/xU4r6VWRcAwFqK+n1Tmt8rpfl9hZtPZWaUgmsZY0rU/4YPud9//73y8vIUHBzs1h4cHKz09PQij0lMTNScOXMKtYeFhVVIjdfLscDTFQAArITfKygvnvhZOn/+vBwOx6/2u+FDbgGbzeb22BhTqK3AjBkz9OSTT7oe5+fn64cfflCdOnWKPaa8ZGdnKywsTCdPnlRQUFCFXguFMf6exfh7FuPvWYy/ZzH+nlUe42+M0fnz5xUaGlqi/jd8yK1bt658fHwKzdpmZGQUmt0tYLfbZbfb3dpq1qxZUSUWKSgoiDeZBzH+nsX4exbj71mMv2cx/p51veNfkhncAjf8B8+qVq2q9u3bKyUlxa09JSVFMTExHqoKAAAAnnTDz+RK0pNPPqmhQ4cqOjpanTt31pIlS3TixAmNHTvW06UBAADAAywRcgcOHKizZ89q7ty5On36tKKiovT++++rUaNG
ni6tELvdrtmzZxdaLoHKwfh7FuPvWYy/ZzH+nsX4e5Ynxt9mSnofBgAAAOAGccOvyQUAAACuRcgFAACA5RByAQAAYDmEXAAAAFgOIbeSLVq0SBEREapWrZrat2+vjz76yNMl3XA+/PBD3X///QoNDZXNZtO7777rtt8Yo/j4eIWGhsrf31/dunXTwYMH3frk5ORo0qRJqlu3rqpXr65+/frp1KlTbn0yMzM1dOhQORwOORwODR06VOfOnavgZ+fdEhMT1aFDBwUGBqp+/fp64IEHdOTIEbc+jH/FWrx4sVq3bu26oXrnzp3173//27Wf8a88iYmJstlsmjJliquN8a9Y8fHxstlsbpvT6XTtZ/wr3jfffKMhQ4aoTp06CggIUNu2bbVr1y7Xfq96DQwqTXJysvHz8zNLly41hw4dMpMnTzbVq1c3x48f93RpN5T333/fzJw507zzzjtGklm7dq3b/ueff94EBgaad955x+zfv98MHDjQhISEmOzsbFefsWPHmltuucWkpKSY3bt3m+7du5s2bdqYy5cvu/r06tXLREVFmbS0NJOWlmaioqJM3759K+tpeqV7773XJCUlmQMHDpi9e/eaPn36mIYNG5off/zR1Yfxr1jr1q0z69evN0eOHDFHjhwxTz/9tPHz8zMHDhwwxjD+leWTTz4x4eHhpnXr1mby5Mmudsa/Ys2ePdu0bNnSnD592rVlZGS49jP+FeuHH34wjRo1MiNGjDAff/yxOXr0qNm0aZP56quvXH286TUg5Faijh07mrFjx7q1NWvWzEyfPt1DFd34rg25+fn5xul0mueff97V9vPPPxuHw2Fef/11Y4wx586dM35+fiY5OdnV55tvvjFVqlQxGzZsMMYYc+jQISPJ7Nixw9Vn+/btRpL5/PPPK/hZ3TgyMjKMJJOammqMYfw9pVatWuYvf/kL419Jzp8/byIjI01KSoqJjY11hVzGv+LNnj3btGnTpsh9jH/Fe+qpp0zXrl2L3e9trwHLFSpJbm6udu3apbi4OLf2uLg4paWleagq6zl69KjS09Pdxtlutys2NtY1zrt27dKlS5fc+oSGhioqKsrVZ/v27XI4HOrUqZOrzx133CGHw8HrdZWsrCxJUu3atSUx/pUtLy9PycnJunDhgjp37sz4V5IJEyaoT58+uueee9zaGf/K8eWXXyo0NFQRERF65JFH9PXXX0ti/CvDunXrFB0drd/85jeqX7++2rVrp6VLl7r2e9trQMitJN9//73y8vIUHBzs1h4cHKz09HQPVWU9BWP5S+Ocnp6uqlWrqlatWr/Yp379+oXOX79+fV6v/58xRk8++aS6du2qqKgoSYx/Zdm/f79q1Kghu92usWPHau3atWrRogXjXwmSk5O1e/duJSYmFtrH+Fe8Tp066c0339QHH3ygpUuXKj09XTExMTp79izjXwm+/vprLV68WJGRkfrggw80duxYPfHEE3rzzTcled97wBJf63sjsdlsbo+NMYXacP3KMs7X9imqP6/X/5k4caL27dun//znP4X2Mf4Vq2nTptq7d6/OnTund955R8OHD1dqaqprP+NfMU6ePKnJkydr48aNqlatWrH9GP+K07t3b9d/t2rVSp07d9att96qFStW6I477pDE+Fek/Px8RUdHKyEhQZLUrl07HTx4UIsXL9awYcNc/bzlNWAmt5LUrVtXPj4+hf4FkpGRUehfPCi7gk/Z/tI4O51O5ebmKjMz8xf7fPfdd4XOf+bMGV4vSZMmTdK6deu0detWNWjQwNXO+FeOqlWr6rbbblN0dLQSExPVpk0bvfLKK4x/Bdu1a5cyMjLUvn17+fr6ytfXV6mpqfrf//1f+fr6usaG8a881atXV6tWrfTll1/y818JQkJC1KJFC7e25s2b68SJE5K873cAIbeSVK1aVe3bt1dKSopbe0pKimJiYjxUlfVERETI6XS6jXNubq5SU1Nd49y+fXv5+fm59Tl9+rQOHDjg6tO5c2dlZWXpk08+cfX5+OOPlZWVdVO/XsYYTZw4UWvWrNGWLVsUERHhtp/x9wxjjHJychj/CtajRw/t379fe/fudW3R0dEaPHiw9u7dq8aNGzP+lSwnJ0eHDx9WSEgIP/+VoEuXLoVuG/nFF1+oUaNGkrzwd0CJP6KG61ZwC7E33njDHDp0yEyZMsVUr17dHDt2zNOl3VDOnz9v9uzZY/bs2WMkmfnz55s9e/a4bsX2/PPPG4fDYdasWWP2799vHn300SJvX9KgQQOzadMms3v3bnP33XcXefuS1q1bm+3bt5vt27ebVq1a3fS3kBk3bpxxOBxm27Ztbrfw+emnn1x9GP+KNWPGDPPhhx+ao0ePmn379pmnn37aVKlSxWzcuNEYw/hXtqvvrmAM41/Rpk6darZt22a+/vprs2PHDtO3b18TGBjo+j3K+FesTz75xPj6+po//elP5ssvvzRvvfWWCQgIMCtXrnT18abXgJBbyV577TXTqFEjU7VqVXP77be7br2Ektu6dauRVGgbPny4MebKLUxmz55tnE6nsdvt5q677jL79+93O8fFixfNxIkTTe3atY2/v7/p27evOXHihFufs2fPmsGDB5vAwEATGBhoBg8ebDIzMyvpWXqnosZdkklKSnL1Yfwr1mOPPeb6f0i9evVMjx49XAHXGMa/sl0bchn/ilVwz1U/Pz8TGhpqBgwYYA4ePOjaz/hXvH/9618mKirK2O1206xZM7NkyRK3/d70GtiMMabk874AAACA92NNLgAAACyHkAsAAADLIeQCAADAcgi5AAAAsBxCLgAAACyHkAsAAADLIeQCAADAcgi5AAAAsBxCLgB4SLdu3TRlypRyPWd8fLzatm1bpmOPHTsmm82mvXv3lmtNAOAJvp4uAACsbMSIEVqxYkWh9i+//FJr1qyRn5+fB6oqWlhYmE6fPq26det6uhQAuG6EXACoYL169VJSUpJbW7169eTj4+Ohiorm4+Mjp9Pp6TIAoFywXAEAKpjdbpfT6XTbfHx8Ci1XCA8PV0JCgh577DEFBgaqYcOGWrJkidu5nnrqKTVp0kQBAQFq3LixZs2apUuXLpW4lszMTA0ePFj16tWTv7+/IiMjXQH82uUKI0aMkM1mK7Rt27ZNkpSbm6tp06bplltuUfXq1dWpUyfXPgDwNEIuAHiRl156SdHR0dqzZ4/Gjx+vcePG6fPPP3ftDwwM1PLly3Xo0CG98sorWrp0qV5++eUSn3/WrFk6dOiQ/v3vf+vw4cNavHhxscsTXnnlFZ0+fdq1TZ48WfXr11ezZs0kSSNHjtR///tfJScna9++ffrNb36jXr166csvv7y+QQCAcsByBQCoYO+9955q1Kjhety7d2/9/e9/L7Lvfffdp/Hjx0u6Mmv78ssva9u2ba5g+cwzz7j6hoeHa+rUqXr77bc1bdq0EtVy4sQJtWvXTtHR0a5zFMfhcMjhcEiS1qxZo9dff12bNm2S0+nU//t//0+rV6/WqVOnFBoaKkn6/e9/rw0bNigpKUkJCQklqgcAKgo
hFwAqWPfu3bV48WLX4+rVqxfbt3Xr1q7/ttlscjqdysjIcLX94x//0IIFC/TVV1/pxx9/1OXLlxUUFFTiWsaNG6eHHnpIu3fvVlxcnB544AHFxMT84jF79uzRsGHD9Nprr6lr166SpN27d8sYoyZNmrj1zcnJUZ06dUpcDwBUFEIuAFSw6tWr67bbbitR32vvtmCz2ZSfny9J2rFjhx555BHNmTNH9957rxwOh5KTk/XSSy+VuJbevXvr+PHjWr9+vTZt2qQePXpowoQJevHFF4vsn56ern79+mnUqFEaNWqUqz0/P18+Pj7atWtXoQ/QXT1rDQCeQsgFgBvEf//7XzVq1EgzZ850tR0/frzU56lXr55GjBihESNG6M4779Qf/vCHIkPuzz//rP79+6tZs2aaP3++27527dopLy9PGRkZuvPOO0v/ZACgghFyAeAGcdttt+nEiRNKTk5Whw4dtH79eq1du7ZU53j22WfVvn17tWzZUjk5OXrvvffUvHnzIvuOGTNGJ0+e1ObNm3XmzBlXe+3atdWkSRMNHjxYw4YN00svvaR27drp+++/15YtW9SqVSvdd9991/VcAeB6cXcFALhB9O/fX7/73e80ceJEtW3bVmlpaZo1a1apzlG1alXNmDFDrVu31l133SUfHx8lJycX2Tc1NVWnT59WixYtFBIS4trS0tIkSUlJSRo2bJimTp2qpk2bql+/fvr4448VFhZ23c8VAK6XzRhjPF0EAAAAUJ6YyQUAAIDlEHIBAABgOYRcAAAAWA4hFwAAAJZDyAUAAIDlEHIBAABgOYRcAAAAWA4hFwAAAJZDyAUAAIDlEHIBAABgOYRcAAAAWM7/B4YpDqNG2d+zAAAAAElFTkSuQmCC",
+ "text/plain": [
+ "