
Commit 05fd075
Merge branch 'master' of https://github.com/TuragaLab/DECODE
ASpeiser committed Mar 29, 2021
2 parents 0dc1553 + 853136c commit 05fd075
Showing 6 changed files with 54 additions and 54 deletions.
10 changes: 5 additions & 5 deletions docs/source/data.rst
@@ -3,8 +3,8 @@ Data
============

We provide experimental data for you to try out DECODE. If you want to go through the whole
-pipeline, i.e. including your own bead calibration and training parametrization
-(i.e. :ref:`bead calibration and prefit; steps 1 and 2 <Workflow>`) you can find the URLs to
+pipeline, i.e., including your own bead calibration and training parametrization
+(i.e., :ref:`bead calibration and prefit; steps 1 and 2 <Workflow>`) you can find the URLs to
download example data from our
`gateway <https://github.com/TuragaLab/DECODE/blob/master/gateway.yaml>`__.
If you want to omit these steps and try out DECODE directly, the
@@ -31,7 +31,7 @@ If you want to fit your own data, there are few small points you need to be awar
Experimental data
=================

-We provide the RAW data, RAW beads, training parametrization and converged model to reproduce
+We provide the raw data, raw beads, training parametrization and converged model to reproduce
Figure 4 of our preprint. The notebooks automatically download this package.
For manual download the link can be found in our
`gateway <https://github.com/TuragaLab/DECODE/blob/master/gateway.yaml>`__
@@ -67,12 +67,12 @@ Camera Parameters
+---------------------+-------------+-------------+
| spur_noise          | 0.002       | 0.002       |
+---------------------+-------------+-------------+
-| px_size             | [100, 100]  | [100, 00]   |
+| px_size             | [100, 100]  | [100, 100]  |
+---------------------+-------------+-------------+

:sup:`†` we typically use a *quantum efficiency* of 1.0 and refer to the photons as *detected
photons.*
-For direct challenge comparison, the photon count must then be adjusted by 1/ 0.9 (where 0.9 is the
+For direct challenge comparison, the photon count then has to be adjusted by 1/0.9 (where 0.9 is the
quantum efficiency of the camera for the simulated 3D AS/DH data).
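
As a minimal sketch of this correction (the variable names are illustrative,
not part of the DECODE API):

.. code:: python

    # Convert DECODE's "detected photons" (quantum efficiency treated as 1.0)
    # into challenge-comparable photons for the simulated 3D AS/DH data.
    QE_CHALLENGE = 0.9                  # quantum efficiency of the simulated camera

    photons_detected = 5000.0           # example detected photon count
    photons_challenge = photons_detected / QE_CHALLENGE  # i.e. multiplied by 1/0.9
    print(round(photons_challenge, 1))  # 5555.6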

Moreover, for this data *Mirroring must be turned off* both in SMAP (Camera Parameters) as well
18 changes: 9 additions & 9 deletions docs/source/faq.rst
@@ -33,22 +33,22 @@ Errors and Software Issues

This might happen if your GPU is

-1. Doing multiple things, i.e. used not only for computation but also for
+1. doing multiple things, i.e., used not only for computation but also for
the display
2. old or has too little memory

If you have multiple GPU devices you may set: ``device='cuda:1'`` (where
``1`` corresponds to the respective index of the device, starting with 0). If
-you don't have multiple devices, you may should try to reduce the batch size:
+you don't have multiple devices, you should try to reduce the batch size:
``param.HyperParameter.batch_size``.
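
A hedged sketch of both remedies applied programmatically to the parameter
file (the ``Hardware -> device`` key is our assumption; ``HyperParameter ->
batch_size`` is the parameter named above):

.. code:: python

    import yaml  # pyyaml

    with open("param.yaml") as f:
        param = yaml.safe_load(f)

    param["Hardware"]["device"] = "cuda:1"      # assumption: device is set under Hardware
    param["HyperParameter"]["batch_size"] = 32  # e.g. halve it until the memory error disappears

    with open("param.yaml", "w") as f:
        yaml.safe_dump(param, f)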


-- I get other CUDA errors, e.g. ``CUDA error: unspecified launch failure``
+- I get other CUDA errors, e.g., ``CUDA error: unspecified launch failure``

Please check whether you have a somewhat up to date CUDA driver. It's always
-a good idea to update it. Moreover you can try to check which cudatoolkit
+a good idea to update it. Moreover, you can try to check which cudatoolkit
version was installed by checking in the Terminal / Anaconda prompt
-``conda list``
+``conda list``.
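
A quick diagnostic sketch for inspecting the CUDA setup PyTorch actually sees
(all calls are standard PyTorch):

.. code:: python

    import torch

    print(torch.__version__)          # PyTorch build
    print(torch.version.cuda)         # cudatoolkit version it was compiled against
    print(torch.cuda.is_available())  # False often points to a driver problem
    if torch.cuda.is_available():
        # the installation notes ask for CUDA capability 3.7 or higher
        print(torch.cuda.get_device_capability(0))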

You can also try to pin the cudatoolkit version to another one by setting
`cudatoolkit=10.1` instead of plain `cudatoolkit` without version
@@ -75,12 +75,12 @@ Errors and Software Issues

This can happen particularly often for Windows and there is no 'one answer'.
You might want to decrease the number of CPU workers or disable
-multiprocessing at all. For this you would start the training with changed
-number of workers workers by adding ``-w [number of workers]`` at the end of
+multiprocessing at all. For this you would start the training with a changed
+number of workers by adding ``-w [number of workers]`` at the end of
the python command. Specify ``-w 0`` for disabling multiprocessing if even 2
-lead to an error. Alternatively change the ``.yaml`` file here ``param ->
+lead to an error. Alternatively change the ``.yaml`` file here: ``param ->
Hardware -> num_worker_train``. Note that this can slow down training. You
can also try changing the multiprocessing strategy, which you can do in the
-.yaml file. ``param -> Hardware -> torch_multiprocessing_sharing_strategy``.
+.yaml file: ``param -> Hardware -> torch_multiprocessing_sharing_strategy``.
The sharing strategies depend on your system. Please have a look at `Pytorch
Multiprocessing <https://pytorch.org/docs/stable/multiprocessing.html>`__.
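
In the same spirit as the snippet further up, a sketch of both adjustments in
the parameter file (``file_system`` is only an example value; the valid
strategies are listed in the PyTorch multiprocessing documentation):

.. code:: python

    import yaml

    with open("param.yaml") as f:
        param = yaml.safe_load(f)

    param["Hardware"]["num_worker_train"] = 0  # same effect as ``-w 0`` on the command line
    param["Hardware"]["torch_multiprocessing_sharing_strategy"] = "file_system"

    with open("param.yaml", "w") as f:
        yaml.safe_dump(param, f)
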
13 changes: 5 additions & 8 deletions docs/source/index.rst
@@ -10,8 +10,7 @@ To try out DECODE we recommend to first have a look at the Google Colab notebook

DECODE on Google Colab
""""""""""""""""""""""
-Our notebooks below comprise training a model, fitting experimental data and exporting the
-fitted localizations.
+Our notebooks below comprise training a model, fitting experimental data and exporting the fitted localizations.

* `Training a DECODE model <https://colab.research.google.com/drive/1uQ7w1zaqpy9EIjUdaLyte99FJIhJ6N8E?usp=sharing>`_
* `Fitting high-density data <https://colab.research.google.com/drive/1HAvJUL29vVuCHMZHMbU9jxd4fbLIPdhZ?usp=sharing>`_
@@ -21,15 +20,13 @@ DECODE on your machine
The installation is described in detail here `installation instructions. <installation.html>`__

Once you have installed DECODE on your local machine, please follow our
-`Tutorial. <installation.html>`__
+`Tutorial. <tutorial.html>`__

Video tutorial
###############
-As part of the virtual `I2K 2020
-<https://www.janelia.org/you-janelia/conferences/from-images-to-knowledge-with-imagej-friends>`__
-conference we organized a workshop on DECODE. Please find the video below.
-*DECODE is being actively developed, therefore the exact commands might differ
-from those shown in the video.*
+As part of the virtual `I2K 2020 <https://www.janelia.org/you-janelia/conferences/from-images-to-knowledge-with-imagej-friends>`__ conference we organized a workshop on DECODE. Please find the video below.
+
+*DECODE is being actively developed, therefore the exact commands might differ from those shown in the video.*

.. raw:: html

6 changes: 3 additions & 3 deletions docs/source/installation.rst
@@ -2,7 +2,7 @@
Installation
============

-For regular use, of course we recommend to install and use the framework on your
+For regular use, we advise you to install and use the framework on your
local machine. We strongly recommend using a machine with a modern GPU, e.g. an
RTX 2080, in particular for training. To make use of your GPU it requires a CUDA
capability of 3.7 or higher (see here to check if your GPU is valid:
@@ -49,14 +49,14 @@ Depending on whether you have a CUDA capable GPU type:
# after previous command (all platforms)
conda activate decode_env
-Please now get the DECODE Jupyter Notebooks
+Please now get the DECODE Jupyter Notebooks.

.. _notebook_install:

DECODE Jupyter Notebooks
""""""""""""""""""""""""

-Before you start using DECODE locally you should make sure to check get our Jupyter notebooks
+Before you start using DECODE locally, you should make sure to check out our Jupyter notebooks
to familiarise yourself with DECODE.
You can get the notebooks by specifying the directory where you want the notebooks to be saved following this
command in your Terminal/Anaconda Prompt:
8 changes: 4 additions & 4 deletions docs/source/logging.rst
@@ -4,9 +4,9 @@ Logging
Currently we support monitoring the training progress in Tensorboard while basic
metrics are reported to the console as well. All metrics that include comparison
to ground truth emitters are based on the parameters (implicitly) provided in
-the configuration .yaml file. Those include match dimensionality (i.e. in 2D or
-3D), max. allowed distances. The threshold on the detection filters the
-detections before matching.
+the configuration .yaml file. Those include match dimensionality (i.e., in 2D or
+3D) and max. allowed distances. The threshold on the detection filters the
+detections prior to matching.

Tensorboard
-----------
@@ -17,7 +17,7 @@ Metrics
+----------------+-------------------------+------------------------------------------------------------------+
| Abbreviation   | Name                    | Description                                                      |
+================+=========================+==================================================================+
-| pred           | Precision               | Number of true positives over all detections                    |
+| prec           | Precision               | Number of true positives over all detections                    |
+----------------+-------------------------+------------------------------------------------------------------+
| rec            | Recall                  | Number of true positives over all (ground truth) localizations  |
+----------------+-------------------------+------------------------------------------------------------------+
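
For reference, a short sketch of how these two metrics follow from the matched
localizations (the counts are illustrative):

.. code:: python

    tp = 90  # true positives: detections matched to a ground-truth emitter
    fp = 10  # false positives: detections without a ground-truth match
    fn = 25  # false negatives: ground-truth emitters that were missed

    prec = tp / (tp + fp)  # true positives over all detections
    rec = tp / (tp + fn)   # true positives over all ground-truth localizations
    print(f"prec={prec:.2f}, rec={rec:.2f}")  # prec=0.90, rec=0.78
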
53 changes: 28 additions & 25 deletions docs/source/tutorial.rst
@@ -2,12 +2,12 @@
Tutorial
========

-Here we describe how to use DECODE locally, i.e. when you want to use it on a regular basis.
+Here we describe how to use DECODE locally, i.e., when you want to use it on a regular basis.
If you want to test DECODE without installation you can check out the Google Colab notebooks
-linked on the `starting page <index.html#google-colab-notebooks>`__ of this documentation.
+linked on the `starting page <index.html#decode-on-google-colab>`__ of this documentation.

**Note:** This tutorial assumes that you have successfully installed DECODE locally and got your
-copy of the DECODE jupyter notebooks. If this is not the case for you, please refer to the
+copy of the DECODE Jupyter notebooks. If this is not the case for you, please refer to the
`installation instructions <installation.html>`__ and follow the step-by-step guide.


@@ -20,14 +20,14 @@ Workflow
A typical workflow for fitting high-density SMLM data with this package is

1. :ref:`Bead calibration <Bead calibration>` and extraction of spline coefficients (e.g. in SMAP)
-2. :ref:`Set training parameters <Training parameters>` by a pre-fitting procedure or reasonable guess.
+2. :ref:`Determine training parameters <Training parameters>` by a pre-fitting procedure or reasonable guess.
3. :ref:`Training a DECODE model <Training>`
4. :ref:`Fitting experimental data <Fit>`
5. :ref:`Export, visualization and analysis <Visualization>` of fitted data

The first two steps involving SMAP can be skipped and you can start right away
with the :ref:`notebooks <First time>` in case you want to work with our
-example data, as we provide the intermediate result files (i.e. the calibration and the training
+example data, as we provide the intermediate result files (i.e., the calibration and the training
parametrization). If you are working with your own data or want to go through the whole workflow,
just start from the beginning.
You can find an overview of our data in `Data <data.html>`__.
@@ -38,12 +38,13 @@ You can find an overview of our data in `Data <data.html>`__.
Bead calibration with SMAP
==========================

-1. Install the stand-alone version of SMAP from
+1. Install the stand-alone version of SMAP from the software section on
`rieslab.de <https://rieslab.de/#software>`__ or if you have MATLAB, download
the source-code from `GitHub.com/jries/SMAP <https://github.com/jries/SMAP>`__.
-There, you also find the installation instructions and the documentation.
+On `rieslab.de <https://rieslab.de/#software>`__, you can also find the
+installation instructions and the documentation.
2. Acquire z-stacks with fluorescent beads (e.g. 100 nm beads). We typically use
-a z-range of +/- 750 nm and a step size of 10-50 nm.
+a z-range of +/- 1000 nm and a step size of 10-50 nm.
3. In SMAP, use the plugin *Analyze / calibrate3DSplinePSF* to generate the
calibration file. The plugin can be found either via tabs *Analyze / sr3D /
calibrate3DsplinePSF* or menu *Plugins / Analyze / sr3D / calibrate3DsplinePSF*.
Expand All @@ -53,7 +54,7 @@ Bead calibration with SMAP
<https://www.embl.de/download/ries/Documentation/Example_SMAP_Step_by_step.pdf#page=2>`__,
and in the original publication `Li et al., Nature Methods (2018)
<https://doi.org/10.1038/nmeth.4661>`__. Even for two-dimensional data you
-need a bead calibration, in this case make sure to make the *bi directional
+need a bead calibration, in this case make sure to perform the *bidirectional
fit*.


@@ -64,7 +65,9 @@ Determine training parameters with SMAP

1. Use the bead calibration to fit your SMLM data. Detailed instructions can be
found in the `SMAP user guide
-<https://www.embl.de/download/ries/Documentation/SMAP_UserGuide.pdf#page=6>`__.
+<https://www.embl.de/download/ries/Documentation/SMAP_UserGuide.pdf#page=6>`__
+in section 5, more specifically in section 5.4 for fitting with an
+experimental PSF.
2. Use the plugin: *Plugins / calibrate / DECODE\_training\_estimates* to estimate
the photo-physical
parameters of the experiment and to save them into a parameter file. Consult the
@@ -77,19 +80,19 @@ Training a DECODE model
=======================

The basis for training DECODE is a parametrization of training procedure. This parametrization is
-described in a simple `.yaml` file which holds a couple of paths (e.g. the calibration file and
-your output directory) as well as the parametrization of the simulation that should somewhat
+described in a simple `.yaml` file which contains a couple of paths (e.g., the calibration file and
+your output directory) as well as the parametrization of the simulation which should
match the data you want to fit.
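
A hedged sketch of the shape of such a file, written from Python (every key
name here is an illustrative placeholder rather than DECODE's actual schema;
the Training notebook generates the real file for you):

.. code:: python

    import yaml

    param = {
        "InOut": {                                  # hypothetical section name
            "calibration_file": "beads_3dcal.mat",  # spline calibration, e.g. from SMAP
            "experiment_out": "out/",               # output directory
        },
        "Simulation": {                             # hypothetical: should match your data
            "emitter_av": 25,                       # average emitters per frame
            "intensity_mu_sig": [7000.0, 3000.0],   # photon count distribution
        },
    }

    with open("param.yaml", "w") as f:
        yaml.safe_dump(param, f)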

In our Training notebook we guide you through the process of creating such a `.yaml` file that can
-subsequently used to start the actual training.
+subsequently be used to start the actual training.

If you have gone through the notebooks already and generated your own `param.yaml` file, you can skip
the following section and go to the :ref:`regular workflow <Regular workflow>` directly.

.. _First time:

-First time Using DECODE
+First time using DECODE
-----------------------

To get you up and running, we provide several notebooks that introduce DECODE to you.
Expand All @@ -100,7 +103,7 @@ In total, there are four different notebooks:
- **Fitting** localizes the single molecules in the high-density data based on the model.
- **Evaluation** gives you an introduction to the post-processing capabilities of DECODE.

-To start going through the notebooks execute the following command in your Terminal/Anaconda Prompt:
+To start going through the notebooks, execute the following command in your Terminal/Anaconda Prompt:

.. code:: bash
@@ -116,7 +119,7 @@ Training and Fitting.
Regular workflow
----------------

-In practice you can either write such a `.yaml` file directly, i.e. by educated guessing your
+In practice, you can either write such a `.yaml` file directly, i.e., by educated guessing your
emitter characteristics, or follow the pre-fit routine using SMAP that will auto-generate it.

Once equipped with your calibration and the parameter file, you can start the training in
Expand All @@ -128,8 +131,8 @@ your Terminal/Anaconda prompt
python -m decode.neuralfitter.train.live_engine -p [path to your param].yaml
-To monitor the training progress you can open up a new Terminal window/Anaconda prompt, navigate
-to the respective folder from before and start tensorboard. This optional and does not have an
+To monitor the training progress, you can open up a new Terminal window/Anaconda prompt, navigate
+to the respective folder from before, and start Tensorboard. This is optional and does not have any
influence on the training. Note that Tensorboard can be quite slow sometimes.

.. code:: bash
@@ -139,12 +142,12 @@ influence on the training. Note that Tensorboard can be quite slow sometimes.
-.. _Fit:
+.. _Fitting:

-Fit
-===
+Fitting
+=======

-Please refer to the Fit notebook which is described above in
+Please refer to the Fitting notebook which is described above in
:ref:`First Time using DECODE instructions. <First time>`


@@ -157,11 +160,11 @@ DECODE has basic rendering functions but for detailed visualization and analysis
your data and load it into SMAP or another SMLM visualization software of your choice.

For loading the data in SMAP, you can export your emitter set as h5 file at the end of the fitting notebook.
-For easier input in other software we recommend exporting as csv.
+For easier input in other software, we recommend exporting as csv.
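
As an illustrative sketch of such an export (assuming the fitted localizations
already sit in a pandas DataFrame; the column names are placeholders, not a
fixed DECODE schema):

.. code:: python

    import pandas as pd

    df = pd.DataFrame({
        "frame_ix": [0, 0, 1],
        "x": [1024.5, 2310.1, 870.3],      # positions, e.g. in nm
        "y": [980.2, 1105.7, 2204.9],
        "z": [-120.0, 45.3, 310.8],
        "phot": [5200.0, 3100.0, 8900.0],  # photon counts
    })

    df.to_csv("emitters.csv", index=False)    # broadly compatible export
    df.to_hdf("emitters.h5", key="emitters")  # h5 export (requires pytables)
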
Under the *File* tab in SMAP, change the *auto loader* to *Import DECODE .csv/.h5* and **Load** the exported data.
For detailed instructions on post-processing (grouping, filtering, drift correction,...)
please consult the `SMAP Documentation <https://www.embl.de/download/ries/Documentation/>`__,
-more specifically from point 5 onwards in the
+more specifically from section 5 onwards in the
`Getting Started Guide <https://www.embl.de/download/ries/Documentation/Getting_Started.pdf#page=4>`__
-and from point 6 on in the
+and from section 6 on in the
`SMAP User Guide <https://www.embl.de/download/ries/Documentation/SMAP_UserGuide.pdf#page=11>`__.
