From a9dbf33dd2985c98ddae5a41ad2db25fddd0265d Mon Sep 17 00:00:00 2001
From: phoess <1313754+phoess@users.noreply.github.com>
Date: Sun, 28 Mar 2021 21:53:17 +0200
Subject: [PATCH] Polishing the documentation

---
 docs/source/data.rst         | 10 +++----
 docs/source/faq.rst          | 18 ++++++------
 docs/source/index.rst        | 13 ++++-----
 docs/source/installation.rst |  6 ++--
 docs/source/logging.rst      |  8 +++---
 docs/source/tutorial.rst     | 53 +++++++++++++++++++-----------------
 6 files changed, 54 insertions(+), 54 deletions(-)

diff --git a/docs/source/data.rst b/docs/source/data.rst
index 92ce3a7f..d36b887d 100644
--- a/docs/source/data.rst
+++ b/docs/source/data.rst
@@ -3,8 +3,8 @@ Data
 ============
 
 We provide experimental data for you to try out DECODE. If you want to go through the whole
-pipeline, i.e. including your own bead calibration and training parametrization
-(i.e. :ref:`bead calibration and prefit; steps 1 and 2 `) you can find the URLs to
+pipeline, i.e., including your own bead calibration and training parametrization
+(i.e., :ref:`bead calibration and prefit; steps 1 and 2 `) you can find the URLs to
 download example data from our `gateway `__.
 
 If you want to omit these steps and try out DECODE directly, the
@@ -31,7 +31,7 @@ If you want to fit your own data, there are few small points you need to be awar
 Experimental data
 =================
 
-We provide the RAW data, RAW beads, training parametrization and converged model to reproduce
+We provide the raw data, raw beads, training parametrization and converged model to reproduce
 Figure 4 of our preprint. The notebooks automatically download this package.
 For manual download the link can be found in our `gateway `__
@@ -67,12 +67,12 @@ Camera Parameters
 +---------------------+-------------+-------------+
 | spur_noise          | 0.002       | 0.002       |
 +---------------------+-------------+-------------+
-| px_size             | [100, 100]  | [100, 00]   |
+| px_size             | [100, 100]  | [100, 100]  |
 +---------------------+-------------+-------------+
 
 :sup:`†` we typically use a *quantum efficiency* of 1. and refer to the photons as *detected photons.*
 
-For direct challenge comparison, the photon count must then be adjusted by 1/ 0.9 (where 0.9 is the
+For direct challenge comparison, the photon count then has to be adjusted by 1/0.9 (where 0.9 is the
 quantum efficiency of the camera for the simulated 3D AS/DH data).
 
 Moreover, for this data *Mirroring must be turned off* both in SMAP (Camera Parameters) as well
diff --git a/docs/source/faq.rst b/docs/source/faq.rst
index da5c6937..7617f2bd 100644
--- a/docs/source/faq.rst
+++ b/docs/source/faq.rst
@@ -33,22 +33,22 @@ Errors and Software Issues
 
   This might happen if your GPU is
 
-  1. Doing multiple things, i.e. used not only for computation but also for
+  1. doing multiple things, i.e., used not only for computation but also for
      the display
-  2. old or has to little memory
+  2. old or has too little memory
 
   If you have multiple GPU devices you may set: ``device='cuda:1'`` (where
   ``1`` corresponds to the respective index of the device, starting with 0). If
-  you don't have multiple devices, you may should try to reduce the batch size:
+  you don't have multiple devices, you should try to reduce the batch size:
   ``param.HyperParameter.batch_size``.
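+
+  As an illustration, both knobs live in the training parameter file. A
+  minimal, hypothetical sketch (the exact key layout may differ between
+  DECODE versions, so treat the names and values below as assumptions):
+
+  .. code:: yaml
+
+      Hardware:
+          device: cuda:1    # assumed key; 'cuda:0' addresses the first GPU
+      HyperParameter:
+          batch_size: 16    # reduce this value if you run out of GPU memory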
 
-- I get other CUDA errors, e.g. ``CUDA error: unspecified launch failure``
+- I get other CUDA errors, e.g., ``CUDA error: unspecified launch failure``
 
   Please check whether you have a somewhat up to date CUDA driver. It's always
-  a good idea to update it. Moreover you can try to check which cudatoolkit
+  a good idea to update it. Moreover, you can try to check which cudatoolkit
   version was installed by checking in the Terminal / Anaconda prompt
-  ``conda list``
+  ``conda list``.
 
   You can also try to pin the cudatoolkit version to another one by setting
   `cudatoolkit=10.1` instead of plain `cudatoolkit` without version
@@ -75,12 +75,12 @@ Errors and Software Issues
 
   This can happen particularly often for Windows and there is no 'one answer'.
   You might want to decrease the number of CPU workers or disable
-  multiprocessing at all. For this you would start the training with changed
-  number of workers workers by adding ``-w [number of workers]`` at the end of
+  multiprocessing altogether. For this you would start the training with a changed
+  number of workers by adding ``-w [number of workers]`` at the end of
   the python command. Specify ``-w 0`` for disabling multiprocessing if even 2
-  lead to an error. Alternatively change the ``.yaml`` file here ``param ->
+  lead to an error. Alternatively, change the ``.yaml`` file here: ``param ->
   Hardware -> num_worker_train``. Note that this can slow down training. You
   can also try changing the multiprocessing strategy, which you can do in the
-  .yaml file. ``param -> Hardware -> torch_multiprocessing_sharing_strategy``.
+  .yaml file: ``param -> Hardware -> torch_multiprocessing_sharing_strategy``.
   The sharing strategies depend on your system. Please have a look at `Pytorch
   Multiprocessing `__.
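+
+  A hedged sketch of how the two entries above might look in the ``.yaml``
+  file (the values are examples only; ``file_system`` is one of the sharing
+  strategies listed in the PyTorch multiprocessing documentation):
+
+  .. code:: yaml
+
+      Hardware:
+          num_worker_train: 0                                   # 0 disables multiprocessing for data loading
+          torch_multiprocessing_sharing_strategy: file_system   # platform dependent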
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 66f3668a..d08ce60e 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -10,8 +10,7 @@ To try out DECODE we recommend to first have a look at the Google Colab
 notebook
 
 DECODE on Google Colab
 """"""""""""""""""""""
-Our notebooks below comprise training a model, fitting experimental data and exporting the
-fitted localizations.
+Our notebooks below comprise training a model, fitting experimental data and exporting the fitted localizations.
 
 * `Training a DECODE model `_
 * `Fitting high-density data `_
@@ -21,15 +20,13 @@ DECODE on your machine
 
 The installation is described in detail here `installation instructions.
 `__
 Once you have installed DECODE on your local machine, please follow our
-`Tutorial. `__
+`Tutorial. `__
 
 Video tutorial
 ###############
-As part of the virtual `I2K 2020
-`__
-conference we organized a workshop on DECODE. Please find the video below.
-*DECODE is being actively developed, therefore the exact commands might differ
-from those shown in the video.*
+As part of the virtual `I2K 2020 `__ conference we organized a workshop on DECODE. Please find the video below.
+
+*DECODE is being actively developed; therefore, the exact commands might differ from those shown in the video.*
 
 .. raw:: html
 
diff --git a/docs/source/installation.rst b/docs/source/installation.rst
index 9aeaf199..b080a7d5 100644
--- a/docs/source/installation.rst
+++ b/docs/source/installation.rst
@@ -2,7 +2,7 @@ Installation
 ============
 
-For regular use, of course we recommend to install and use the framework on your
+For regular use, we advise you to install and use the framework on your
 local machine. We strongly recommend using a machine with a modern GPU, e.g. an
 RTX 2080, in particular for training. To make use of your GPU it requires a CUDA
 capability of 3.7 or higher (see here to check if your GPU is valid:
@@ -49,14 +49,14 @@ Depending on whether you have a CUDA capable GPU type:
 
     # after previous command (all platforms)
     conda activate decode_env
 
-Please now get the DECODE Jupyter Notebooks
+Please now get the DECODE Jupyter Notebooks.
 
 .. _notebook_install:
 
 DECODE Jupyter Notebooks
 """"""""""""""""""""""""
 
-Before you start using DECODE locally you should make sure to check get our Jupyter notebooks
+Before you start using DECODE locally, you should make sure to check out our Jupyter notebooks
 to familiarise yourself with DECODE. You can get the notebooks by specifying the directory
 where you want the notebooks to be saved following this command in your Terminal/Anaconda Prompt:
diff --git a/docs/source/logging.rst b/docs/source/logging.rst
index 622b1e74..894adfbd 100644
--- a/docs/source/logging.rst
+++ b/docs/source/logging.rst
@@ -4,9 +4,9 @@ Logging
 Currently we support monitoring the training progress in Tensorboard while
 basic metrics are reported to the console as well. All metrics that include
 comparison to ground truth emitters are based on the parameters (implicitly) provided in
-the configuration .yaml file. Those include match dimensionality (i.e. in 2D or
-3D), max. allowed distances. The threshold on the detection filters the
-detections before matching.
+the configuration .yaml file. Those include match dimensionality (i.e., in 2D or
+3D) and max. allowed distances. The detection threshold filters the
+detections prior to matching.
 
 Tensorboard
 -----------
 
@@ -17,7 +17,7 @@ Metrics
 +----------------+-------------------------+------------------------------------------------------------------+
 | Abbreviation   | Name                    | Description                                                      |
 +================+=========================+==================================================================+
-| pred           | Precision               | Number of true positives over all detections                     |
+| prec           | Precision               | Number of true positives over all detections                     |
 +----------------+-------------------------+------------------------------------------------------------------+
 | rec            | Recall                  | Number of true positives over all (ground truth) localizations   |
 +----------------+-------------------------+------------------------------------------------------------------+
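+
+In other words, writing TP, FP and FN for true positives, false positives and
+false negatives, the two metrics above read:
+
+.. math::
+
+    \text{prec} = \frac{TP}{TP + FP}, \qquad \text{rec} = \frac{TP}{TP + FN}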
diff --git a/docs/source/tutorial.rst b/docs/source/tutorial.rst
index 8612b6ed..cbcf2864 100644
--- a/docs/source/tutorial.rst
+++ b/docs/source/tutorial.rst
@@ -2,12 +2,12 @@ Tutorial
 ========
 
-Here we describe how to use DECODE locally, i.e. when you want to use it on a regular basis.
+Here we describe how to use DECODE locally, i.e., when you want to use it on a regular basis.
 If you want to test DECODE without installation you can check out the Google Colab notebooks
-linked on the `starting page `__ of this documentation.
+linked on the `starting page `__ of this documentation.
 
 **Note:** This tutorial assumes that you have successfully installed DECODE locally and got your
-copy of the DECODE jupyter notebooks. If this is not the case for you, please refer to the
+copy of the DECODE Jupyter notebooks. If this is not the case for you, please refer to the
 `installation instructions `__ and follow the step-by-step guide.
 
@@ -20,14 +20,14 @@ Workflow
 
 A typical workflow for fitting high-density SMLM data with this package is
 
 1. :ref:`Bead calibration ` and extraction of spline coefficients (e.g. in SMAP)
-2. :ref:`Set training parameters ` by a pre-fitting procedure or reasonableguess.
+2. :ref:`Determine training parameters ` by a pre-fitting procedure or reasonable guess.
 3. :ref:`Training a DECODE model `
 4. :ref:`Fitting experimental data `
 5. :ref:`Export, visualization and analysis ` of fitted data
 
 The first two steps involving SMAP can be skipped and you can start right away with the
 :ref:`notebooks ` in case you want to work with our
-example data, as we provide the intermediate result files (i.e. the calibration and the training
+example data, as we provide the intermediate result files (i.e., the calibration and the training
 parametrization). If you are working with your own data or want to go through the whole workflow,
 just start from the beginning.
 You can find an overview of our data in `Data `__.
@@ -38,12 +38,13 @@ You can find an overview of our data in `Data `__.
 Bead calibration with SMAP
 ==========================
 
-1. Install the stand-alone version of SMAP from
+1. Install the stand-alone version of SMAP from the software section on
    `rieslab.de `__ or if you have MATLAB, download the source-code from
    `GitHub.com/jries/SMAP `__.
-   There, you also find the installation instructions and the documentation.
+   On `rieslab.de `__, you can also find the
+   installation instructions and the documentation.
 2. Acquire z-stacks with fluorescent beads (e.g. 100 nm beads). We typically use
-   a z-range of +/- 750 nm and a step size of 10-50 nm.
+   a z-range of +/- 1000 nm and a step size of 10-50 nm.
 3. In SMAP, use the plugin *Analyze / calibrate3DSplinePSF* to generate the
    calibration file. The plugin can be found either via tabs *Analyze / sr3D /
    calibrate3DsplinePSF* or menu *Plugins / Analyze / sr3D / calibrate3DsplinePSF*.
@@ -53,7 +54,7 @@ Bead calibration with SMAP
    `__, and in the original
    publication `Li et al., Nature Methods (2018)
    `__. Even for two-dimensional data you
-   need a bead calibration, in this case make sure to make the *bi directional
+   need a bead calibration; in this case make sure to perform the *bidirectional
    fit*.
 
@@ -64,7 +65,9 @@ Determine training parameters with SMAP
 
 1. Use the bead calibration to fit your SMLM data. Detailed instructions can be
    found in the `SMAP user guide
-   `__.
+   `__
+   in section 5, more specifically in section 5.4 for fitting with an
+   experimental PSF.
 2. Use the plugin: *Plugins / calibrate / DECODE\_training\_estimates* to
    estimate the photo-physical parameters of the experiment and to save them
    into a parameter file. Consult the
@@ -77,19 +80,19 @@ Training a DECODE model
 =======================
 
 The basis for training DECODE is a parametrization of training procedure. This parametrization is
-described in a simple `.yaml` file which holds a couple of paths (e.g. the calibration file and
-your output directory) as well as the parametrization of the simulation that should somewhat
+described in a simple `.yaml` file which contains a couple of paths (e.g., the calibration file and
+your output directory) as well as the parametrization of the simulation, which should
 match the data you want to fit.
 
 In our Training notebook we guide you through the process of creating such a `.yaml` file that can
-subsequently used to start the actual training.
+subsequently be used to start the actual training.
 
 If you have gone through the notebooks already and generated your own `param.yaml` file, you can
 skip the following section and go to the :ref:`regular workflow ` directly.
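+
+For orientation, a hypothetical fragment of such a `.yaml` file is sketched below.
+The section and key names are illustrative assumptions; the authoritative template
+is the one generated in the Training notebook:
+
+.. code:: yaml
+
+    InOut:
+        calibration_file: beads_calibration.mat  # spline calibration, e.g. exported from SMAP
+        experiment_out: runs/my_experiment       # output directory for the trained model
+    Hardware:
+        device: cuda:0                           # GPU used for training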
 
 .. _First time:
 
-First time Using DECODE
+First time using DECODE
 -----------------------
 
 To get you up and running, we provide several notebooks that introduce DECODE to you.
@@ -100,7 +103,7 @@ In total, there are four different notebooks:
 - **Fitting** localizes the single molecules in the high-density data based on the model.
 - **Evaluation** gives you an introduction to the post-processing capabilities of DECODE.
 
-To start going through the notebooks execute the following command in your Terminal/Anaconda Prompt:
+To start going through the notebooks, execute the following command in your Terminal/Anaconda Prompt:
 
 .. code:: bash
 
@@ -116,7 +119,7 @@ Training and Fitting.
 
 Regular workflow
 ----------------
 
-In practice you can either write such a `.yaml` file directly, i.e. by educated guessing your
+In practice, you can either write such a `.yaml` file directly, i.e., by making an educated guess of your
 emitter characteristics, or follow the pre-fit routine using SMAP that will auto-generate it.
 
 Once being equipped with your calibration and the parameter file, you can start the training in
@@ -128,8 +131,8 @@ your Terminal/Anaconda prompt
 
 .. code:: bash
 
     python -m decode.neuralfitter.train.live_engine -p [path to your param].yaml
 
-To monitor the training progress you can open up a new Terminal window/Anaconda prompt, navigate
-to the respective folder from before and start tensorboard. This optional and does not have an
+To monitor the training progress, you can open up a new Terminal window/Anaconda prompt, navigate
+to the respective folder from before, and start Tensorboard. This is optional and does not have any
 influence on the training. Note that Tensorboard can be quite slow sometimes.
 
 .. code:: bash
 
@@ -139,12 +142,12 @@ influence on the training. Note that Tensorboard can be quite slow sometimes.
 
-.. _Fit:
+.. _Fitting:
 
-Fit
-===
+Fitting
+=======
 
-Please refer to the Fit notebook which is described above in
+Please refer to the Fitting notebook, which is described above in
 :ref:`First Time using DECODE instructions. `
 
@@ -157,11 +160,11 @@ DECODE has basic rendering functions but for detailed visualization and
 analysis your data and load it into SMAP or another SMLM visualization software of your choice.
 For loading the data in SMAP, you can export your emitter set as h5 file at the end of the fitting
 notebook.
-For easier input in other software we recommend exporting as csv.
+For easier input in other software, we recommend exporting as csv.
 Under the *File* tab in SMAP, change the *auto loader* to *Import DECODE .csv/.h5* and **Load** the
 exported data.
 For detailed instructions on post-processing (grouping, filtering, drift correction,...) please
 consult the `SMAP Documentation `__,
-more specifically from point 5 onwards in the
+more specifically from section 5 onwards in the
 `Getting Started Guide `__
-and from point 6 on in the
+and from section 6 onwards in the
 `SMAP User Guide `__.