From 8b5a14e1ed2a4f3576a873c6f6a6f0b64b311bae Mon Sep 17 00:00:00 2001 From: sfmig <33267254+sfmig@users.noreply.github.com> Date: Fri, 19 May 2023 11:23:22 +0100 Subject: [PATCH 1/7] feedback on steps up to training --- SLEAP/HowTo.md | 35 ++++++++++++++++++++++++++++++++++- 1 file changed, 34 insertions(+), 1 deletion(-) diff --git a/SLEAP/HowTo.md b/SLEAP/HowTo.md index 9eb9085..e5939e3 100644 --- a/SLEAP/HowTo.md +++ b/SLEAP/HowTo.md @@ -61,6 +61,8 @@ Start an interactive job on a GPU node: ```bash $ srun -p gpu --gres=gpu:1 --pty bash -i ``` +[**SM**: maybe it would be nice to have an appendix explaining the different flags in all these commands, for people who want to learn more?] + Load the SLEAP module. This might take some seconds, but it should finish without errors. Your terminal prompt may change as a result. ``` @gpu-350-04:~$ module load SLEAP @@ -82,6 +84,11 @@ $ which conda $ which python /nfs/nhome/live//.conda/envs/sleap/bin/python ``` + +[**SM**: In my case I got different paths here, both for `which conda` and `which python`...] + + + Finally we will verify that the `sleap` python package can be imported and can "see" the GPU. We will just follow the [relevant SLEAP instructions](https://sleap.ai/installation.html#testing-that-things-are-working). First, start a Python interpreter: ```bash $ python @@ -109,6 +116,9 @@ GPUs: 1/1 available >>> print(tf.config.list_physical_devices('GPU')) [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] ``` +[**SM**: In my case I got a module not found error when importing sleap in the interactive node :( ] + + > **Warning** > > The `import sleap` command may take some time to run (more than a minute). This is normal. Subsequent imports should be faster. @@ -119,6 +129,7 @@ If all is as expected, you can exit the Python interpreter, and then exit the GP $ exit ``` To completely exit the HPC cluster, you will need to logout of the SSH session twice: +[**SM**: maybe in the appendix explain why we need to do this twice?] ```bash $ logout $ logout @@ -134,6 +145,8 @@ The rest of this guide assumes that you have mounted the SWC filesystem on your We will also assume that the data you are working with are stored in a `ceph` or `winstor` directory to which you have access to. In the rest of this guide, we will use the path `/ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data` which contains a SLEAP project for test purposes. You should replace this with the path to your own data. +[**SM**: it could be nice to highlight that the cluster has a fast access to ceph. and maybe include notes on how to do file transfer otherwise (`scp` or equivalent?) -- this might be less relevant tho] + ## Model training This will consist of two parts - [preparing a training job](#prepare-the-training-job) (on your local SLEAP installation) and [running a training job](#run-the-training-job) (on the HPC cluster's SLEAP module). Some evaluation metrics for the trained models can be [viewed via the SLEAP GUI](#evaluate-the-trained-models) on your local SLEAP installation). @@ -142,6 +155,7 @@ This will consist of two parts - [preparing a training job](#prepare-the-trainin - Next, follow the instructions in [Remote Training](https://sleap.ai/guides/remote.html#remote-training), i.e. "Predict" -> "Run Training…" -> "Export Training Job Package…". 
- For selecting the right configuration parameters, see [Configuring Models](https://sleap.ai/guides/choosing-models.html#) and [Troubleshooting Workflows](https://sleap.ai/guides/troubleshooting-workflows.html) - Set the "Predict On" parameter to "nothing". Remote training and inference (prediction) are easiest to run separately on the HPC Cluster. + - [**SM**: also: unselect 'visualize predictions' in training settings? for me it's enabled by default IIRC. It could also be nice to explain training vs inference a bit earlier on in the guide?] - If you are working with a top-down camera view, set the "Rotation Min Angle" and "Rotation Max Angle" to -180 and 180 respectively in the "Augmentation" section. - Make sure to save the exported training job package (e.g. `labels.v001.slp.training_job.zip`) in the mounted SWC filesystem, ideally in the same directory as the project file. - Unzip the training job package. This will create a folder with the same name (minus the `.zip` extension). This folder contains everything needed to run the training job on the HPC cluster. @@ -173,7 +187,13 @@ The precise commands will depend on the model configuration you chose in SLEAP. > > Although the "Top-Down" configuration was designed with multiple animals in mind, it can also be used for single-animal videos. It makes sense to use it for videos where the animal occupies a relatively small portion of the frame - see [Troubleshooting Workflows](https://sleap.ai/guides/troubleshooting-workflows.html) for more info. -Next you need to create a SLURM batch script, which will schedule the training job on the HPC cluster. Create a new file called `slurm_train_script.sh` (You can do this in the terminal with `nano`/`vim` or in a text editor of your choice on your local PC/laptop). An example is provided below, followed by explanations. +Next you need to create a SLURM batch script, which will schedule the training job on the HPC cluster. Create a new file called `slurm_train_script.sh` (You can do this in the terminal with `nano`/`vim` or in a text editor of your choice on your local PC/laptop). + +[**SM**: maybe include the commands for this too? e.g. `nano slurm_train_script.sh`. It may also be good to add in an appendix the basic commands to save, and exit (especially relevant for `vim`)] + +[**SM**: to be super clear you may want to clarify here that there is a change of directory when creating the `slurm_train_script.sh`. In the last command we were inside `labels.v001.slp.training_job`, I assume we'd like the slurm bash script outside that directory] + +An example is provided below, followed by explanations. ```bash #!/bin/bash @@ -201,6 +221,8 @@ cd $JOB_DIR ./train-script.sh ``` +[**SM**: maybe replace your email in the last SBATCH command with a generic one? people may copy and paste directly and you may get some spam] + > **Note** > > The `#SBATCH` lines are SLURM directives. They specify the resources needed for the job, such as the number of nodes, CPUs, memory, etc. For more information see the [SLURM documentation](https://slurm.schedmd.com/sbatch.html). @@ -222,6 +244,11 @@ Now you can submit the batch script with: $ sbatch slurm_train_script.sh Submitted batch job 3445652 ``` + +[**SM**: maybe not required, but clarify the directory this is run from?] + +[**SM**: here I was getting permissions error, maybe you can have a box saying this happens sometimes and to run `chmod +x ./train-script.sh` to fix it?] 
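For concreteness, the submission step might look like the following sketch (it assumes the test-data layout used throughout this guide, with `slurm_train_script.sh` saved next to the unzipped `labels.v001.slp.training_job` folder; adjust the paths to your own project):

```bash
# Sketch only: adjust the paths to your own data.
cd /ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data

# Make both scripts executable, otherwise the job can fail with a permissions error
chmod +x slurm_train_script.sh
chmod +x labels.v001.slp.training_job/train-script.sh

# Submit the batch script to SLURM
sbatch slurm_train_script.sh
```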
+ You may monitor the progress of the job in various ways: - View the status of the queued/running jobs with `squeue`: ```bash @@ -230,6 +257,7 @@ You may monitor the progress of the job in various ways: JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON) 3445652 gpu slurm_ba sirmpila R 23:11 1 gpu-sr670-20 ``` + [**SM**: the meaning of each of the columns could be included in an annex...or maybe just link the docs where it explains it in case someone wants to learn a bit more?] - View status of running/completed jobs with `sacct`: ```bash $ sacct -u @@ -251,6 +279,11 @@ You may monitor the progress of the job in various ways: $ cat slurm.gpu-sr670-20.3445652.err ``` +[**SM**: some nice extras that maybe make users a bit more engaged: +- since sleap generates tensorboard logs (at least in its latest version), it could be nice to include instructions on how to visualise those (it's super satisfying to see the progress) +- is it possible to run `nvidia-smi` in the compute node? It is always cool to see how the gpu is being used :) +] + ### Evaluate the trained models Upon successful completion of the training job, a `models` folder will have been created in the training job directory. It contains one subfolder per training run (by defalut prefixed with the date and time of the run), which holds the trained model files (e.g. `best_model.h5`), their configurations (`training_config.json`) and some evaluation metrics. ```bash From 0462d90cbd8c506ffaf46d36d7a9d90e05f42a3b Mon Sep 17 00:00:00 2001 From: niksirbi Date: Thu, 15 Jun 2023 17:54:17 +0100 Subject: [PATCH 2/7] replaced my email with generic email address) --- SLEAP/HowTo.md | 6 +- SLEAP/presentation/presentation.html | 1218 +++++------------ SLEAP/presentation/presentation.qmd | 8 +- SLEAP/scripts/infer_script.sbatch | 2 +- SLEAP/scripts/train_script.sbatch | 2 +- SLEAP/scripts/train_script_python.sbatch | 2 +- .../scripts/train_script_python_array.sbatch | 2 +- 7 files changed, 379 insertions(+), 861 deletions(-) diff --git a/SLEAP/HowTo.md b/SLEAP/HowTo.md index e5939e3..d728fa0 100644 --- a/SLEAP/HowTo.md +++ b/SLEAP/HowTo.md @@ -206,7 +206,7 @@ An example is provided below, followed by explanations. #SBATCH -o slurm.%N.%j.out # write STDOUT #SBATCH -e slurm.%N.%j.err # write STDERR #SBATCH --mail-type=ALL -#SBATCH --mail-user=n.sirmpilatze@ucl.ac.uk +#SBATCH --mail-user=name@domain.com # Load the SLEAP module module load SLEAP @@ -221,8 +221,6 @@ cd $JOB_DIR ./train-script.sh ``` -[**SM**: maybe replace your email in the last SBATCH command with a generic one? people may copy and paste directly and you may get some spam] - > **Note** > > The `#SBATCH` lines are SLURM directives. They specify the resources needed for the job, such as the number of nodes, CPUs, memory, etc. For more information see the [SLURM documentation](https://slurm.schedmd.com/sbatch.html). @@ -329,7 +327,7 @@ Below is an example SLURM batch script that contains a `sleap-track` call. 
#SBATCH -o slurm.%N.%j.out # write STDOUT #SBATCH -e slurm.%N.%j.err # write STDERR #SBATCH --mail-type=ALL -#SBATCH --mail-user=n.sirmpilatze@ucl.ac.uk +#SBATCH --mail-user=name@domain.com # Load the SLEAP module module load SLEAP diff --git a/SLEAP/presentation/presentation.html b/SLEAP/presentation/presentation.html index 10e4b08..d139a6d 100644 --- a/SLEAP/presentation/presentation.html +++ b/SLEAP/presentation/presentation.html @@ -1,413 +1,140 @@
[regenerated Quarto/reveal.js HTML output omitted; the corresponding source changes are in SLEAP/presentation/presentation.qmd below]
+ + \ No newline at end of file diff --git a/SLEAP/presentation/presentation.qmd b/SLEAP/presentation/presentation.qmd index 118013c..63fa919 100644 --- a/SLEAP/presentation/presentation.qmd +++ b/SLEAP/presentation/presentation.qmd @@ -227,7 +227,7 @@ cat train_script.sbatch #SBATCH -o slurm.%N.%j.out # STDOUT #SBATCH -e slurm.%N.%j.err # STDERR #SBATCH --mail-type=ALL -#SBATCH --mail-user=n.sirmpilatze@ucl.ac.uk +#SBATCH --mail-user=name@domain.com # Load the SLEAP module module load SLEAP @@ -309,7 +309,7 @@ training_log.csv #SBATCH -o slurm.%N.%j.out # STDOUT #SBATCH -e slurm.%N.%j.err # STDERR #SBATCH --mail-type=ALL -#SBATCH --mail-user=n.sirmpilatze@@ucl.ac.uk +#SBATCH --mail-user=name@domain.com # Load the SLEAP module module load SLEAP @@ -432,7 +432,7 @@ if __name__ == "__main__": #SBATCH -o slurm.%N.%j.out # STDOUT #SBATCH -e slurm.%N.%j.err # STDERR #SBATCH --mail-type=ALL -#SBATCH --mail-user=n.sirmpilatze@ucl.ac.uk +#SBATCH --mail-user=name@domain.com # Load the SLEAP module module load SLEAP @@ -475,7 +475,7 @@ What if we want to run the previous script for multiple batch sizes? #SBATCH -o slurm.%N.%j.out # STDOUT #SBATCH -e slurm.%N.%j.err # STDERR #SBATCH --mail-type=ALL -#SBATCH --mail-user=n.sirmpilatze@ucl.ac.uk +#SBATCH --mail-user=name@domain.com #SBATCH --array=0-2 # Load the SLEAP module diff --git a/SLEAP/scripts/infer_script.sbatch b/SLEAP/scripts/infer_script.sbatch index beec10f..cf606d4 100644 --- a/SLEAP/scripts/infer_script.sbatch +++ b/SLEAP/scripts/infer_script.sbatch @@ -9,7 +9,7 @@ #SBATCH -o slurm.%N.%j.out # write STDOUT #SBATCH -e slurm.%N.%j.err # write STDERR #SBATCH --mail-type=ALL -#SBATCH --mail-user=n.sirmpilatze@ucl.ac.uk +#SBATCH --mail-user=name@domain.com # Load the SLEAP module module load SLEAP diff --git a/SLEAP/scripts/train_script.sbatch b/SLEAP/scripts/train_script.sbatch index 51d1eb2..a526c3e 100644 --- a/SLEAP/scripts/train_script.sbatch +++ b/SLEAP/scripts/train_script.sbatch @@ -9,7 +9,7 @@ #SBATCH -o slurm.%N.%j.out # STDOUT #SBATCH -e slurm.%N.%j.err # STDERR #SBATCH --mail-type=ALL -#SBATCH --mail-user=n.sirmpilatze@ucl.ac.uk +#SBATCH --mail-user=name@domain.com # Load the SLEAP module module load SLEAP diff --git a/SLEAP/scripts/train_script_python.sbatch b/SLEAP/scripts/train_script_python.sbatch index d64b8a9..a9ca894 100644 --- a/SLEAP/scripts/train_script_python.sbatch +++ b/SLEAP/scripts/train_script_python.sbatch @@ -9,7 +9,7 @@ #SBATCH -o slurm.%N.%j.out # STDOUT #SBATCH -e slurm.%N.%j.err # STDERR #SBATCH --mail-type=ALL -#SBATCH --mail-user=n.sirmpilatze@ucl.ac.uk +#SBATCH --mail-user=name@domain.com # Load the SLEAP module module load SLEAP diff --git a/SLEAP/scripts/train_script_python_array.sbatch b/SLEAP/scripts/train_script_python_array.sbatch index 1fddabd..aaaf101 100644 --- a/SLEAP/scripts/train_script_python_array.sbatch +++ b/SLEAP/scripts/train_script_python_array.sbatch @@ -9,7 +9,7 @@ #SBATCH -o slurm.%N.%j.out # STDOUT #SBATCH -e slurm.%N.%j.err # STDERR #SBATCH --mail-type=ALL -#SBATCH --mail-user=n.sirmpilatze@ucl.ac.uk +#SBATCH --mail-user=name@domain.com #SBATCH --array=0-2 # Load the SLEAP module From fee389cb4450392c58f26e313c1d68fe064d1344 Mon Sep 17 00:00:00 2001 From: niksirbi Date: Thu, 15 Jun 2023 18:22:07 +0100 Subject: [PATCH 3/7] moved details about SLEAP module in troubleshooting --- SLEAP/HowTo.md | 177 ++++++++++++++++++++++++++++--------------------- 1 file changed, 103 insertions(+), 74 deletions(-) diff --git a/SLEAP/HowTo.md b/SLEAP/HowTo.md index d728fa0..e75f523 100644 --- 
a/SLEAP/HowTo.md +++ b/SLEAP/HowTo.md @@ -6,7 +6,7 @@ This guide explains how to test and use the [SLEAP](https://sleap.ai/) module th - [Table of contents](#table-of-contents) - [Abbreviations](#abbreviations) - [Prerequisites](#prerequisites) - - [Verify access to the HPC cluster and the SLEAP module](#verify-access-to-the-hpc-cluster-and-the-sleap-module) + - [Access to the HPC cluster and SLEAP module](#access-to-the-hpc-cluster-and-sleap-module) - [Install SLEAP on your local PC/laptop](#install-sleap-on-your-local-pclaptop) - [Mount the SWC filesystem on your local PC/laptop](#mount-the-swc-filesystem-on-your-local-pclaptop) - [Model training](#model-training) @@ -15,6 +15,8 @@ This guide explains how to test and use the [SLEAP](https://sleap.ai/) module th - [Evaluate the trained models](#evaluate-the-trained-models) - [Model inference](#model-inference) - [The training-inference cycle](#the-training-inference-cycle) + - [Troubleshooting](#troubleshooting) + - [Problems with loading/using the SLEAP module](#problems-with-loadingusing-the-sleap-module) ## Abbreviations @@ -46,94 +48,40 @@ This guide explains how to test and use the [SLEAP](https://sleap.ai/) module th ## Prerequisites -### Verify access to the HPC cluster and the SLEAP module -Log into the HPC login node (typing your `` both times when prompted): +### Access to the HPC cluster and SLEAP module +Verify that you can access HPC gateway node (typing your `` both times when prompted): ```bash $ ssh @ssh.swc.ucl.ac.uk $ ssh hpc-gw1 ``` -SLEAP should be listed as one of the available modules: -```bash -$ module avail -SLEAP/2023-03-13 -``` -Start an interactive job on a GPU node: -```bash -$ srun -p gpu --gres=gpu:1 --pty bash -i -``` -[**SM**: maybe it would be nice to have an appendix explaining the different flags in all these commands, for people who want to learn more?] +If you are wondering about the two SSH commands, see the Appendix below. -Load the SLEAP module. This might take some seconds, but it should finish without errors. Your terminal prompt may change as a result. -``` -@gpu-350-04:~$ module load SLEAP -(sleap) @gpu-350-04:~$ -``` -The hostname (the part between "@" and ":") will vary depending on which GPU node you were assigned to. -To verify that the module was loaded successfully: +SLEAP should be listed among the available modules: ```bash -$ module list -Currently Loaded Modulefiles: - 1) SLEAP/2023-03-13 -``` -The module is essentially a centrally installed conda environment. When it is loaded, you should be using particular executables for conda and Python. You can verify this by running: -```bash -$ which conda -/ceph/apps/ubuntu-20/packages/SLEAP/2023-03-13/condabin/conda - -$ which python -/nfs/nhome/live//.conda/envs/sleap/bin/python -``` - -[**SM**: In my case I got different paths here, both for `which conda` and `which python`...] +$ module avail +SLEAP/2023-06-15 +SLEAP/2023-03-13 +``` +`SLEAP/2023-03-13` corresponds to `sleap v.1.2.9` whereas `SLEAP/2023-06-15` is `v1.3.0`. We recommend using the latter. +You can load the latest version by running: -Finally we will verify that the `sleap` python package can be imported and can "see" the GPU. We will just follow the [relevant SLEAP instructions](https://sleap.ai/installation.html#testing-that-things-are-working). 
First, start a Python interpreter: ```bash -$ python -``` -Next, run the following Python commands (shown below with their expected outputs: -```python ->>> import sleap + $ module load SLEAP + ``` +If you want to load a specific version, you can do so by typing the full module name, including the date e.g. `module load SLEAP/2023-03-13` ->>> sleap.versions() -SLEAP: 1.2.9 -TensorFlow: 2.6.3 -Numpy: 1.19.5 -Python: 3.7.12 -OS: Linux-5.4.0-139-generic-x86_64-with-debian-bullseye-sid - ->>> sleap.system_summary() -GPUs: 1/1 available - Device: /physical_device:GPU:0 - Available: True - Initalized: False - Memory growth: None - ->>> import tensorflow as tf - ->>> print(tf.config.list_physical_devices('GPU')) -[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] +If a module has been successfully loaded, it will be listed when you run `module list`: +```bash +$ module list +Currently Loaded Modulefiles: + 1) SLEAP/2023-06-15 ``` -[**SM**: In my case I got a module not found error when importing sleap in the interactive node :( ] - -> **Warning** -> -> The `import sleap` command may take some time to run (more than a minute). This is normal. Subsequent imports should be faster. +If you have troubles with loading the SLEAP module, see the [Troubleshooting section](#problems-with-loadingusing-the-sleap-module). -If all is as expected, you can exit the Python interpreter, and then exit the GPU node -```python ->>> exit() -$ exit -``` -To completely exit the HPC cluster, you will need to logout of the SSH session twice: -[**SM**: maybe in the appendix explain why we need to do this twice?] -```bash -$ logout -$ logout -``` ### Install SLEAP on your local PC/laptop While you can delegate the GPU-intensive work to the HPC cluster, you will still need to do some steps, such as labelling frames, on the SLEAP graphical user interface. Thus, you also need to install SLEAP on your local PC/laptop. @@ -375,3 +323,84 @@ Now that you have some predictions, you can keep improving your models by repeat - Export a new training job `labels.v002.slp.training_job` (you may reuse the training configurations from `v001`) - Repeat the training-inference cycle until satisfied +## Troubleshooting + +### Problems with loading/using the SLEAP module + +In this section, we will describe how to test that the SLEAP module is loaded correctly for you and that it can see the GPU. + +Login to the HPC cluster as described [above](#access-to-the-hpc-cluster-and-sleap-module). + +Start an interactive job on a GPU node: +```bash +$ srun -p gpu --gres=gpu:1 --pty bash -i +``` +[**SM**: maybe it would be nice to have an appendix explaining the different flags in all these commands, for people who want to learn more?] + +Load the SLEAP module. +```bash +$ module load SLEAP +``` + +To verify that the module was loaded successfully: +```bash +$ module list +Currently Loaded Modulefiles: + 1) SLEAP/2023-06-15 +``` +The module is essentially a centrally installed conda environment. When it is loaded, you should be using particular executables for conda and Python. You can verify this by running: +```bash +$ which conda +/ceph/apps/ubuntu-20/packages/SLEAP/2023-06-15/bin/conda + +$ which python +/ceph/apps/ubuntu-20/packages/SLEAP/2023-06-15/bin/python +``` + +Finally we will verify that the `sleap` python package can be imported and can "see" the GPU. We will just follow the [relevant SLEAP instructions](https://sleap.ai/installation.html#testing-that-things-are-working). 
First, start a Python interpreter: +```bash +$ python +``` +Next, run the following Python commands (shown below with their expected outputs: +```python +>>> import sleap + +>>> sleap.versions() +SLEAP: 1.2.9 +TensorFlow: 2.6.3 +Numpy: 1.19.5 +Python: 3.7.12 +OS: Linux-5.4.0-139-generic-x86_64-with-debian-bullseye-sid + +>>> sleap.system_summary() +GPUs: 1/1 available + Device: /physical_device:GPU:0 + Available: True + Initalized: False + Memory growth: None + +>>> import tensorflow as tf + +>>> print(tf.config.list_physical_devices('GPU')) +[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] +``` +[**SM**: In my case I got a module not found error when importing sleap in the interactive node :( ] + + +> **Warning** +> +> The `import sleap` command may take some time to run (more than a minute). This is normal. Subsequent imports should be faster. + +If all is as expected, you can exit the Python interpreter, and then exit the GPU node +```python +>>> exit() +$ exit +``` +To completely exit the HPC cluster, you will need to logout of the SSH session twice: +[**SM**: maybe in the appendix explain why we need to do this twice?] +```bash +$ logout +$ logout +``` + + From 9aa891450714a368b101dc8b6e5692712593cdef Mon Sep 17 00:00:00 2001 From: niksirbi Date: Wed, 9 Aug 2023 18:53:10 +0100 Subject: [PATCH 4/7] added appendices --- SLEAP/HowTo.md | 214 +++++++++++++++++++++++++++++++++++++------------ 1 file changed, 164 insertions(+), 50 deletions(-) diff --git a/SLEAP/HowTo.md b/SLEAP/HowTo.md index e75f523..29c3783 100644 --- a/SLEAP/HowTo.md +++ b/SLEAP/HowTo.md @@ -1,5 +1,6 @@ # How to use the SLEAP module -This guide explains how to test and use the [SLEAP](https://sleap.ai/) module that is installed on the SWC's HPC cluster for running training and/or inference jobs. +This guide explains how to test and use the [SLEAP](https://sleap.ai/) module that is +installed on the SWC's HPC cluster for running training and/or inference jobs. ## Table of contents - [How to use the SLEAP module](#how-to-use-the-sleap-module) @@ -16,7 +17,10 @@ This guide explains how to test and use the [SLEAP](https://sleap.ai/) module th - [Model inference](#model-inference) - [The training-inference cycle](#the-training-inference-cycle) - [Troubleshooting](#troubleshooting) - - [Problems with loading/using the SLEAP module](#problems-with-loadingusing-the-sleap-module) + - [Problems with the SLEAP module](#problems-with-the-sleap-module) + - [Appendix](#appendix) + - [SLURM arguments primer](#slurm-arguments-primer) + - [Why do we SSH twice?](#why-do-we-ssh-twice) ## Abbreviations @@ -54,17 +58,17 @@ Verify that you can access HPC gateway node (typing your `` both t $ ssh @ssh.swc.ucl.ac.uk $ ssh hpc-gw1 ``` -If you are wondering about the two SSH commands, see the Appendix below. +If you are wondering about the two SSH commands, see the Appendix for [Why do we SSH twice?](#why-do-we-ssh-twice). SLEAP should be listed among the available modules: ```bash $ module avail -SLEAP/2023-06-15 +SLEAP/2023-08-01 SLEAP/2023-03-13 ``` -`SLEAP/2023-03-13` corresponds to `sleap v.1.2.9` whereas `SLEAP/2023-06-15` is `v1.3.0`. We recommend using the latter. +`SLEAP/2023-03-13` corresponds to `sleap v.1.2.9` whereas `SLEAP/2023-08-01` is `v1.3.1`. We recommend using the latter. You can load the latest version by running: @@ -73,27 +77,34 @@ You can load the latest version by running: ``` If you want to load a specific version, you can do so by typing the full module name, including the date e.g. 
`module load SLEAP/2023-03-13` -If a module has been successfully loaded, it will be listed when you run `module list`: +If a module has been successfully loaded, it will be listed when you run `module list`, +along with other modules it may depend on: ```bash $ module list Currently Loaded Modulefiles: - 1) SLEAP/2023-06-15 + 1) cuda/11.8 2) SLEAP/2023-08-01 ``` -If you have troubles with loading the SLEAP module, see the [Troubleshooting section](#problems-with-loadingusing-the-sleap-module). +If you have troubles with loading the SLEAP module, see the +[Troubleshooting section](#problems-with-the-sleap-module). ### Install SLEAP on your local PC/laptop While you can delegate the GPU-intensive work to the HPC cluster, you will still need to do some steps, such as labelling frames, on the SLEAP graphical user interface. Thus, you also need to install SLEAP on your local PC/laptop. -We recommend following the official [SLEAP installation guide](https://sleap.ai/installation.html). To be on the safe side, ensure that the local installation is the same version as the one on the cluster - version `1.2.9`. +We recommend following the official [SLEAP installation guide](https://sleap.ai/installation.html). To be on the safe side, ensure that the local installation is the same version as the one on the cluster. ### Mount the SWC filesystem on your local PC/laptop The rest of this guide assumes that you have mounted the SWC filesystem on your local PC/laptop. If you have not done so, please follow the relevant instructions on the [SWC internal wiki](https://wiki.ucl.ac.uk/display/SSC/SWC+Storage+Platform+Overview). We will also assume that the data you are working with are stored in a `ceph` or `winstor` directory to which you have access to. In the rest of this guide, we will use the path `/ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data` which contains a SLEAP project for test purposes. You should replace this with the path to your own data. -[**SM**: it could be nice to highlight that the cluster has a fast access to ceph. and maybe include notes on how to do file transfer otherwise (`scp` or equivalent?) -- this might be less relevant tho] +> **Note** +> +> The cluster has fast acess to data stored in `ceph` and `winstor` filesystems. If your data is stored elsewhere, make sure to transfer it to `ceph` or `winstor` before running the job. You can use tools such as [`rsync`](https://linux.die.net/man/1/rsync) to copy data from your local machine to `ceph` via an ssh connection. For example: +> ```bash +> $ rsync -avz @ssh.swc.ucl.ac.uk:/ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data +> ``` ## Model training This will consist of two parts - [preparing a training job](#prepare-the-training-job) (on your local SLEAP installation) and [running a training job](#run-the-training-job) (on the HPC cluster's SLEAP module). Some evaluation metrics for the trained models can be [viewed via the SLEAP GUI](#evaluate-the-trained-models) on your local SLEAP installation). @@ -102,8 +113,7 @@ This will consist of two parts - [preparing a training job](#prepare-the-trainin - Follow the SLEAP instructions for [Creating a Project](https://sleap.ai/tutorials/new-project.html) and [Initial Labelling](https://sleap.ai/tutorials/initial-labeling.html). Ensure that the project file (e.g. `labels.v001.slp`) is saved in the mounted SWC filesystem (as opposed to your local filesystem). - Next, follow the instructions in [Remote Training](https://sleap.ai/guides/remote.html#remote-training), i.e. 
"Predict" -> "Run Training…" -> "Export Training Job Package…". - For selecting the right configuration parameters, see [Configuring Models](https://sleap.ai/guides/choosing-models.html#) and [Troubleshooting Workflows](https://sleap.ai/guides/troubleshooting-workflows.html) - - Set the "Predict On" parameter to "nothing". Remote training and inference (prediction) are easiest to run separately on the HPC Cluster. - - [**SM**: also: unselect 'visualize predictions' in training settings? for me it's enabled by default IIRC. It could also be nice to explain training vs inference a bit earlier on in the guide?] + - Set the "Predict On" parameter to "nothing". Remote training and inference (prediction) are easiest to run separately on the HPC Cluster. Also unselect "visualize predictions" in training settings, if it's enabled by default. - If you are working with a top-down camera view, set the "Rotation Min Angle" and "Rotation Max Angle" to -180 and 180 respectively in the "Augmentation" section. - Make sure to save the exported training job package (e.g. `labels.v001.slp.training_job.zip`) in the mounted SWC filesystem, ideally in the same directory as the project file. - Unzip the training job package. This will create a folder with the same name (minus the `.zip` extension). This folder contains everything needed to run the training job on the HPC cluster. @@ -135,11 +145,11 @@ The precise commands will depend on the model configuration you chose in SLEAP. > > Although the "Top-Down" configuration was designed with multiple animals in mind, it can also be used for single-animal videos. It makes sense to use it for videos where the animal occupies a relatively small portion of the frame - see [Troubleshooting Workflows](https://sleap.ai/guides/troubleshooting-workflows.html) for more info. -Next you need to create a SLURM batch script, which will schedule the training job on the HPC cluster. Create a new file called `slurm_train_script.sh` (You can do this in the terminal with `nano`/`vim` or in a text editor of your choice on your local PC/laptop). - -[**SM**: maybe include the commands for this too? e.g. `nano slurm_train_script.sh`. It may also be good to add in an appendix the basic commands to save, and exit (especially relevant for `vim`)] +Next you need to create a SLURM batch script, which will schedule the training job on the HPC cluster. Create a new file called `slurm_train_script.sh` (You can do this in the terminal with `nano/vim` or in a text editor of your choice on your local PC/laptop). Here we create the script in the same folder as the training job, but you can save it anywhere you want, or even keep track of it with `git`. -[**SM**: to be super clear you may want to clarify here that there is a change of directory when creating the `slurm_train_script.sh`. In the last command we were inside `labels.v001.slp.training_job`, I assume we'd like the slurm bash script outside that directory] +```bash +nano slurm_train_script.sh +``` An example is provided below, followed by explanations. ```bash @@ -169,31 +179,30 @@ cd $JOB_DIR ./train-script.sh ``` +In `nano`, you can save the file by pressing `Ctrl+O` and exit by pressing `Ctrl+X`. + > **Note** > -> The `#SBATCH` lines are SLURM directives. They specify the resources needed for the job, such as the number of nodes, CPUs, memory, etc. For more information see the [SLURM documentation](https://slurm.schedmd.com/sbatch.html). +> The `#SBATCH` lines are SLURM directives. 
They specify the resources needed for the job, such as the number of nodes, CPUs, memory, etc. A primer on the most useful SLURM arguments is provided in the [appendix](#slurm-arguments-primer). +> For more information see the [SLURM documentation](https://slurm.schedmd.com/sbatch.html). > -> - the `-p gpu` and `--gres gpu:1` options ensure that your job will run on a GPU. If you want to request a specific GPU type, you can do so with the syntax `--gres gpu:rtx2080:1`. You can view the available GPU types on the [SWC internal wiki](https://wiki.ucl.ac.uk/display/SSC/CPU+and+GPU+Platform+architecture). -> - the `--mem` option refers to CPU memory (RAM), not the GPU one. However, the jobs often contain steps that use the RAM. -> - the `-t` option should be your time estimate for how long the job will take. If it's too short, SLURM will terminate the job before it's over. If it's too long, it may take some time to be scheduled (depending on resource availability). With time, you will build experience on how long various jobs take. It's best to start by running small jobs (e.g. reduce the number of epochs) and scale up gradually. -> - `-o` and `-e` allow you to specify files to which the standard output and error will be directed. In the example scipt above, the filenames are set to contain the node name (`%N`) and the job ID (`$j`). -> - The `--mail-type` and `--mail-user` options allow you to get email notifications about the progress of your job. Currently email notifications are not working on the SWC HPC cluster, but this might be fixed in the future. -> +> The `#` lines are comments. They are not executed by SLURM, but they are useful for explaining the script. +> > The `module load SLEAP` line loads the SLEAP module, which we checked earlier. > -> The `cd` line changes the working directory to the training job folder. This is necessary because the `train-script.sh` file contains relative paths to th model configuration and the project file. +> The `cd` line changes the working directory to the training job folder. This is necessary because the `train-script.sh` file contains relative paths to the model configuration and the project file. > > The `./train-script.sh` line runs the training job (executes the containe commands) -Now you can submit the batch script with: +Now you can submit the batch script via running the following command (in the same directory as the script): ```bash $ sbatch slurm_train_script.sh Submitted batch job 3445652 ``` - -[**SM**: maybe not required, but clarify the directory this is run from?] - -[**SM**: here I was getting permissions error, maybe you can have a box saying this happens sometimes and to run `chmod +x ./train-script.sh` to fix it?] +> **Warning** +> +> If you are getting a permission error, make sure to make the script files executable by running `chmod +x train-script.sh` and `chmod +x slurm_train_script.sh` in the terminal. +> If the scripts are not in the same folder, you will need to specify the full path: `chmod +x /path/to/script.sh` You may monitor the progress of the job in various ways: - View the status of the queued/running jobs with `squeue`: @@ -325,17 +334,22 @@ Now that you have some predictions, you can keep improving your models by repeat ## Troubleshooting -### Problems with loading/using the SLEAP module +### Problems with the SLEAP module -In this section, we will describe how to test that the SLEAP module is loaded correctly for you and that it can see the GPU. 
+In this section, we will describe how to test that the SLEAP module is loaded +correctly for you and that it can use the available GPUs. Login to the HPC cluster as described [above](#access-to-the-hpc-cluster-and-sleap-module). -Start an interactive job on a GPU node: +Start an interactive job on a GPU node. This step is necessary, because we need +to test the module's access to the GPU. ```bash -$ srun -p gpu --gres=gpu:1 --pty bash -i +$ srun -p fast --gres=gpu:1 --pty bash -i ``` -[**SM**: maybe it would be nice to have an appendix explaining the different flags in all these commands, for people who want to learn more?] +> **Note** +> +> The `-i` stands for "interactive", while `--pty` is short for "pseudo-terminal". +> Taken together, the above command will start an interactive bash terminal session on a node of the "fast" partition, equipped with 1 GPU. Load the SLEAP module. ```bash @@ -346,46 +360,50 @@ To verify that the module was loaded successfully: ```bash $ module list Currently Loaded Modulefiles: - 1) SLEAP/2023-06-15 + 1) SLEAP/2023-08-01 ``` -The module is essentially a centrally installed conda environment. When it is loaded, you should be using particular executables for conda and Python. You can verify this by running: -```bash -$ which conda -/ceph/apps/ubuntu-20/packages/SLEAP/2023-06-15/bin/conda +You can essentially think of the module as a centrally installed conda environment. +When it is loaded, you should be using a particular Python executable. +You can verify this by running: +```bash $ which python -/ceph/apps/ubuntu-20/packages/SLEAP/2023-06-15/bin/python +/ceph/apps/ubuntu-20/packages/SLEAP/2023-08-01/bin/python ``` -Finally we will verify that the `sleap` python package can be imported and can "see" the GPU. We will just follow the [relevant SLEAP instructions](https://sleap.ai/installation.html#testing-that-things-are-working). First, start a Python interpreter: +Finally we will verify that the `sleap` python package can be imported and can +"see" the GPU. We will mostly just follow the +[relevant SLEAP instructions](https://sleap.ai/installation.html#testing-that-things-are-working). +First, start a Python interpreter: ```bash $ python ``` -Next, run the following Python commands (shown below with their expected outputs: +Next, run the following Python commands (shown below with their expected outputs): ```python >>> import sleap >>> sleap.versions() -SLEAP: 1.2.9 -TensorFlow: 2.6.3 -Numpy: 1.19.5 +SLEAP: 1.3.1 +TensorFlow: 2.8.4 +Numpy: 1.21.6 Python: 3.7.12 -OS: Linux-5.4.0-139-generic-x86_64-with-debian-bullseye-sid +OS: Linux-5.4.0-109-generic-x86_64-with-debian-bullseye-sid >>> sleap.system_summary() GPUs: 1/1 available Device: /physical_device:GPU:0 Available: True Initalized: False - Memory growth: None + Memory growth: None >>> import tensorflow as tf >>> print(tf.config.list_physical_devices('GPU')) [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] -``` -[**SM**: In my case I got a module not found error when importing sleap in the interactive node :( ] +>>> tf.constant("Hello world!") + +``` > **Warning** > @@ -397,10 +415,106 @@ If all is as expected, you can exit the Python interpreter, and then exit the GP $ exit ``` To completely exit the HPC cluster, you will need to logout of the SSH session twice: -[**SM**: maybe in the appendix explain why we need to do this twice?] ```bash $ logout $ logout ``` +See [Why do we SSH twice?](#why-do-we-ssh-twice) in the Appendix for an explanation. 
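For a quicker check that does not require keeping an interactive session open, a rough sketch like the following should also work (it assumes the `gpu` partition and that your environment, including the loaded module, is propagated to the `srun` job, which is the default on most SLURM setups):

```bash
# Sketch only: non-interactive GPU checks on the "gpu" partition.
# Confirm that a GPU is actually allocated to the job:
srun -p gpu --gres=gpu:1 nvidia-smi

# Confirm that the SLEAP module can "see" the GPU:
module load SLEAP
srun -p gpu --gres=gpu:1 python -c "import sleap; sleap.system_summary()"
```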
+ +## Appendix + +### SLURM arguments primer + +Here are the most important SLURM arguments used in the above examples +in conjunction with `sbatch` or `srun`. + +**Partition (Queue)** +- Name: `--partition` +- Alias: `-p` +- Description: Specifies the partition (or queue) to submit the job to. In this case, the job will be submitted to the "gpu" partition. +- Example values: `gpu`, `cpu`, `fast`, `medium` + +**Job Name** +- Name: `--job-name` +- Alias: `-J` +- Description: Specifies a name for the job, which will appear in various SLURM commands and logs, making it easier to identify the job (especially when you have multiple jobs queued up) +- Example values: `training_run_24` + +**Number of Nodes** +- Name: `--nodes` +- Alias: `-N` +- Description: Defines the number of nodes required for the job. +- Example values: `1` +- Note: This should always be `1`, unless you really know what you're doing + +**Number of Cores** +- Name: `--ntasks` +- Alias: `-n` +- Description: Defines the number of cores (or tasks) required for the job. +- Example values: `1`, `4`, `8` + +**Memory Pool for All Cores** +- Name: `--mem` +- Description: Specifies the total amount of memory (RAM) required for the job across all cores (per node) +- Example values: `8G`, `16G`, `32G` + +**Time Limit** +- Name: `--time` +- Alias: `-t` +- Description: Sets the maximum time the job is allowed to run. The format is D-HH:MM, where D is days, HH is hours, and MM is minutes. +- Example values: `0-01:00` (1 hour), `0-04:00` (4 hours), `1-00:00` (1 day). +- Note: If the job exceeds the time limit, it will be terminated by SLURM. On the other hand, avoid requesting way more time than what your job needs, as this may delay its scheduling (depending on resource availability). + +**Generic Resources (GPUs)** +* Name: `--gres` +* Description: Requests generic resources, such as GPUs. +* Example values: `gpu:1`, `gpu:rtx2080:1`, `gpu:rtx5000:1`, `gpu:a100_2g.10gb:1` +* Note: No GPU will be allocated to you unless you specify it via the `--gres` argument (ecen if you are on the "GPU" partition. To request 1 GPU of any kind, use `--gres gpu:1`. To request a specific GPU type, you have to include its name, e.g. `--gres gpu:rtx2080:1`. You can view the available GPU types on the [SWC internal wiki](https://wiki.ucl.ac.uk/display/SSC/CPU+and+GPU+Platform+architecture). + +**Standard Output File** +- Name: `--output` +- Alias: `-o` +- Description: Defines the file where the standard output (STDOUT) will be written. In the examples scripts, it's set to slurm.%N.%j.out, where %N is the node name and %j is the job ID. +- Example values: `slurm.%N.%j.out`, `slurm.MyAwesomeJob.out` +- Note: this file contains the output of the commands executed by the job (i.e. the messages that normally gets printed on the terminal). + +**Standard Error File** +- Name: `--error` +- Alias: `-e` +- Description: Specifies the file where the standard error (STDERR) will be written. In the examples, it's set to slurm.%N.%j.err, where %N is the node name and %j is the job ID. +- Example values: `slurm.%N.%j.err`, `slurm.MyAwesomeJob.err` +- Note: this file is very useful for debugging, as it contains all the error messages produced by the commands executed by the job. + +**Email Notifications** +* Name: `--mail-type` +* Description: Defines the conditions under which the user will be notified by email. 
+Example values: `ALL`, `BEGIN`, `END`, `FAIL` + +**Email Address** +* Name: `--mail-user` +* Description: Specifies the email address to which notifications will be sent. +* Note: currently this feature does not work on the SWC HPC cluster. + +**Array jobs** +* Name: `--array` +* Description: Job array index values (a list of integers in increasing order). The task index can be accessed via the `SLURM_ARRAY_TASK_ID` environment variable. +* Example values: `--array=1-10`, `--array=1-100%5` (100 jobs, but only 5 of them will be allowed to run in parallel at any given time). +* Note: if an array consists of many jobs, using the `%` syntax to limit the maximum number of parallel jobs is recommended to prevent overloading the cluster. + + +### Why do we SSH twice? + +We first need to distinguish the different types of nodes on the SWC HPC system: + +- the *bastion* node (or "jump host") - `ssh.swc.ucl.ac.uk`. This serves as a single entry point to the cluster from external networks. By funneling all external SSH connections through this node, it's easier to monitor, log, and control access, reducing the attack surface. The *bastion* node has very little processing power. It can be used to submit and monitor SLURM jobs, but it shouldn't be used for anything else. +- the *gateway* node - `hpc-gw1`. This is a more powerful machine and can be used for light processing, such as editing your scripts, creating and copying files etc. However don't use it for anything computationally intensive, since this node's resources are shared across all users. +- the *compute* nodes - `enc1-node10`, `gpu-sr670-21`, etc. These are the machinces that actually run the jobs we submit, either interactively via `srun` or via batch scripts submitted with `sbatch`. + + +The home directory, as well as the locations where filesystems like `ceph` are mounted, are shared across all of the nodes. + +The first `ssh` command - `ssh @ssh.swc.ucl.ac.uk` only takes you to the *bastion* node. A second command - `ssh hpc-gw1` - is needed to reach the *gateway* node. +Similarly, if you are on the *gateway* node, typing `logout` once will only get you one layer outo the *bastion* node. You need to type `logout` again to exit the *bastion* node and return to your local machine. +The *compute* nodes should only be accessed via the SLURM `srun` or `sbatch` commands. This can be done from either the *bastion* or the *gateway* nodes. If you are running an interactive job on one of the *compute* nodes, you can terminate it by typing `exit`. This will return you to the node from which you entered. From 437a6962787e07a4f111d7840ce93d6863be7874 Mon Sep 17 00:00:00 2001 From: niksirbi Date: Thu, 10 Aug 2023 14:49:20 +0100 Subject: [PATCH 5/7] add graph describing HPC access --- SLEAP/HowTo.md | 22 +++++++++++++++++++++- 1 file changed, 21 insertions(+), 1 deletion(-) diff --git a/SLEAP/HowTo.md b/SLEAP/HowTo.md index 29c3783..1cdbb22 100644 --- a/SLEAP/HowTo.md +++ b/SLEAP/HowTo.md @@ -328,7 +328,7 @@ You can use the SLEAP GUI on your local machine to load and view the predictions Now that you have some predictions, you can keep improving your models by repeating the training-inference cycle. 
The basic steps are: - Manually correct some of the predictions: see [Prediction-assisted labeling](https://sleap.ai/tutorials/assisted-labeling.html) - Merge corrected labels into the initial training set: see [Merging guide](https://sleap.ai/guides/merging.html) -- Save the merged training set as`labels.v002.slp` +- Save the merged training set as `labels.v002.slp` - Export a new training job `labels.v002.slp.training_job` (you may reuse the training configurations from `v001`) - Repeat the training-inference cycle until satisfied @@ -510,6 +510,26 @@ We first need to distinguish the different types of nodes on the SWC HPC system: - the *gateway* node - `hpc-gw1`. This is a more powerful machine and can be used for light processing, such as editing your scripts, creating and copying files etc. However don't use it for anything computationally intensive, since this node's resources are shared across all users. - the *compute* nodes - `enc1-node10`, `gpu-sr670-21`, etc. These are the machinces that actually run the jobs we submit, either interactively via `srun` or via batch scripts submitted with `sbatch`. +```mermaid +%%{init: {'theme':'neutral'}}%% +flowchart LR + + L(fa:fa-laptop Your Machine ) -->|"ssh \n user@ssh.swc.ucl.ac.uk"| B(fa:fa-server Bastion) + B -->|"ssh \n hpc-gw1"| G(fa:fa-server Gateway) + B -->|"srun \n sbatch"| S{fa:fa-network-wired SLURM} + G -->|"srun \n sbatch"| S + + subgraph "Compute Nodes" + N1(fa:fa-server 1) + N2(fa:fa-server 2) + N3(fa:fa-server N) + end + + S --> N1 + S --> N2 + S --> N3 +``` + The home directory, as well as the locations where filesystems like `ceph` are mounted, are shared across all of the nodes. From 22f9cb395659967f40e97791642c38efe1fcb184 Mon Sep 17 00:00:00 2001 From: niksirbi Date: Thu, 10 Aug 2023 14:54:23 +0100 Subject: [PATCH 6/7] try embedding mermaid graph with img link --- SLEAP/HowTo.md | 20 +------------------- 1 file changed, 1 insertion(+), 19 deletions(-) diff --git a/SLEAP/HowTo.md b/SLEAP/HowTo.md index 1cdbb22..12e54e0 100644 --- a/SLEAP/HowTo.md +++ b/SLEAP/HowTo.md @@ -510,25 +510,7 @@ We first need to distinguish the different types of nodes on the SWC HPC system: - the *gateway* node - `hpc-gw1`. This is a more powerful machine and can be used for light processing, such as editing your scripts, creating and copying files etc. However don't use it for anything computationally intensive, since this node's resources are shared across all users. - the *compute* nodes - `enc1-node10`, `gpu-sr670-21`, etc. These are the machinces that actually run the jobs we submit, either interactively via `srun` or via batch scripts submitted with `sbatch`. 
-```mermaid -%%{init: {'theme':'neutral'}}%% -flowchart LR - - L(fa:fa-laptop Your Machine ) -->|"ssh \n user@ssh.swc.ucl.ac.uk"| B(fa:fa-server Bastion) - B -->|"ssh \n hpc-gw1"| G(fa:fa-server Gateway) - B -->|"srun \n sbatch"| S{fa:fa-network-wired SLURM} - G -->|"srun \n sbatch"| S - - subgraph "Compute Nodes" - N1(fa:fa-server 1) - N2(fa:fa-server 2) - N3(fa:fa-server N) - end - - S --> N1 - S --> N2 - S --> N3 -``` +[![](https://mermaid.ink/img/pako:eNp9Uk2LwjAQ_SshULqCFdRbD8viLnhRD5Y9LOQypqMNtknJxxap_e87tQrWw-aSTN68mTcfLZcmR57yKGqVVj5lbewLrDBOY43BWyjjrosioY-laWQB1rPNXmihGZ3N2xHSIyQl1N7U7McEy7YgC6WRTViSvF8Fd65ggtyDQ_tBxsw1chZkOQO6zoJf2eoehRx-0bIVOK-MngwpVi9hilomp2be89Zj3ho8NnB55dmgb0R3AC-Lnpe1A0-jb4w9J42ymLNs873fdgN5_Q_5UbsLh5OFmlTxT1PVwSPbUSud4APen918LHE-ecIWY2zxjC3H2O6Ooc4f6bNeIiUYWYuRtRSaT3mFtgKV04jbHhX8Nl7BU3reJ9xL7sgVgjfZRUueehtwykOdU0-_FFChFSc9paNfzJU3djuszW17uj-fELZE?type=png)](https://mermaid.live/edit#pako:eNp9Uk2LwjAQ_SshULqCFdRbD8viLnhRD5Y9LOQypqMNtknJxxap_e87tQrWw-aSTN68mTcfLZcmR57yKGqVVj5lbewLrDBOY43BWyjjrosioY-laWQB1rPNXmihGZ3N2xHSIyQl1N7U7McEy7YgC6WRTViSvF8Fd65ggtyDQ_tBxsw1chZkOQO6zoJf2eoehRx-0bIVOK-MngwpVi9hilomp2be89Zj3ho8NnB55dmgb0R3AC-Lnpe1A0-jb4w9J42ymLNs873fdgN5_Q_5UbsLh5OFmlTxT1PVwSPbUSud4APen918LHE-ecIWY2zxjC3H2O6Ooc4f6bNeIiUYWYuRtRSaT3mFtgKV04jbHhX8Nl7BU3reJ9xL7sgVgjfZRUueehtwykOdU0-_FFChFSc9paNfzJU3djuszW17uj-fELZE) The home directory, as well as the locations where filesystems like `ceph` are mounted, are shared across all of the nodes. From dc5181902ac49651faa8f5f6a975878c76ed441a Mon Sep 17 00:00:00 2001 From: niksirbi Date: Thu, 10 Aug 2023 17:28:46 +0100 Subject: [PATCH 7/7] include the flowchart image in the repo --- SLEAP/HowTo.md | 3 +-- img/swc_hpc_access_flowchart.png | Bin 0 -> 42921 bytes 2 files changed, 1 insertion(+), 2 deletions(-) create mode 100644 img/swc_hpc_access_flowchart.png diff --git a/SLEAP/HowTo.md b/SLEAP/HowTo.md index 12e54e0..039506b 100644 --- a/SLEAP/HowTo.md +++ b/SLEAP/HowTo.md @@ -510,8 +510,7 @@ We first need to distinguish the different types of nodes on the SWC HPC system: - the *gateway* node - `hpc-gw1`. This is a more powerful machine and can be used for light processing, such as editing your scripts, creating and copying files etc. However don't use it for anything computationally intensive, since this node's resources are shared across all users. - the *compute* nodes - `enc1-node10`, `gpu-sr670-21`, etc. These are the machinces that actually run the jobs we submit, either interactively via `srun` or via batch scripts submitted with `sbatch`. 
-[![](https://mermaid.ink/img/pako:eNp9Uk2LwjAQ_SshULqCFdRbD8viLnhRD5Y9LOQypqMNtknJxxap_e87tQrWw-aSTN68mTcfLZcmR57yKGqVVj5lbewLrDBOY43BWyjjrosioY-laWQB1rPNXmihGZ3N2xHSIyQl1N7U7McEy7YgC6WRTViSvF8Fd65ggtyDQ_tBxsw1chZkOQO6zoJf2eoehRx-0bIVOK-MngwpVi9hilomp2be89Zj3ho8NnB55dmgb0R3AC-Lnpe1A0-jb4w9J42ymLNs873fdgN5_Q_5UbsLh5OFmlTxT1PVwSPbUSud4APen918LHE-ecIWY2zxjC3H2O6Ooc4f6bNeIiUYWYuRtRSaT3mFtgKV04jbHhX8Nl7BU3reJ9xL7sgVgjfZRUueehtwykOdU0-_FFChFSc9paNfzJU3djuszW17uj-fELZE?type=png)](https://mermaid.live/edit#pako:eNp9Uk2LwjAQ_SshULqCFdRbD8viLnhRD5Y9LOQypqMNtknJxxap_e87tQrWw-aSTN68mTcfLZcmR57yKGqVVj5lbewLrDBOY43BWyjjrosioY-laWQB1rPNXmihGZ3N2xHSIyQl1N7U7McEy7YgC6WRTViSvF8Fd65ggtyDQ_tBxsw1chZkOQO6zoJf2eoehRx-0bIVOK-MngwpVi9hilomp2be89Zj3ho8NnB55dmgb0R3AC-Lnpe1A0-jb4w9J42ymLNs873fdgN5_Q_5UbsLh5OFmlTxT1PVwSPbUSud4APen918LHE-ecIWY2zxjC3H2O6Ooc4f6bNeIiUYWYuRtRSaT3mFtgKV04jbHhX8Nl7BU3reJ9xL7sgVgjfZRUueehtwykOdU0-_FFChFSc9paNfzJU3djuszW17uj-fELZE) - +![](../img/swc_hpc_access_flowchart.png) The home directory, as well as the locations where filesystems like `ceph` are mounted, are shared across all of the nodes. diff --git a/img/swc_hpc_access_flowchart.png b/img/swc_hpc_access_flowchart.png new file mode 100644 index 0000000000000000000000000000000000000000..8594302d96706d1121ee33ad82da8679df39a87e GIT binary patch literal 42921 zcmeFZX*iW#-v_)UGZhh`jAfoHLn2e8$ru%x3za!!mNChcvB{J|DoTb*W?T>#{I%Gg2rNmhF1l zMik0YB?^V=3_UIWG9^x7#Fzk{#SAqvI!m1$fH zOT7WT_PEIa+Y7ZF%87P6#`uS$TqQ+<=(tvJ)0RCgsh1ZpX&{aXVlk*SvZD)US&1qBa(A(ykvZN1c zeLiKoCuwHSN(nu=^yDgq!btCg(=y++Q^n5pn6!<(tF42y?=d%=FNLD2?(1e_cf`Sq z-`3%2U~=jY*I?6=#*&hLnwvb~_X8l$SO3J!40 z!OMo<_t;SvPZi%Sg1?Weg8wIfmJ#It{SmJtTLjH_8SrbldN}ZJklr9IE2ZP>lVgVSh-)NPF8qhrqwK2s(Osxv9v=`1trp`zT1edK{LKQ&v`%k(HN`mzTmPq&!c! zc-i<$xp)eZL;N|0wu7gghm)I^ldB6qIi`)R>v69wf`WLT|DS`CTkx-=yLkRH2pA3- z@{Ww0w5-g(f85LI(ErQF$vgk`aq?vq11DdHqvqO9#~fTdaSB@moo5K!fPWE1o|MtEAdPmQ}>98ZN<-dJrkB5^3MuA)}z9c9^#!}{= z!Nk3h`Pa=MBlQ3M>pws8e;VWe4A*~#>;E(Y|ECZCXLkK(xc*Ng@PGR7e`eSJzlMwP zzu=vN3oxAzz}r6a=lcQtmU?P#H=)NXfZjfuLgA-u*VZucef9Z`kGb)O!A)P!7i(rVcA&6&6vFx@=Q-Naa`yx^cToy=G3Qo z8O=yFxv1e8q`ksWV`YVUPYRRAld~7jez3o|`KdvYo-o@A);}*XS=MVI(d5NW6MS)9 zj-pMChBkRIYMnCOs_j}65*9tOYi8x|Pbbz{~DBFB2rM9PXIv z4+h8U@cBP4_O<`KBPeL(%dGUzU)8AMLX9f9+*V9jCX+u|MMwXSKj~?h?3JyR(bn!_ z{pSEiE11{ktJv(wX>Iy@^pRXUYqOU)3~Nf=hKuAtdL6R;{DR^UPAcS76LWrFhw$MSA4nB%(m^n|DvsJddAX(oHt&219Iflf4?G{L&EYccy%KGyu+Gb z%oyETZ)hK#&!#uyt*NbTa_Pkj^Sw)|QmU!~D@Rw8i!|LB$%CT~tGK^gdTEP)Q%siP zr}k{>zxO67_|m0Ij<3t4Bu?rxFftYzD96?QnE?Mj9xZv#pPLjNtl|!%WjT5Es)&mB z=V*;ETARig4&TWk28n!T9z~b%Yu6+mEY2R$6L#2h?(1-xmF?`*$CR?i{r%PzckjOI ztMbpBDpB}*F_oTQUTtJvtI8ETKSB4Y=Vm;s(6*Z9X1YzAHod6wSBzxgi9F3IBR)Ua zTVhAobcV<5S)6^w_b14+jb)qTczo8s0m?b)-Z;@&;GhY12NDm}06 zG&{q>v#I9sV-_(nv2>N=;y7;qmoH}`m^cO1)%pB?eAcKAT4>MKqGMj8!mjAjyTS#3 z+eb#Ex6;$R?C9#d$44W7e0e>1?dF3z^eDFTG)TkJil0Z2re*3mJ|Up!dl7xD4gRUt)1 zv-fIk-}-B*A6-yhd2zGPHjGL8Js;O1^O7Yp2k$3{`MhgjAUER6>vG-kzN(TF-`*>Z z)w@YU0?RxU1jI=U#2@$&q9`cI09D20%`u;mBFqbjE_}8tYIlW5C_HV6o&C^qB|9)=o&o4r88X5-<@aXF|wm3#DThssX zBLjs?;Se)<4{Nr+zc#GBvvUb$>Qfifk?x|}`l#g{SsK&>Wlc>@@`HyY&5J34vr`&+ zdJi|V^f?_l!enY{>geoT_vFbkN?%H&h#`}R>$_Ln+Rad2=H?Ym8zCS$d2 zwEOx*k5#aSFq_TGTMn|avJ^_`k)CB?bjwpKt8X8E`mQr?$JcM)w4PqtLPe35mM*?^ zs}7$(Lly$oWOQ6y)3*;#<-9+empXSbQrZhmCCm>T7_1GW8=vpB;pF&JN|ShGtR95p zerZO=`2P63dabH&i1(LQ371bb8N$ zXBX6i59VJ@iqAI8-Ii~8ZnURNFu0>c)q^zE@dk)1IXo4n7n-D$`VnReG@7l<(*cGZP~IVW3;ukH7~UwpFLB)daY>koi+HQ{h>oa zUkB6T3{mB#o8+Hnsz=4e?T#3G`sU4=v9U3FX6Eqn^70k0?myTiB68KJjfI89;ghtK z6iaGqDg?82Uv72cWaOtmzPx>yLh<+aCunu^NGY~A>^ZeCsiAMXCE 
[... remaining binary PNG data for img/swc_hpc_access_flowchart.png omitted ...]
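
The flowchart added in the patches above can also be read as a plain terminal session. The sketch below is a minimal, illustrative walk-through of the same access path, assuming only what the diagram's edge labels state: the bastion address `ssh.swc.ucl.ac.uk`, the gateway name `hpc-gw1`, and SLURM's `srun`/`sbatch` as the two ways of reaching a compute node. The username and batch script name are placeholders, not values taken from the guide.

```bash
# 1. From your own machine, log in to the bastion node
ssh <username>@ssh.swc.ucl.ac.uk

# 2. Optionally hop to the gateway node for light tasks
#    (editing scripts, copying files)
ssh hpc-gw1

# 3. Hand the real work to SLURM, which dispatches it to a compute node:
#    either interactively ...
srun --pty bash
#    ... or as a batch job (script name is a placeholder)
sbatch my_job_script.sh
```

Interactive `srun` sessions are convenient for quick tests, whereas `sbatch` suits longer runs such as model training, since the job keeps going after you log out.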