[DOC] Update computing section #73

Merged
merged 17 commits into from
Jun 12, 2024
Changes from 16 commits
1 change: 1 addition & 0 deletions .gitignore
@@ -1,3 +1,4 @@
.DS_Store
.token
/.quarto/
config.toml
9 changes: 8 additions & 1 deletion _quarto.yml
@@ -38,7 +38,14 @@ website:
- labguide/environment/working_remotely.md
- labguide/environment/survey.md
- labguide/general_lab_policies.md
- labguide/computing/computing.md
- section: "Computing"
contents:
- labguide/computing/etiquette.md
- section: "Sherlock"
contents:
- labguide/computing/sherlock/access-and-resources.md
- labguide/computing/sherlock/job-submission.md
- labguide/computing/sherlock/data-management.md
- section: "Research practices"
contents:
- labguide/research/human_subjects.md
@@ -3,6 +3,7 @@ title: "Computing"
format:
html:
page-layout: full
number-sections: true
---

## Computer security
@@ -50,12 +51,12 @@ environment properly:
- If you encounter an error in the documentation, then flag it and/or fix it
- You can refer to [this website](https://www.sherlock.stanford.edu/docs/overview/introduction/)
for more information on Sherlock, and if you have any further
questions you can contact the Sherlock admis via email/Slack or meet
questions you can contact the Sherlock admins via email/Slack or meet
during specific Office Hours as specified [here](https://www.sherlock.stanford.edu/docs/overview/introduction/#office-hours).
- Each TACC system has its own
dedicated user guide that provides substantial information about how
to use the system.
- Users of shared systems should
clean up directories as projects are completed, removing intermediate
files that are not necessary to keep. The storage of redundant data
should be avoided, except where required for backups.
should be avoided, except where required for backups.
81 changes: 81 additions & 0 deletions labguide/computing/sherlock/access-and-resources.md
@@ -0,0 +1,81 @@
## Access and resources

This section describes getting initial access to Sherlock and monitoring available resources.

### Compute basics

#### Acquiring an account and logging in

If you are a new member of a lab at Stanford, you will need to have your PI email Sherlock's support to get your SUNet account configured for use with computing resources.
See the [Sherlock getting started guide](https://www.sherlock.stanford.edu/docs/getting-started/#prerequisites) for details.

Once you have an account set up with your SUNet ID `<username>`, you can access Sherlock via any SSH client.
If you are on a UNIX-like system (e.g., macOS) and connect to Sherlock from a terminal, it is useful to set up an SSH config file.
You can do this by editing or creating the file `~/.ssh/config` and adding the following lines:

```{.default filename="~/.ssh/config"}
Host sherlock
HostName login.sherlock.stanford.edu
User <username>
KexAlgorithms +diffie-hellman-group-exchange-sha1
```
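Before connecting, you can sanity-check that the alias resolves as intended using `ssh -G`, which prints the client configuration `ssh` would use without opening a connection. A minimal sketch (the username `jdoe` and the throwaway config path are placeholders):

```bash
# Write a throwaway copy of the config and ask ssh how it resolves the alias
mkdir -p /tmp/sshdemo
cat > /tmp/sshdemo/config <<'EOF'
Host sherlock
    HostName login.sherlock.stanford.edu
    User jdoe
EOF
# -G prints the resolved options; -F points at our test config
ssh -G -F /tmp/sshdemo/config sherlock | grep '^hostname'
```

With your real `~/.ssh/config` in place, plain `ssh -G sherlock` performs the same check.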

From the terminal, you can then log in to Sherlock using:

```bash
$ ssh sherlock
```

and then follow the remainder of the instructions in the [Sherlock connection guide](https://www.sherlock.stanford.edu/docs/getting-started/connecting/#credentials).

#### Storage monitoring

The Stanford filesystems have fixed allocations for individuals and groups.
As such, it is useful to be able to determine how much space you and the group have, so that you can manage your resources optimally.
For extended details on storage with Sherlock, see the [Sherlock storage guide](https://www.sherlock.stanford.edu/docs/storage/#quotas-and-limits).

There are several commands that we find extremely useful for working on Sherlock; we will go over the main ones below.

Because allocations are fixed, you will need to manage your storage actively, re-allocating data to group-level directories as necessary.

To check quotas for yourself and your group `<groupname>`, you can use the `sh_quota` command:

```default
$ sh_quota
+---------------------------------------------------------------------------+
| Disk usage for user <username> (group: <groupname>) |
+---------------------------------------------------------------------------+
| Filesystem | volume / limit | inodes / limit |
+---------------------------------------------------------------------------+
HOME | 9.4GB / 15.0GB [|||||| 62%] | - / - ( -%)
GROUP_HOME | 562.6GB / 1.0TB [||||| 56%] | - / - ( -%)
SCRATCH | 65.0GB / 100.0TB [ 0%] | 143.8K / 20.0M ( 0%)
GROUP_SCRATCH | 172.2GB / 100.0TB [ 0%] | 53.4K / 20.0M ( 0%)
OAK | 30.8TB / 240.0TB [| 12%] | 6.6M / 36.0M ( 18%)
+---------------------------------------------------------------------------+
```

`sh_quota` is a Sherlock-specific command that provides a general overview for all partitions a user has access to.
[Documentation is provided on their wiki](https://www.sherlock.stanford.edu/docs/storage/?h=sh_quota#checking-quotas).
When your home directory begins to fill up, consider moving files to scratch or group directories.
`HOME`, `GROUP_HOME`, and `OAK` are persistent storage; `*SCRATCH` directories are subject to purging.
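Because purging is age-based, it can help to preview which files are old enough to be at risk. A minimal local sketch using `find` (the 90-day window here is an assumption; check Sherlock's current purge policy for the actual retention time):

```bash
# Sandbox demo: flag files that have not been modified in over 90 days
agedemo=$(mktemp -d)
touch "$agedemo/fresh.txt"
touch -d '120 days ago' "$agedemo/stale.txt"   # backdate the modification time

# List purge candidates: regular files older than 90 days
find "$agedemo" -type f -mtime +90
```

Pointed at a scratch directory, the same `find` shows which of your files are purge candidates.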

Another useful tool is the disk usage command `du`.
A more interactive version of this command is `ncdu`.
To use `ncdu`, add the following line to the bottom of your `~/.bash_profile`, which will load the `ncdu` module each time you log in to Sherlock:

```{.bash filename="~/.bash_profile"}
ml system ncdu
```

In future login sessions (or after running `source ~/.bash_profile` in the current one), you can access the `ncdu` command via


```bash
$ ncdu <folder>
```

which will launch an interactive window for monitoring directory sizes from the folder specified by `<folder>`.
Sherlock [recommends running it in an interactive job](https://www.sherlock.stanford.edu/docs/storage/?h=ncdu#locating-large-directories).
This can be useful for identifying where your quota usage is concentrated.
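If loading modules is not an option, a plain `du` pipeline gives a quick ranking of the largest subdirectories. A self-contained sketch (the demo tree is fabricated; point `du` at a real directory in practice):

```bash
# Build a tiny tree with one large and one small subdirectory
top=$(mktemp -d)
mkdir "$top/big" "$top/small"
head -c 200000 /dev/zero > "$top/big/blob"
head -c 1000   /dev/zero > "$top/small/blob"

# Summarize each subdirectory and sort numerically: the largest ends up last
du -s "$top"/* | sort -n
largest=$(du -s "$top"/* | sort -n | tail -1 | cut -f2)
```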
46 changes: 46 additions & 0 deletions labguide/computing/sherlock/data-management.md
@@ -0,0 +1,46 @@
## Data management on Sherlock

This section describes how the lab manages datasets on Sherlock, including setting permissions (i.e., who else in the lab can access the dataset).

Datasets that are considered to be common lab assets (which includes any new studies within the lab and any openly shared datasets) should be placed into the primary data directory on the relevant filesystem.
Datasets that are in the process of acquisition should go into the `inprocess` directory.
Once a dataset is finalized, it should be moved into the `data` directory.

Once a dataset has been installed in the data directory, it should be made read-only for owner and group, using the following commands:

```bash
find <directory name> -type d -exec chmod 550 {} \;
find <directory name> -type f -exec chmod 440 {} \;
```
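The intended effect can be rehearsed on a throwaway tree before touching a real dataset. A sketch (the BIDS-style filenames are illustrative only):

```bash
# Create a disposable dataset layout
ds=$(mktemp -d)
mkdir -p "$ds/sub-01/anat"
touch "$ds/sub-01/anat/T1w.nii.gz"

# Lock it down: directories list/traverse-only, files read-only
find "$ds" -type d -exec chmod 550 {} \;
find "$ds" -type f -exec chmod 440 {} \;

# Inspect the resulting octal modes
stat -c '%a %n' "$ds/sub-01" "$ds/sub-01/anat/T1w.nii.gz"
```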

Datasets that are temporary, or files generated for analyses that are not intended to be reused or shared, should be placed within the user directory.

### Restricting access

Some data resources cannot be shared across the lab and instead need to be restricted to lab members with Data Usage Agreement (DUA) access.
The following can be adapted to restrict ACLs (access control list) to only the appropriate subset of lab members:

```{.bash filename="protect_access.sh"}
#!/bin/bash

echo "Using ACLs to restrict folder access on oak for russpold folders"
echo -e "\t https://www.sherlock.stanford.edu/docs/storage/data-sharing/#posix-acls "
sleep 1
echo
# get user input for directory + user
read -p "Enter the folder path: " dir_name
if [ ! -d "$dir_name" ]; then
echo "Error: ${dir_name} doesn't exist"
exit 1
fi

read -p "Enter the username: " user_name

# set restrictions
echo -e "Setting restrictions for ${user_name} as rwx for folder:\n ${dir_name}"
setfacl -R -m u:"$user_name":rwx "$dir_name"
setfacl -R -d -m u:"$user_name":rwx "$dir_name"

# remove the default permissions for the group -- oak_russpold
setfacl -R -d -m g:oak_russpold:--- "$dir_name"
```
166 changes: 166 additions & 0 deletions labguide/computing/sherlock/job-submission.md
@@ -0,0 +1,166 @@
## Job submission on Sherlock

::: {.callout-important}
Login nodes are shared among many users and therefore must not be used to run computationally intensive tasks.
Those should be submitted to the scheduler which will dispatch them on compute nodes.
:::

### Determining where to submit your jobs

You can check available resources with the `sh_part` command:

```default
$ sh_part
QUEUE STA FREE TOTAL FREE TOTAL RESORC OTHER MAXJOBTIME CORES NODE GRES
PARTITION TUS CORES CORES NODES NODES PENDNG PENDNG DAY-HR:MN /NODE MEM-GB (COUNT)
normal * 153 1792 0 84 23k 127 7-00:00 20-24 128-191 -
bigmem 29 88 0 2 0 8 1-00:00 32-56 512-3072 -
dev 31 40 0 2 0 0 0-02:00 20 128 -
gpu 47 172 0 8 116 1 7-00:00 20-24 191-256 gpu:4(S:0-1)(2),gpu:4(S:0)(6)
```
This lists the partitions and compute resources available to you, so that you can choose the best place to submit.


### Running interactive jobs with `sh_dev`

`sh_dev` sessions run on dedicated compute nodes, ensuring minimal wait times when you need a node for testing scripts, debugging code, or other interactive work.
`sh_dev` also provides X11 forwarding via the submission host (typically the login node you're connected to) and can thus be used to run GUI applications.

Users can [specify `sh_dev` calls](https://www.sherlock.stanford.edu/docs/user-guide/running-jobs/#compute-nodes) with specific memory requests.

#### Interactive jobs from existing scripts: `salloc`

If you prefer to submit an existing job script or other executable as an interactive job, you can use the `salloc` command:

```bash
salloc script.sh
```

`salloc` will start a Slurm job and allocate resources for it, but it will not automatically connect you to the allocated node(s).
It will only start a new shell on the same node you launched salloc from, and set up the appropriate `$SLURM_*` environment variables.

### Submitting to the scheduler

Most large, long-running jobs on Sherlock should be submitted via the job scheduler.

Most jobs can be submitted to the scheduler using the `sbatch` command.
Sherlock provides [documentation](https://www.sherlock.stanford.edu/docs/getting-started/submitting/#batch-scripts) for how to generate `sbatch` submission scripts.
There is also an [experimental slurm-o-matic](http://slurm-o-matic.stanford.edu/) tool to help in generating these scripts interactively.

#### Best practice in submitting jobs

Best practice before launching large, long-running jobs on Sherlock is to run a short test job to evaluate the time and memory requirements.
The basic idea is to run a small test job with minimal resource requirements---so the job will run quickly---then re-queue the job with optimized resource requests.

Jobs can be evaluated along three dimensions: memory, parallelization (i.e., number of nodes and CPUs), and run time.
We briefly highlight why each axis is important below, as well as how to evaluate its requirements.

- **Memory:** Evaluating the memory requirements of completed jobs is straightforward using tools such as `sacct` (see @sec-sacct).
Requesting excessive memory and not using it will count against your [FairShare score](https://www.sherlock.stanford.edu/docs/glossary/#fairshare).

- **Parallelization:** Evaluating how well the job performance scales with added CPUs can be done using `seff` (see @sec-seff).
Requesting CPUs and then not using them will still count against your [FairShare score](https://www.sherlock.stanford.edu/docs/glossary/#fairshare).

- **Run Time:** Requesting excess time for your jobs will _not_ count against your [FairShare score](https://www.sherlock.stanford.edu/docs/glossary/#fairshare), but it will affect how quickly the scheduler allocates resources to your job.

Below, we provide more detailed resources (and example commands) for monitoring jobs.

#### Monitoring running jobs

##### Using `squeue`

Users can monitor the status of their currently running jobs on Sherlock with `squeue`:

```bash
squeue -u $USER
```

This command shows how long each job has been waiting in the queue or actively running, depending on its status.

##### Using `clush`

More detailed information on running jobs can be found using [`clush`](https://www.sherlock.stanford.edu/docs/software/using/clustershell/), which can be loaded via the module system:

```bash
ml system py-clustershell
clush -N -w @job:<your Job ID> top -b -n 1
```

On Sherlock, it provides an easy way to run commands on nodes your jobs are running on, and collect back information.
In the above example, we use `clush` to execute the `top` command, which provides real-time monitoring of job resource usage.
Many other uses are possible; for example,

```bash
clush -w @j:<your Job ID> ps -u$USER -o%cpu,rss,cmd
```

will return the CPU and memory usage of all processes for a given job ID.
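The `ps` selectors in that one-liner are standard and can be previewed on any Linux machine, outside of `clush`:

```bash
# CPU percentage, resident set size (KB), and command for your own processes
ps -u "$(id -un)" -o %cpu,rss,cmd | head -n 5
```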

#### Example of fMRIPrep job submissions

One application commonly run within Poldracklab is [`fMRIPrep`](https://fmriprep.org/en/stable/).
Since lab members will often submit many fMRIPrep jobs at the same time, it is best practice to submit these as an [array job](https://rcpedia.stanford.edu/training/10_job_array.html).

We provide an example fMRIPrep array job script below for running 5 individual subjects.

```{.bash filename="submit_fmriprep_array.sh"}
#!/bin/bash
#
#SBATCH --job-name=fMRIPrep
#SBATCH --output=fmriprep.%j.out
#SBATCH --time=1-00:00
#SBATCH --cpus-per-task=16
#SBATCH --mem-per-cpu=8GB
#SBATCH --mail-user=<your-email>@stanford.edu
#SBATCH --mail-type=FAIL
#SBATCH --array=1-5
#SBATCH -p russpold,owners,normal


# Define directories

DATADIR=</your/project/directory>
SCRATCH=</your/scratch/directory>
SIMGDIR=</your/project/directory/simgs>

# Begin work section
subj_list=(`find $DATADIR -maxdepth 1 -type d -name 'sub-*' -printf '%f\n' | sort -n -ts -k2.1`)
sub="${subj_list[$SLURM_ARRAY_TASK_ID - 1]}"  # task IDs start at 1; bash arrays at 0
echo "SUBJECT_ID: " $sub

singularity run --cleanenv -B ${DATADIR}:/data:ro \
-B ${SCRATCH}:/out \
-B ${DATADIR}/license.txt:/license/license.txt:ro \
${SIMGDIR}/fmriprep-23-2-0.simg \
/data /out participant \
--participant-label ${sub} \
    --output-spaces MNI152NLin2009cAsym:res-2 \
-w /out/workdir \
--notrack \
--fs-license-file /license/license.txt
```
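The subject-selection step can be rehearsed locally without Slurm. Note that task IDs from `--array=1-N` are 1-based while bash arrays are 0-based, hence the `- 1` offset in this sketch (the fake `sub-*` directories stand in for a real BIDS dataset):

```bash
# Fake a minimal BIDS-style layout and pretend to be array task 2
demodir=$(mktemp -d)
mkdir "$demodir/sub-01" "$demodir/sub-02" "$demodir/sub-03"

subj_list=($(find "$demodir" -maxdepth 1 -type d -name 'sub-*' -printf '%f\n' | sort))
SLURM_ARRAY_TASK_ID=2
# Task 2 should map to the second subject in sorted order
sub="${subj_list[SLURM_ARRAY_TASK_ID - 1]}"
echo "$sub"
```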

### Evaluating completed job resources

Regardless of whether your job was run in an interactive session (i.e., using `sh_dev`) or a non-interactive one (i.e., using `sbatch`), it is useful to evaluate how many resources it consumed after running:

#### Using `seff` {#sec-seff}

Typically, the fastest and easiest way to get a summary report for a given job is the “Slurm efficiency” tool, `seff`.
This tool returns a simple, human-readable report that includes both allocated and actually used resources (nodes, CPUs, memory, wall time).

```bash
seff <your Job ID>
```

Generally speaking, `seff` reports can be used to determine how well (if at all) a job parallelizes, how much memory to request for future implementations of the job, and how much time to request.
More granular reporting, however, is possible using the `sacct` command.

#### Using `sacct` {#sec-sacct}

More rigorous resource analysis can be performed after a job has completed using Slurm accounting, or `sacct`.
Slurm provides thorough documentation, including the `--format=` option to select which columns to output and the various options that can constrain a query.

```bash
sacct --user=$USER --start=2023-09-01 --end=2023-09-03 --format=jobid,jobname,partition,account,nnodes,ncpus,reqmem,maxrss,elapsed,totalcpu,state,reason
```