Lab 01: Introduction to ISAAC-NG

Introduction

ISAAC-NG is a high-performance computing (HPC) cluster, and it carries a learning curve for new users. For the purposes of this workshop, we won't have time to cover the numerous features of ISAAC-NG, but OIT provides more documentation if you're interested.

Log in

Assuming you've successfully created a Linux account and have access to a project directory for this course, you may log in to ISAAC-NG using:

ssh <user>@login.isaac.utk.edu

You'll need to enter your UTK CAS password, then choose your preferred 2FA method to continue.

You are now in your home directory on a login node. Notice how the text to the left of the shell prompt displays your username followed by "@login"? While on the login node, you can use most of the familiar command line tools, but this is not the appropriate environment for anything computationally demanding. To run actual analysis tools, we'll need to request resources from a pool shared with many other users. The resource manager, Slurm, has many associated commands for requesting nodes, RAM, and CPUs. For this workshop, we'll request a small amount of resources that we can interact with directly using the salloc command.
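
If you ever want to confirm which machine you're on, the hostname command prints the current node's name; on a login node it should contain "login" (the exact name may vary):

hostname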

Setting up the project directory

Attendees of this workshop have a dedicated project directory on ISAAC-NG.

cd /lustre/isaac/proj/UTK0262/

After changing into UTK0262, make a directory named after your username.

mkdir <your username goes here>
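
If you'd rather not type your username by hand, the shell's $USER variable should expand to it on ISAAC-NG, so the following makes your directory and moves into it before cloning:

mkdir $USER
cd $USER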

Then clone the NF_workshop repository.

git clone https://github.com/ryandkuster/NF_workshop.git
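
Cloning creates an NF_workshop directory in your current location; you can move into it and take a look around:

cd NF_workshop
ls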

Request interactive computing resources using salloc

The salloc command, like many Slurm commands, requires users to specify a project account, partition, quality of service (qos), number of nodes, number of tasks, and time. Here's an example request for a single node with a maximum of 4 tasks running at a given time:

salloc --account isaac-utk0262 \
  --partition=short \
  --qos=short \
  --nodes=1 \
  --ntasks=4 \
  --time=0-03:00:00

After running the command, you should see a message similar to the following:

salloc: Pending job allocation 1140170
salloc: job 1140170 queued and waiting for resources
salloc: job 1140170 has been allocated resources
salloc: Granted job allocation 1140170
salloc: Waiting for resource configuration
salloc: Nodes ilm0837 are ready for job

The final line shows that node ilm0837 has been set aside for our use, and it can now be accessed with ssh. The node name will differ for each user depending on the qos and partition requested and the resources available when the command was run.

ssh <node>

In this instance, for example, you would ssh to ilm0837. The text to the left of your shell prompt should now display "<user>@<node>", which means you're no longer on the login node. To leave the node, simply type exit.

If you ever want to check which node was allocated, or see how much of your requested time has been used, run the following from any node:

squeue -u <user>
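
The output should look something like the following (values here are illustrative, not exact):

  JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
1140170     short interact   <user>  R      14:32      1 ilm0837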

Using module to load Nextflow

To see available modules:

module avail

This displays all currently installed packages; among them, nextflow/20.04.1-gcc(default) and nextflow/21.04.3-gcc are Nextflow versions we can load. Let's load the newer version:

module load nextflow/21.04.3-gcc
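
To verify what's loaded in your environment at any time, module list shows your active modules; most module systems also accept a search term to narrow the avail listing:

module list
module avail nextflow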

Then confirm it is available in your environment by running:

nextflow -version

Singularity is already available on ISAAC-NG by default. To see the version, run:

singularity --version

Cancelling requested jobs

Congratulations! You've accessed the ISAAC-NG HPC cluster via ssh, requested an interactive session with salloc and ssh, and searched for and loaded packages using module; you should be ready to perform most of the exercises in the labs. Remember, to exit your interactive session, simply type exit. If you are truly finished and have exited the node before your requested time is up, it is good practice to stop the resources from being needlessly held by running:

scancel <jobid>

The job id in this example would be 1140170, and you can always check it by running squeue -u <user>.
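
If you'd rather not copy the job id by hand, squeue can print just your job ids (-h suppresses the header, -o "%i" prints only the id), which can be passed straight to scancel. A small sketch, assuming you have only one active job, since this cancels every job listed for your user:

scancel $(squeue -u $USER -h -o "%i")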

Copying data from ISAAC-NG

Data from ISAAC-NG can be copied using secure copy, scp. Rather than using the login node (login.isaac.utk.edu), we'll use one of the dedicated data transfer nodes.

scp <user>@dtn1.isaac.utk.edu:/lustre/isaac/proj/UTK0262/<remaining path to data you want to copy> <local destination>
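
To copy an entire directory rather than a single file, add scp's -r (recursive) flag:

scp -r <user>@dtn1.isaac.utk.edu:/lustre/isaac/proj/UTK0262/<directory to copy> <local destination>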

⚠️ If you are using scp from zsh (the default shell on macOS), the portion of the scp command from your username through the end of the remote path must be wrapped in quotes, as shown below.
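
For example (a sketch with placeholder paths):

scp "<user>@dtn1.isaac.utk.edu:/lustre/isaac/proj/UTK0262/<remaining path to data you want to copy>" <local destination>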