Instructions for deploying a GPU cluster with Slurm

Requirements:
- Control system to run the install process
- One server to act as the Slurm controller/login node
- One or more servers to act as the Slurm compute nodes

Installation steps:

- Install a supported operating system on all nodes.

  This can be done via a 3rd-party provisioning solution (e.g. MAAS, Foreman) or by using the provided OS install container.
- Set up your provisioning machine.

  This will install Ansible and other software on the provisioning machine which will be used to deploy all other software to the cluster. For more information on Ansible and why we use it, consult the Ansible Guide.

  ```bash
  # Install software prerequisites and copy default configuration
  ./scripts/setup.sh
  ```
- Create and edit the Ansible inventory.

  Ansible uses an inventory which outlines the servers in your cluster. The setup script from the previous step will copy an example inventory configuration to the `config` directory. Edit the inventory (a minimal sketch of the result is shown after this list):

  ```bash
  # Edit the inventory
  # Add the Slurm controller/login host to the `slurm-master` group
  # Add the Slurm worker/compute hosts to the `slurm-node` group
  vi config/inventory

  # (optional) Modify `config/group_vars/*.yml` to set configuration parameters
  ```

  Note: Multiple hosts can be added to the `slurm-master` group for high availability. You must also set `slurm_enable_ha: true` in `config/group_vars/slurm-cluster.yml`. For more information about HA Slurm deployments, see: https://slurm.schedmd.com/quickstart_admin.html#HA
- Verify the configuration.

  ```bash
  ansible all -m raw -a "hostname"
  ```
- Install Slurm.

  ```bash
  # NOTE: If SSH requires a password, add: `-k`
  # NOTE: If sudo on remote machine requires a password, add: `-K`
  # NOTE: If SSH user is different than current user, add: `-u ubuntu`
  ansible-playbook -l slurm-cluster playbooks/slurm-cluster.yml
  ```
- Verify Pyxis and Enroot can run GPU jobs across all nodes.

  ```bash
  # NOTE: This will use Pyxis to download a container and verify GPU functionality across all compute nodes
  ansible-playbook -l slurm-cluster playbooks/slurm-cluster/slurm-validation.yml -e '{num_gpus: 1}'
  ```
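
As referenced in the inventory step above, here is a minimal sketch of what `config/inventory` might look like. The hostnames and IP addresses are placeholders, and in practice you would edit the example inventory copied by `setup.sh` rather than overwrite it as shown here. The group names follow the `slurm-master`/`slurm-node` groups described above, plus a `slurm-cluster` parent group matching the `-l slurm-cluster` limit used by the playbooks.

```bash
# Sketch only: a minimal Ansible INI inventory for a one-controller, two-node cluster.
# Hostnames and IP addresses below are placeholders for your own machines.
cat > config/inventory <<'EOF'
[all]
login01  ansible_host=10.0.0.2
gpu01    ansible_host=10.0.0.3
gpu02    ansible_host=10.0.0.4

[slurm-master]
login01

[slurm-node]
gpu01
gpu02

# Parent group targeted by `ansible-playbook -l slurm-cluster ...`
[slurm-cluster:children]
slurm-master
slurm-node
EOF
```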
Now that Slurm is installed, try a "Hello World" example using MPI, such as the sketch below. Read through the Slurm usage guide and the Open OnDemand guide for more information.
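
The following is a minimal sketch of such a job, assuming an MPI compiler (e.g. `mpicc` from Open MPI) is available on the login node and that Slurm's PMIx plugin is enabled; the file name, node counts, and MPI plugin are placeholders to adjust for your cluster.

```bash
# Write a small MPI "Hello World" program (file name is a placeholder)
cat > mpi_hello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);
    printf("Hello from rank %d of %d on %s\n", rank, size, host);
    MPI_Finalize();
    return 0;
}
EOF

# Compile and launch 2 tasks on each of 2 nodes
mpicc -o mpi_hello mpi_hello.c
srun -N 2 --ntasks-per-node=2 --mpi=pmix ./mpi_hello
```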
The default Slurm deployment includes a collection of prolog and epilog scripts that should be modified to suit a particular system. For more information, see the prolog/epilog documentation.
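
As an illustration only (not one of the shipped scripts), a prolog is simply an executable that `slurmd` runs as root on each allocated node before a job starts; a site-specific prolog might, for example, create a per-job scratch directory. The scratch path below is a placeholder.

```bash
#!/bin/bash
# Hypothetical prolog sketch: create a per-job scratch directory before the job starts.
# SLURM_JOB_ID and SLURM_JOB_USER are provided by Slurm in the prolog environment.
SCRATCH="/raid/scratch/${SLURM_JOB_ID}"
mkdir -p "${SCRATCH}"
chown "${SLURM_JOB_USER}" "${SCRATCH}"
exit 0
```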
The default Slurm deployment also sets up Node Health Check (NHC). This tool runs periodically on idle nodes to validate that the hardware and software are set up as expected. Nodes which fail this check are automatically drained in Slurm to prevent jobs from running on potentially broken nodes.

However, the default configuration generated by DeepOps is very basic, only checking that CPUs, memory, and GPUs are present and that a few essential services are running. To customize this file, set the `nhc_config_template` variable to point to your custom file; the NHC docs go into detail about the configuration language. If you want to disable NHC completely, set `slurm_install_nhc: no` and un-define the `slurm_health_check_program` variable.
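
For example, a sketch of how these variables might be set in `config/group_vars/slurm-cluster.yml` (the template path below is a placeholder for your own NHC configuration template):

```bash
# Sketch: append an NHC setting to the group_vars file used by the playbook.
cat >> config/group_vars/slurm-cluster.yml <<'EOF'
nhc_config_template: "/path/to/custom-nhc.conf.j2"
EOF

# To disable NHC entirely, instead set `slurm_install_nhc: no` in the same file
# and remove (or comment out) the `slurm_health_check_program` variable.
```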
As part of the Slurm installation, Grafana and Prometheus are both deployed.
The services can be reached from the following addresses:
- Grafana: http://<slurm-master>:3000
- Prometheus: http://<slurm-master>:9090
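
As a quick sanity check (replace `<slurm-master>` with the controller's hostname or IP), both services expose standard health endpoints:

```bash
# Prometheus built-in health endpoint
curl http://<slurm-master>:9090/-/healthy

# Grafana JSON health endpoint
curl http://<slurm-master>:3000/api/health
```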
To enable syslog forwarding from the cluster nodes to the first Slurm controller node, you can set the following variables in your DeepOps configuration:

- `slurm_enable_rsyslog_server: true`
- `slurm_enable_rsyslog_client: true`

For more information about our syslog forwarding functionality, please see the centralized syslog guide.
For information about configuring a shared NFS filesystem on your Slurm cluster, see the documentation on Slurm and NFS.
You may optionally choose to install a tool for managing additional packages on your Slurm cluster. See the documentation on software modules for information on how to set this up.
Open OnDemand can be installed by setting the `install_open_ondemand` variable to `yes` before running the `slurm-cluster.yml` playbook.
Pyxis and Enroot are installed by default and can be disabled by setting `slurm_install_enroot` and `slurm_install_pyxis` to `no`. Singularity can be installed by setting the `slurm_cluster_install_singularity` variable to `yes` before running the `slurm-cluster.yml` playbook.
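
A sketch of how these toggles might be set before (re)running the playbook; the values shown are illustrative, and the file path follows the `config/group_vars` layout referenced in the inventory step.

```bash
# Sketch: enable or disable optional components, then re-run the playbook.
cat >> config/group_vars/slurm-cluster.yml <<'EOF'
install_open_ondemand: yes
slurm_install_enroot: yes
slurm_install_pyxis: yes
slurm_cluster_install_singularity: no
EOF

ansible-playbook -l slurm-cluster playbooks/slurm-cluster.yml
```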
To minimize the requirements for the cluster management services, DeepOps deploys a single Slurm head node for cluster management, shared filesystems, and user login. However, for larger deployments, it often makes sense to run these functions on multiple separate machines. For instructions on separating these functions, see the large deployment guide.