diff --git a/docs/source/assets/data/partition_table.csv b/docs/source/assets/data/partition_table.csv
index 79f80ae..1fb3308 100644
--- a/docs/source/assets/data/partition_table.csv
+++ b/docs/source/assets/data/partition_table.csv
@@ -1,12 +1,14 @@
 Partition |br| Name,Max Job |br| Time,Max Jobs |br| per User,Max Cores |br| Running |br| per User,Max Running |br| Memory |br| Per User,Default |br| Coes |br| Per Job,Default |br| Memory |br| Per Core,Number |br| of Nodes
-nodes,48 hrs,No Limit,960,5T,1,5.2G,116
+nodes,48 hrs,No Limit,960,5T,1,5.2G,114
 week,7 days,No Limit,576,3T,1,5.2G,12
 month,30 days,No Limit,192,1T,1,5.2G,2
-test,30 mins,2,8,16G,1,5.2G,116
-preempt,30 days,No Limit,,,1,5.2G,116
-gpu,3 days,No Limit,,,1,5.2G,16
+test,30 mins,2,48,256G,1,5.2G,2
+preempt,30 days,No Limit,,,1,5.2G,114
+gpu,3 days,No Limit,,,1,5.2G,14
+gpu_short,30 mins,3,,,1,5.2G,1
+gpu_week,7 days,3,,,1,5.2G,14
+gpu_interactive,2 hrs,1,,,1,5.2G,1
 gpuplus,3 days,No Limit,,,1,5.2G,6
-gpu_week,7 days,3,,,1,5.2G,16
 himem,2 days,No Limit,,,1,20G,2
 himem_week,7 days,No Limit,,,1,20G,1
-interactive,8 hours,1,1,25G,1,10G,1
+interactive,8 hours,1,8,25G,1,10G,1
diff --git a/docs/source/using_viking/resource_partitions.rst b/docs/source/using_viking/resource_partitions.rst
index b99bcf5..d1b6d2e 100644
--- a/docs/source/using_viking/resource_partitions.rst
+++ b/docs/source/using_viking/resource_partitions.rst
@@ -51,12 +51,14 @@ gpu
     - Your job script must request at least one GPU (eg ``#SBATCH --gres=gpu:1``)
     - You are limited to **no more than six GPUs** at a time across all of your jobs running in the ``gpu`` partition
 
-gpuplus
-    Partition for running jobs that require more GPU power, see documentation for details about how to request GPUs :ref:`request GPUs `.
+gpu_short
+    Partition for running short jobs on a GPU
 
-    - Each of the six nodes house two **nVidia H100 GPUs**
-    - Your job script must request at least one GPU (eg ``#SBATCH --gres=gpu:1``)
-    - You are limited to **no more than two GPUs** at a time across all of your jobs running in the ``gpuplus`` partition
+    - One dedicated node with three **nVidia A40 GPUs**
+    - Your job script must request **only one** GPU (eg ``#SBATCH --gres=gpu:1``) per job
+    - Practical limit of three jobs at any one time, as the dedicated node only has three GPUs
+    - Maximum memory per job is 170G
+    - Maximum cores per job is 32
 
 gpu_week
     Partition for running GPU jobs on any of the **nVidia A40 nodes** for up to a week
@@ -65,6 +67,22 @@ gpu_week
     - Your job script should request **only** one GPU (eg ``#SBATCH --gres=gpu:1``)
     - The ``gpu_week`` partition is limited to running **a maximum of three GPUs** at any time, across all users
 
+gpu_interactive
+    Partition for running interactive jobs with a GPU
+
+    - One dedicated node with three **nVidia A40 GPUs**
+    - Your job script must request **only one** GPU (eg ``#SBATCH --gres=gpu:1``)
+    - Only **one job per user** on this partition
+    - Maximum memory per job is 170G
+    - Maximum cores per job is 32
+
+gpuplus
+    Partition for running jobs that require more GPU power; see the documentation for details on how to :ref:`request GPUs `.
+
+    - Each of the six nodes houses two **nVidia H100 GPUs**
+    - Your job script must request at least one GPU (eg ``#SBATCH --gres=gpu:1``)
+    - You are limited to **no more than two GPUs** at a time across all of your jobs running in the ``gpuplus`` partition
+
 himem
     For running jobs that require memory greater than that available in other partitions.
     Each of the two nodes (himem01 and himem02) have 96 cores. The max running cores and max running memory limits are practical limits, due to the resources available on the nodes.
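
As a quick illustration of the new ``gpu_short`` limits documented above, a minimal job script might look like the sketch below. This is an assumption-laden example, not part of this change: the job name, account code and workload are placeholders, and only the partition name, GPU request, core, memory and time limits come from the documentation.

.. code-block:: bash

    #!/usr/bin/env bash
    #SBATCH --job-name=gpu-short-example   # placeholder job name
    #SBATCH --account=your-project-code    # placeholder; replace with your own project code
    #SBATCH --partition=gpu_short          # new partition, 30 min maximum job time
    #SBATCH --gres=gpu:1                   # gpu_short jobs must request exactly one GPU
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=8              # must stay within the 32 cores per job limit
    #SBATCH --mem=32G                      # must stay within the 170G per job limit
    #SBATCH --time=00:30:00                # gpu_short maximum job time

    # Placeholder workload: report the GPU allocated to the job
    nvidia-smi

The same shape of script applies to ``gpu_interactive`` (one job per user, same per-job core and memory caps) with the partition name and time limit adjusted.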