hotfix/quick updates #208

Merged 2 commits on Aug 13, 2024
Changes from all commits
14 changes: 8 additions & 6 deletions docs/source/assets/data/partition_table.csv
@@ -1,12 +1,14 @@
 Partition |br| Name,Max Job |br| Time,Max Jobs |br| per User,Max Cores |br| Running |br| per User,Max Running |br| Memory |br| Per User,Default |br| Cores |br| Per Job,Default |br| Memory |br| Per Core,Number |br| of Nodes
-nodes,48 hrs,No Limit,960,5T,1,5.2G,116
+nodes,48 hrs,No Limit,960,5T,1,5.2G,114
 week,7 days,No Limit,576,3T,1,5.2G,12
 month,30 days,No Limit,192,1T,1,5.2G,2
-test,30 mins,2,8,16G,1,5.2G,116
-preempt,30 days,No Limit,,,1,5.2G,116
-gpu,3 days,No Limit,,,1,5.2G,16
+test,30 mins,2,48,256G,1,5.2G,2
+preempt,30 days,No Limit,,,1,5.2G,114
+gpu,3 days,No Limit,,,1,5.2G,14
+gpu_short,30 mins,3,,,1,5.2G,1
-gpu_week,7 days,3,,,1,5.2G,14
+gpu_interactive,2 hrs,1,,,1,5.2G,1
 gpuplus,3 days,No Limit,,,1,5.2G,6
+gpu_week,7 days,3,,,1,5.2G,16
 himem,2 days,No Limit,,,1,20G,2
 himem_week,7 days,No Limit,,,1,20G,1
-interactive,8 hours,1,1,25G,1,10G,1
+interactive,8 hours,1,8,25G,1,10G,1
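
To put the updated limits in context, here is a minimal sketch (not part of this PR) of a batch script that targets the revised ``test`` partition; the job name, core count and memory values are illustrative and only need to stay within the per-partition limits in the table above:

.. code-block:: bash

   #!/usr/bin/env bash
   #SBATCH --job-name=limits-example   # illustrative name, not from the docs
   #SBATCH --partition=test            # partition row from the table above
   #SBATCH --time=00:25:00             # under the 30 min limit for "test"
   #SBATCH --cpus-per-task=4           # well under the 48-core per-user cap
   #SBATCH --mem=16G                   # well under the 256G per-user cap
   #SBATCH --output=%x-%j.log          # log named after job name and job ID

   # Replace with the real workload
   echo "Running on $(hostname)"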
28 changes: 23 additions & 5 deletions docs/source/using_viking/resource_partitions.rst
@@ -51,12 +51,14 @@ gpu
 - Your job script must request at least one GPU (eg ``#SBATCH --gres=gpu:1``)
 - You are limited to **no more than six GPUs** at a time across all of your jobs running in the ``gpu`` partition

-gpuplus
-Partition for running jobs that require more GPU power, see documentation for details about how to request GPUs :ref:`request GPUs <gpu-jobs>`.
+gpu_short
+Partition for running short jobs on a GPU

-- Each of the six nodes house two **nVidia H100 GPUs**
-- Your job script must request at least one GPU (eg ``#SBATCH --gres=gpu:1``)
-- You are limited to **no more than two GPUs** at a time across all of your jobs running in the ``gpuplus`` partition
+- One dedicated node with three **nVidia A40 GPUs**
+- Your job script must request **only one** GPU (eg ``#SBATCH --gres=gpu:1``) per job
+- Practical limit of three jobs at any one time, as the dedicated node only has three GPUs
+- Maximum memory per job is 170G
+- Maximum cores per job is 32

 gpu_week
 Partition for running GPU jobs on any of the **nVidia A40 nodes** for up to a week
@@ -65,6 +67,22 @@ gpu_week
 - Your job script should request **only** one GPU (eg ``#SBATCH --gres=gpu:1``)
 - The ``gpu_week`` partition is limited to running **a maximum of three GPUs** at any time, across all users

+gpu_interactive
+Partition for running interactive jobs with a GPU
+
+- One dedicated node with three **nVidia A40 GPUs**
+- Your job script must request **only one** GPU (eg ``#SBATCH --gres=gpu:1``)
+- Only **one job per user** on this partition
+- Maximum memory per job is 170G
+- Maximum cores per job is 32
+
+gpuplus
+Partition for running jobs that require more GPU power; see :ref:`request GPUs <gpu-jobs>` for details about how to request GPUs.
+
+- Each of the six nodes houses two **nVidia H100 GPUs**
+- Your job script must request at least one GPU (eg ``#SBATCH --gres=gpu:1``)
+- You are limited to **no more than two GPUs** at a time across all of your jobs running in the ``gpuplus`` partition
+
 himem
 For running jobs that require memory greater than that available in other partitions. Each of the two nodes (himem01 and himem02) has 96 cores. The max running cores and max running memory limits are practical limits, due to the resources available on the nodes.

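
As a usage sketch only (the exact invocation recommended on Viking may differ), the new ``gpu_interactive`` partition added above could be used to start an interactive GPU session with ``srun``, keeping within its one-job, one-GPU, 32-core, 170G and 2-hour limits:

.. code-block:: bash

   # Illustrative values; gpu_interactive allows one job per user with a
   # single GPU, up to 32 cores and 170G of memory, for at most 2 hours.
   srun --partition=gpu_interactive \
        --gres=gpu:1 \
        --cpus-per-task=4 \
        --mem=16G \
        --time=01:00:00 \
        --pty bash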