Action

To schedule research computing work using SLURM, follow the instructions below.

...

All jobs on the general purpose cluster request resources via SLURM. SLURM is open-source software that allocates resources to users for their computations, provides a framework for starting, executing, and monitoring compute jobs, and arbitrates contention for resources by managing a queue of pending work. SLURM is widely used in the high performance computing (HPC) landscape, and you are likely to encounter it outside of our systems. For more information, please see https://slurm.schedmd.com/
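
To give a sense of the workflow, here is a minimal sketch of a SLURM batch script; the job name, resource values, and script name are placeholders to adapt to your own work.

    #!/bin/bash
    #SBATCH --job-name=example       # label shown in the queue
    #SBATCH --partition=batch        # the general purpose partition
    #SBATCH --ntasks=1               # run a single task
    #SBATCH --cpus-per-task=4        # CPUs (threads) for that task
    #SBATCH --mem=8G                 # total memory for the job
    #SBATCH --time=01:00:00          # wall-clock limit (HH:MM:SS)

    # Replace with the actual work you want to schedule.
    srun hostname

Submit it with sbatch example.sh, and check its state in the queue with squeue -u $USER.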

General Purpose Computing

...

Info
The batch partition comprises 1040 CPU cores (2080 threads) across 31 compute nodes. Note that a job can only request 3 nodes and may only be active for 14 days. If you need an exception to this, please contact askIT@albany.edu
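
You can confirm these limits yourself: scontrol prints a partition's configured maximums (the MaxNodes and MaxTime fields), assuming the partition is named batch as above.

    # Show the configured limits for the batch partition,
    # including MaxNodes and MaxTime.
    scontrol show partition batch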

...

sinfo is commonly used to view the status of a given cluster or node, or to see how many resources are available to schedule.
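
For instance, a node-oriented listing such as the sketch below (just one of many possible format strings) shows each node's CPU usage at a glance.

    # One line per node: hostname (%n), CPUs as allocated/idle/other/total (%C),
    # and the current CPU load (%O).
    sinfo -N -o "%n %C %O"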

...

Info
Note that %C reports CPUs as allocated/idle/other/total. In this example, uagc20-10 has all of its threads allocated (64 out of 64) and is showing a CPU load of 64.30 (or that roughly 64 threads are active), whereas many of the other nodes have lower utilization. We can use this information to make smart decisions about how many resources we request.

...

Info

batch has some important restrictions. A job can only request 3 nodes and will run for at most 14 days before being automatically terminated. If you need an exception to this rule, please contact askIT@albany.edu
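
As a sketch, a job that stays just within these limits would start with directives like the following (all other directives omitted):

    #!/bin/bash
    #SBATCH --partition=batch
    #SBATCH --nodes=3             # the maximum of 3 nodes on batch
    #SBATCH --time=14-00:00:00    # the maximum of 14 days (D-HH:MM:SS)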

Request access to more nodes, or a longer time limit

On a case-by-case basis, ARCC will grant users temporary exceptions to the default job limits. Please contact askIT@albany.edu if you would like to request access to more nodes or a longer time limit.

...

Info

This job ran on rhea-09, and its maximum memory usage was ~52 GB. Note that I requested 60000 MB, so I could refine this job to request slightly less memory. It ran for 14:50:14 and used about 350 CPU hours.
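
If job accounting is enabled, figures like these can be pulled for a completed job with sacct; the job ID below is a placeholder.

    # Nodes used, peak memory per step (MaxRSS), requested memory,
    # elapsed wall time, and total CPU time for job 12345 (placeholder ID).
    sacct -j 12345 --format=JobID,NodeList,MaxRSS,ReqMem,Elapsed,TotalCPU

As a sanity check, CPU hours are roughly elapsed wall time multiplied by CPU count: 350 CPU hours over ~14.84 hours of wall time works out to a job using roughly 24 CPUs.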

Restrict a job to a certain CPU architecture

Use the --constraint flag in an #SBATCH directive. To view the available architectures on individual nodes, use scontrol show node
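
For example (the feature name avx512 is purely illustrative; use whichever features scontrol reports for your nodes):

    # In the job script: only schedule onto nodes tagged with this feature.
    #SBATCH --constraint=avx512

    # From a login node: list each node's advertised features.
    scontrol show node | grep -E "NodeName|AvailableFeatures"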

...