Action

To schedule research computing work using SLURM, follow the instructions below.

...

All jobs on the general purpose cluster request resources via SLURM. SLURM is open source software that allocates resources to users for their computations, provides a framework for starting, executing, and monitoring compute jobs, and arbitrates contention for resources by managing a queue of pending work. SLURM is widely used in the high performance computing (HPC) landscape, and it is likely you will encounter it outside of our systems. For more information, please see https://slurm.schedmd.com/
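For example, a minimal batch job script might look like the following. This is only a sketch: the job name, output file, and program name are placeholders, and the resource requests mirror the interactive examples further down this page.

Code Block
languagebash
#!/bin/bash
#SBATCH --partition=batch          # general purpose partition
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=400                  # memory in MB
#SBATCH --time=01:00:00            # one hour of wall time
#SBATCH --job-name=my_job          # placeholder job name
#SBATCH --output=my_job_%j.out     # %j expands to the SLURM job ID

# Replace with your actual program
srun ./my_program

Submit the script with sbatch my_job.sh and check on it with squeue -u $USER.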

General Purpose Computing

...

Info

The batch partition has some important restrictions: a job can request at most 3 nodes and will run for at most 14 days before being automatically terminated. If you need an exception to this rule, please contact askIT@albany.edu.
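For reference, a job header requesting the default maximum would look like this (a sketch showing only the two relevant #SBATCH lines):

Code Block
languagebash
#SBATCH --nodes=3            # default maximum node count
#SBATCH --time=14-00:00:00   # 14 days (days-hours:minutes:seconds), the default maximum wall time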

Request access to more nodes or a longer time limit

On a case-by-case basis, ITS will grant users temporary exceptions to the default job limits. Please contact askIT@albany.edu if you would like to request access to more nodes or a longer time limit.

...

To spawn a terminal session on a cluster node with X11 forwarding, run:

Code Block
languagebash
srun --partition=batch --nodes=1 --time=01:00:00 --cpus-per-task=4 --mem=400 --x11 --pty $SHELL -i

This will spawn a one-hour session with 4 CPUs and 400 MB of RAM. To spawn the same terminal session without X11 forwarding:

Code Block
languagebash
srun --partition=batch --nodes=1 --time=01:00:00 --cpus-per-task=4 --mem=400 --pty $SHELL -i

View the resources used by a completed job

...

Info

This job ran on rhea-09, and its maximum memory usage was ~52 GB. Note that I requested 60000 MB, so I could refine this job to request slightly less memory. It ran for 14:50:14 and used about 350 CPU hours.
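One way to retrieve numbers like these for a finished job is SLURM's accounting tool, sacct (a sketch, assuming accounting is enabled on the cluster; the job ID 12345 is a placeholder):

Code Block
languagebash
# Summarize a completed job; replace 12345 with your own job ID
sacct -j 12345 --format=JobID,JobName,NodeList,Elapsed,TotalCPU,ReqMem,MaxRSS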

Restrict a job to a certain CPU architecture

Use the --constraint flag in #SBATCH. To view the available architectures (features) on individual nodes, use scontrol show node.
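For example, to list the feature tags that a node advertises (rhea-09 is just the node from the example above; substitute any node name):

Code Block
languagebash
# Show the feature tags (e.g. mpi_ib) advertised by a node
scontrol show node rhea-09 | grep -i features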

...

Code Block
languagebash
srun --partition=batch --nodes=2 --constraint=mpi_ib --time=01:00:00 --cpus-per-task=4 --mem=400 --x11 --pty $SHELL -i 

OR

Code Block
languagebash
#SBATCH --constraint=mpi_ib

...