As of June 30, 2020:


The academic (UB-HPC) compute cluster is divided into several partitions (also known as "queues") to which users can submit their jobs.

Partition Name     | Default Wall Time | Wall Time Limit | Default Number of CPUs | Job Submission Limit per User
debug              | 1 hour            | 1 hour          | 1                      | 4
general-compute*   | 24 hours          | 72 hours        | 1                      | 1000
viz**              | 24 hours          | 24 hours        | 1                      | 1

NOTES:

* The general-compute partition is the default and does not need to be specified. 

** The viz partition is only accessible to users through OnDemand interactive desktop sessions.
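
For example, a minimal batch script targeting the debug partition might look like the sketch below. The job name and the command are placeholders, and the requested wall time must stay within the 1 hour limit shown in the table above.

    #!/bin/bash
    #SBATCH --partition=debug          # target the debug partition (1 hour wall time limit)
    #SBATCH --time=00:30:00            # request 30 minutes of wall time
    #SBATCH --ntasks=1                 # one CPU, the default shown in the table above
    #SBATCH --job-name=debug-test      # placeholder job name

    # Placeholder command; replace with your own workload
    srun hostname

Submit the script with sbatch. If the --partition line is omitted, the job runs on general-compute, the default noted above.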


Previously, CCR separated hardware types into different partitions. That separation lowered the efficiency of the cluster and could force users to wait longer for their jobs to run. If your jobs need specific types of hardware, request them with Slurm features (labels covering hardware attributes such as CPU, GPU, memory, and network), as sketched below.
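
As a sketch, features are requested with Slurm's --constraint option. The feature label used below is a hypothetical example; the actual labels available for each node type are listed in the hardware specifications page linked below.

    #!/bin/bash
    #SBATCH --time=01:00:00            # 1 hour of wall time on the default general-compute partition
    #SBATCH --ntasks=1                 # one CPU
    #SBATCH --constraint=AVX512        # hypothetical feature label; check snodes or the hardware table for real ones

    srun ./my_program                  # placeholder executable

Multiple features can be combined in one request, for example --constraint="FEATURE1&FEATURE2" to require nodes that have both (the feature names here are placeholders).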

Academic Cluster Hardware Specifications and Features

Specifying hardware requirements using Slurm features

Using the snodes command to see what's available