As of June 21, 2022:  The UB-HPC cluster now serves both academic and industry users.  Academic users have access to the debug, general-compute, and scavenger partitions.  Industry business customers have access to the industry partition.

Previously, an academic cluster user was not required to specify a partition and QOS; if omitted, the scheduler assumed the defaults set on the cluster.  Now that the clusters are merged, this is no longer possible.  All users must specify both --partition and --qos (the QOS name is usually the same as the partition name).
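For example, an academic user submitting a batch job to the general-compute partition must now include both flags. A minimal job script might look like the following (the wall time, node counts, and script contents are illustrative; adjust them to your job):

```shell
#!/bin/bash
#SBATCH --partition=general-compute   # required: the partition to run in
#SBATCH --qos=general-compute         # required: QOS, usually matches the partition name
#SBATCH --time=01:00:00               # example wall time, within the partition limit
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1

# your commands go here
hostname
```

The same flags can also be passed on the command line, e.g. `sbatch --partition=general-compute --qos=general-compute myjob.sh`, or to srun for interactive jobs.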

The UB-HPC compute cluster is divided into several partitions (also known as "queues") to which users can submit their jobs.  Not all partitions are available to all users.

Partition Name    | Default Wall Time | Wall Time Limit | Default Number CPUs | Job Submission Limit/user
debug*            | 1 hour            | 1 hour          | 1                   | 4
general-compute*  | 24 hours          | 72 hours        | 1                   | 1000
industry**        | 24 hours          | 72 hours        | 1                   | 1000
scavenger*        | 24 hours          | 72 hours        | 1                   | 1000
viz***            | 24 hours          | 24 hours        | 1                   | 1

* The debug, general-compute, and scavenger partitions are for academic users only.
** The industry partition is for business customers only.
*** The viz partition is only accessible to users through OnDemand interactive desktop sessions.

Previously, CCR separated hardware types into different partitions.  That separation lowered the efficiency of the cluster and could cause users to wait longer for their jobs to run.  If you need specific types of hardware for your jobs, request them with Slurm features (node labels that describe hardware characteristics such as CPU model, GPU, memory, and network).
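Features are requested with Slurm's `--constraint` flag. The feature labels below are illustrative only; check the hardware specification tables or the snodes output for the labels actually defined on the cluster:

```shell
# Request nodes tagged with a specific feature label (label shown is illustrative)
sbatch --partition=general-compute --qos=general-compute \
       --constraint=CPU-Gold-6130 myjob.sh

# Multiple features can be combined: & requires all, | accepts any
sbatch --partition=general-compute --qos=general-compute \
       --constraint="AVX512&IB" myjob.sh
```

The same `--constraint` syntax works with srun and inside job scripts as an `#SBATCH --constraint=...` directive.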

Academic Cluster Hardware Specifications and Features

Specifying hardware requirements using Slurm features

Using the snodes command to see what's available
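A typical invocation looks like the following (snodes is a CCR-provided wrapper around Slurm's node-listing tools; the exact arguments here are an assumption, so check its help output for the current syntax):

```shell
# List the nodes in the general-compute partition of the UB-HPC cluster,
# including their CPU counts, memory, and feature labels
snodes all ub-hpc/general-compute
```

The feature labels shown in the output are the values you can pass to `--constraint` when submitting jobs.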