As of June 21, 2022: The UB-HPC cluster now consists of both academic and industry clusters. Academic users have access to the debug, general-compute, and scavenger partitions. Industry business customers have access to the industry partition.
Previously, academic cluster users were not required to specify a partition and QOS; if these were omitted, the scheduler assumed the cluster defaults. Now that the clusters are merged, this is no longer possible: all users must specify --partition and --qos (the QOS is usually the same as the partition).
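For example, the header of a batch script for the academic cluster might look like the following minimal sketch; the time and resource values are placeholders, but note that --partition and --qos are both set and match:

```bash
#!/bin/bash
# Minimal sketch of a batch script header; time and resource values
# below are placeholders, not CCR defaults.
#SBATCH --partition=general-compute
#SBATCH --qos=general-compute      # QOS usually matches the partition name
#SBATCH --time=01:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1

srun hostname
```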
The UB-HPC compute cluster is divided into several partitions (also known as "queues") to which users can submit their jobs. Not all partitions are available to all users.
| Partition Name | Default Wall Time | Wall Time Limit | Default Number of CPUs | Job Submission Limit per User |
|---|---|---|---|---|
| debug* | 1 hour | 1 hour | 1 | 4 |
| general-compute* | 24 hours | 72 hours | 1 | 1000 |
| industry** | 24 hours | 72 hours | 1 | 1000 |
| scavenger* | 24 hours | 72 hours | 1 | 1000 |
| viz*** | 24 hours | 24 hours | 1 | 1 |
* The debug, general-compute and scavenger partitions are for academic users only
** The industry partition is for business customers only
*** The viz partition is only accessible to users through OnDemand interactive desktop sessions.
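The partitions and their limits can also be queried directly from the scheduler with standard Slurm commands; the sketch below is generic, and the exact output depends on the cluster configuration:

```bash
# Standard Slurm queries; output varies with the cluster configuration.
sinfo --summarize          # one line per partition with a node state summary
sinfo -o "%P %a %l"        # partition name, availability, wall time limit
```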
Previously, CCR separated hardware types into different partitions. That separation lowered the efficiency of the cluster and could force users to wait longer for their jobs to run. If you need specific types of hardware for your jobs, request them with Slurm features (labels covering hardware characteristics such as CPU, GPU, memory, and network), as sketched below.
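As a sketch, hardware is requested with Slurm's --constraint option rather than a hardware-specific partition. The feature labels used here are illustrative only; use the labels listed in the hardware specification tables:

```bash
#!/bin/bash
#SBATCH --partition=general-compute
#SBATCH --qos=general-compute
# Request nodes by feature label rather than by a hardware-specific partition.
# The labels below are examples only; substitute the feature names defined
# in the hardware specification tables.
#SBATCH --constraint=V100
##SBATCH --constraint="AVX512&IB"   # features can be combined with & (AND)

srun hostname
```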
For more details, see:
- Academic Cluster Hardware Specifications and Features
- Specifying hardware requirements using Slurm features
- Using the snodes command to see what's available