The UB-HPC cluster consists of a heterogeneous set of nodes purchased over the years as grants were secured and funds became available. As of June 2022, it also includes the nodes from the former industry cluster. The table below lists the node types and approximate quantities. Keep in mind that older hardware is removed from service once it can no longer be repaired, so the quantities listed here may not be exact.

NOTE: Not all nodes are available in all partitions or to all users. You can get an exact accounting of what is currently available in the UB-HPC cluster by running the 'snodes' command on the command line. More info on snodes is available in the 'Using snodes command' section linked below.
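As a sketch of what that looks like in practice (the exact arguments shown here are an assumption; run snodes with no arguments on a login node to see its usage message):

```shell
# List the nodes in the UB-HPC cluster along with their partition,
# core count, memory, and Slurm features.
# "all" and "ub-hpc/general-compute" are illustrative arguments:
# a node name (or "all") followed by a cluster/partition pair.
snodes all ub-hpc/general-compute
```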

Type of Node | # of Nodes | Cores/Node | Clock Rate | RAM | High Speed Network* | Slurm Features | Local /scratch | CPU/GPU Details
Compute (Dell) | 67 | 56 | 2.0GHz | 512GB | Infiniband (HDR) | IB CPU-Gold-6330 INTEL | 875GB | Intel Xeon Gold 6330 (2/node)
Compute (Dell) | 96 | 40 | 2.10GHz | 192GB | Infiniband (M) | IB CPU-Gold-6230 INTEL NIH | 835GB | Intel Xeon Gold 6230 (2/node)
Compute (Dell) | 86 | 32 | 2.10GHz | 192GB | OmniPath (OPA) | OPA CPU-Gold-6130 INTEL MRI | 827GB | Intel Xeon Gold 6130 (2/node)
Compute (Dell) | 34 | 16 | 2.20GHz | 128GB | Infiniband (M) | IB CPU-E5-2660 INTEL | 7TB | Intel Xeon E5-2660 (2/node)
Compute (Dell) | 372 | 12 | 2.40GHz | 48GB | Infiniband | IB CPU-E5645 INTEL | 884GB | Intel Xeon E5645 (2/node)
Compute (Dell) | 128 | 8 | 2.13GHz | 24GB | None | CPU-L5630 INTEL | 268GB | Intel Xeon L5630 (2/node)
High Memory Compute | 16 | 56 | 2.0GHz | 1TB | Infiniband (HDR) | IB CPU-Gold-6330 INTEL | 750GB | Intel Xeon Gold 6330 (2/node)
High Memory Compute | 24 | 40 | 2.10GHz | 768GB | Infiniband | IB CPU-Gold-6230 INTEL NIH | 3.5TB | Intel Xeon Gold 6230 (2/node)
High Memory Compute | 16 | 32 | 2.10GHz | 768GB | OmniPath (OPA) | OPA CPU-Gold-6130 INTEL MRI | 3.5TB | Intel Xeon Gold 6130 (2/node)
Compute Large Scratch | 1 | 32 | 2.0GHz | 256GB | Infiniband (QL) | IB CPU-X7550 INTEL | 1.3TB | Intel Xeon X7550
Compute Large Scratch | 8 | 32 | 2.13GHz | 256GB | Infiniband (QL) | IB CPU-E7-4830 INTEL | 3.1TB | Intel Xeon E7-4830 (4/node)
AMD Compute Large Scratch | 8 | 32 | 2.20GHz | 256GB | Infiniband (QL) | IB CPU-6132HE AMD | 3.1TB | AMD Opteron 6132HE (4/node)
High Memory Compute | 2 | 32 | 2.13GHz | 512GB | Infiniband (QL) | IB CPU-E7-4830 INTEL | 3.1TB | Intel Xeon E7-4830 (4/node)
GPU Compute | 16 | 56 | 2.0GHz | 512GB | Infiniband (HDR) | IB CPU-Gold-6330 INTEL | 875GB | Intel Xeon Gold 6330 (2/node)
GPU Compute | 8 | 40 | 2.1GHz | 192GB | Infiniband (M) | IB CPU-Gold-6230 V100 INTEL NIH | 845GB | Intel Xeon Gold 6230 (2/node), NVidia Tesla V100 16GB (2/node)
GPU Compute | 16 | 32 | 2.1GHz | 192GB | OmniPath (OPA) | OPA CPU-Gold-6130 INTEL V100 MRI | 827GB | Intel Xeon Gold 6130 (2/node), NVidia Tesla V100 32GB (2/node)
* HPC NETWORKS (High Speed Interconnects): Infiniband (M) = Mellanox, Infiniband (QL) = Q-Logic, Infiniband (HDR) = Mellanox HDR, OmniPath (OPA) = Intel OmniPath. All nodes are also on the Ethernet service network.
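The entries in the Slurm Features column are what you pass to Slurm's --constraint option to request a specific node type. A minimal batch-script sketch, assuming placeholder partition/QOS names and a hypothetical executable (the feature tags themselves come from the table above; multiple features are combined with &):

```shell
#!/bin/bash
#SBATCH --partition=general-compute      # placeholder partition name
#SBATCH --qos=general-compute            # placeholder QOS name
#SBATCH --constraint="IB&CPU-Gold-6330"  # Infiniband node with Xeon Gold 6330 CPUs
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=56             # all cores on a Gold 6330 node (per table)
#SBATCH --mem=64G
#SBATCH --time=01:00:00

srun ./my_program                        # placeholder executable
```

The & operator requires all listed features; Slurm also accepts | for "either feature," e.g. --constraint="CPU-Gold-6230|CPU-Gold-6130".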

Academic Partitions - Detailed Hardware Specs by Node Type

Industry Partition - Hardware Specs

How To Request Specific Hardware When Running Slurm Jobs

Using snodes command to see what's available