The UB-HPC (Academic) cluster consists of a heterogeneous set of nodes purchased over the years as grants were secured and funds became available.  The table below lists the types of nodes and approximate quantities.  Keep in mind that older hardware is removed from service once it can no longer be repaired, so the quantities listed here may not be exact.  For an exact accounting of what is currently available in the UB-HPC cluster, run the 'snodes' command on the command line.  More info on snodes

Type of Node | Nodes | Cores/Node | Clock Rate | RAM | High Speed Network* | Slurm Features | Local /scratch | CPU/GPU Details
Compute (Dell) | 96 | 40 | 2.10GHz | 192GB | Infiniband (M) | IB CPU-Gold-6230 INTEL NIH | 835GB | Intel Xeon Gold 6230 (2/node)
Compute (Dell) | 86 | 32 | 2.10GHz | 192GB | OmniPath (OPA) | OPA CPU-Gold-6130 INTEL MRI | 827GB | Intel Xeon Gold 6130 (2/node)
Compute (Dell) | 34 | 16 | 2.20GHz | 128GB | Infiniband (M) | IB CPU-E5-2660 INTEL | 773GB | Intel Xeon E5-2660 (2/node)
Compute (Dell) | 372 | 12 | 2.40GHz | 48GB | Infiniband | IB CPU-E5645 INTEL | 884GB | Intel Xeon E5645 (2/node)
Compute (Dell) | 128 | 8 | 2.13GHz | 24GB | None | CPU-L5630 INTEL | 268GB | Intel Xeon L5630 (2/node)
Compute (IBM) | 128 | 8 | 2.27GHz | 24GB | None | CPU-L5520 INTEL | 268GB | Intel Xeon L5520 (2/node)
High Memory Compute | 24 | 40 | 2.10GHz | 768GB | Infiniband | IB CPU-Gold-6230 INTEL NIH | 3.5TB | Intel Xeon Gold 6230 (2/node)
High Memory Compute | 16 | 32 | 2.10GHz | 768GB | OmniPath (OPA) | OPA CPU-Gold-6130 INTEL MRI | 3.5TB | Intel Xeon Gold 6130 (2/node)
High Memory Compute | 1 | 32 | 2.0GHz | 256GB | Infiniband (QL) | IB CPU-X7550 INTEL | 1.3TB | Intel Xeon X7550
High Memory Compute | 8 | 32 | 2.13GHz | 256GB | Infiniband (QL) | IB CPU-E7-4830 INTEL | 3.1TB | Intel Xeon E7-4830 (4/node)
High Memory Compute | 8 | 32 | 2.20GHz | 256GB | Infiniband (QL) | IB CPU-6132HE AMD | 3.1TB | AMD Opteron 6132HE (4/node)
High Memory Compute | 2 | 32 | 2.13GHz | 512GB | Infiniband (QL) | IB CPU-E7-4830 INTEL | 3.1TB | Intel Xeon E7-4830 (4/node)
GPU Compute | 8 | 40 | 2.1GHz | 192GB | Infiniband (M) | IB CPU-Gold-6230 V100 INTEL NIH | 845GB | Intel Xeon Gold 6230 (2/node), NVidia Tesla V100 16GB (2/node)
GPU Compute | 16 | 32 | 2.1GHz | 192GB | OmniPath (OPA) | OPA CPU-Gold-6130 INTEL V100 MRI | 827GB | Intel Xeon Gold 6130 (2/node), NVidia Tesla V100 32GB (2/node)
* HPC NETWORKS (High Speed Interconnects): Infiniband (M) = Mellanox, Infiniband (QL) = Q-Logic, Intel OmniPath (OPA).  All nodes are also on the Ethernet service network.

Detailed Hardware Specs by Node Type

How To Request Specific Hardware When Running Slurm Jobs
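The tags in the Slurm Features column above are what you pass to Slurm's --constraint option to land on a specific node type.  As a rough sketch (the partition name and executable here are placeholders; substitute the ones for your allocation), a job script targeting the Intel Xeon Gold 6230 Infiniband nodes might look like:

```shell
#!/bin/bash
#SBATCH --partition=general-compute     # placeholder; use your site's partition name
#SBATCH --constraint="IB&CPU-Gold-6230" # feature tags from the table; & means AND
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40            # these nodes have 40 cores each
#SBATCH --time=01:00:00

srun ./my_program                       # placeholder executable
```

To reach the V100 GPU nodes, you would combine a feature constraint such as --constraint=V100 with an actual GPU request, e.g. --gres=gpu:1.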

Using snodes command to see what's available
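snodes is a CCR-provided wrapper script, so its exact options may differ from this sketch; the sinfo flags shown are standard Slurm and report similar per-node information:

```shell
# List nodes with their CPU counts, memory, and feature tags
# (the feature tags match the Slurm Features column above).
snodes

# A roughly equivalent view using standard Slurm tooling:
# %N = node name, %c = CPUs, %m = memory (MB), %f = features, %G = generic resources (GPUs)
sinfo --Node --format="%N %c %m %f %G"
```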