CCR's compute clusters comprise more than 11,000 CPUs in various configurations. Please see our website for detailed information about the compute node hardware in the academic and industry clusters. Users can let the scheduler choose where their jobs run, or they can request specific resources such as memory, CPU type, or GPUs. The following table lists common options for requesting specific resources in a SLURM script. For more information about writing SLURM scripts, please see the SLURM KB article.
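As a sketch of how these directives fit into a batch script: each one becomes an `#SBATCH` line at the top of the script. The job name, time limit, and commands below are placeholders, not site requirements; directive values are taken from the table in this article.

```shell
#!/bin/bash
# Request one 12-core node with a specific CPU type (values from the table below).
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=12
#SBATCH --constraint=CPU-E5645
#SBATCH --mem=48000
# Placeholder job name, time limit, and output file:
#SBATCH --job-name=example
#SBATCH --time=01:00:00
#SBATCH --output=slurm-%j.out

# The #SBATCH lines are comments to bash but are read by the sbatch
# command when the job is submitted. Your application commands follow:
echo "Job running on $(hostname)"
```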

Node Resources                        Sample SLURM Directives

Multi-node with InfiniBand
  Nodes with InfiniBand               --nodes=2 --ntasks-per-node=12 --constraint=IB
  InfiniBand nodes with E5645 CPUs    --nodes=2 --ntasks-per-node=12 --constraint=IB&CPU-E5645

32 Core Nodes
  512GB nodes                         --partition=largemem --qos=largemem --nodes=1 --ntasks-per-node=32 --mem=512000
  256GB nodes (Intel processors)      --partition=largemem --qos=largemem --nodes=1 --ntasks-per-node=32 --mem=256000 --constraint=CPU-E7-4830
  256GB nodes (AMD processors)        --partition=largemem --qos=largemem --nodes=1 --ntasks-per-node=32 --mem=256000 --constraint=CPU-6132HE
  Any 32 core node                    --partition=largemem --qos=largemem --nodes=1 --ntasks-per-node=32

16 Core Nodes
  Nodes with 16 cores                 --nodes=1 --ntasks-per-node=16 --constraint=CPU-E5-2660
  Nodes with at least 16 cores        --nodes=1 --ntasks-per-node=16

12 Core Nodes
  Nodes with 12 cores                 --nodes=1 --ntasks-per-node=12 --constraint=CPU-E5645
  Nodes with at least 12 cores        --nodes=1 --ntasks-per-node=12

8 Core Nodes
  Nodes with 8 cores (IBM)            --nodes=1 --ntasks-per-node=8 --constraint=CPU-L5520
  Nodes with 8 cores (Dell)           --nodes=1 --ntasks-per-node=8 --constraint=CPU-L5630
  Nodes with at least 8 cores         --nodes=1 --ntasks-per-node=8

Any nodes with at least 48GB          --nodes=1 --mem=48000

Nodes with GPUs                       --partition=gpu --qos=gpu --nodes=1 --ntasks-per-node=12 --gres=gpu:2
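The same directives can also be passed to sbatch on the command line, where they override any matching `#SBATCH` lines in the script. One caveat when doing so: quote constraints that contain `&`, since the shell would otherwise interpret it. The script names here are placeholders.

```shell
# Directives on the command line instead of in the script; quote the
# '&' in combined constraints so the shell passes it through to SLURM:
sbatch --nodes=2 --ntasks-per-node=12 --constraint="IB&CPU-E5645" myjob.sh

# GPU jobs combine a partition, a QOS, and a generic-resource request
# (here, 2 GPUs on one node):
sbatch --partition=gpu --qos=gpu --nodes=1 --ntasks-per-node=12 --gres=gpu:2 gpu_job.sh
```

Inside a script, no quoting is needed on `#SBATCH` lines, because sbatch reads them directly rather than through the shell.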