In addition to CCR's large production cluster and visualization hardware, which are freely available to UB and affiliated researchers, CCR maintains a wide variety of project-specific storage systems and compute clusters.  The machine room space, cooling, and networking available at CCR, along with staff expertise in system administration and programming, allow UB researchers to devote their time to research rather than cluster and systems maintenance.

While some research groups may find the research cloud to be their most cost-effective solution, others require dedicated hardware nodes.  CCR maintains faculty (PI) clusters containing hardware purchased by faculty for use by their specific research groups.  CCR staff will consult with you to determine your computing needs and contact vendors for competing quotes.  Please keep the following in mind when purchasing equipment:

  1. We require certain networking and remote management capabilities, but these are standard (e.g. IPMI tools) and we handle these details with the vendors.
  2. We charge a small one-time maintenance (or co-location) fee for the CCR infrastructure needed to run your node (rack space, cables, network switches, etc.); electricity and cooling are included.  As of 2018 this fee is $325 per "U", the standard unit of rack space.  The fee scales with how many rack units a server occupies; for example, if the node you purchase takes up 2 U of rack space, the fee charged by CCR is $650 (see the short fee calculation after this list).  This is a one-time fee, and we will maintain your equipment for the length of the warranty.
  3. The minimum warranty we require is 3 years of next-business-day parts and service.  We strongly recommend purchasing a 5-year next-business-day parts and service warranty for all new equipment, as this has become the UB standard for computing equipment purchases.
  4. Users of the CCR academic cluster will be permitted to run on your node(s) when the nodes are idle.  We refer to this as "scavenging" nodes, and it puts otherwise idle processors to work.  Users who scavenge nodes run jobs that can be stopped and restarted; a sketch of this checkpoint/restart pattern appears after this list.  As soon as a user with access to your nodes requests them, any scavenger jobs running on them are terminated and your jobs begin.  More details can be found here.
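
To make the co-location fee arithmetic in item 2 concrete, here is a minimal Python sketch.  The rate and the 2 U example are the figures quoted above; the function name is ours and is not part of any CCR tooling.

```python
# Minimal sketch of the co-location fee arithmetic (2018 rate quoted above).
FEE_PER_U = 325  # one-time co-location fee in USD per rack unit ("U")

def colocation_fee(rack_units: int) -> int:
    """Return the one-time fee for a server occupying `rack_units` U of rack space."""
    if rack_units < 1:
        raise ValueError("a server occupies at least 1 U of rack space")
    return FEE_PER_U * rack_units

print(colocation_fee(1))  # 325 -- a standard 1 U node
print(colocation_fee(2))  # 650 -- the 2 U example from item 2
```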
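
Item 4 requires scavenger jobs to tolerate being terminated at any time.  A common way to meet that requirement is periodic checkpointing: the job regularly saves its state to disk so a restarted run can resume where it left off.  The Python sketch below illustrates that pattern under our own assumptions (the checkpoint file name, the state layout, and SIGTERM as the preemption signal); it is not a description of how CCR's scheduler preempts jobs.

```python
# Minimal checkpoint/restart sketch for a preemptable ("scavenger") job.
# The checkpoint file name and state layout are illustrative assumptions.
import json
import os
import signal
import sys

CHECKPOINT = "checkpoint.json"  # hypothetical path; use your job's working directory

def load_state() -> dict:
    """Resume from a previous checkpoint if one exists, else start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"step": 0, "total": 0}

def save_state(state: dict) -> None:
    """Write the checkpoint atomically so a kill mid-write cannot corrupt it."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)

def on_preempt(signum, frame):
    """Checkpoint and exit cleanly when the job is terminated."""
    save_state(state)
    sys.exit(0)

state = load_state()
signal.signal(signal.SIGTERM, on_preempt)

for step in range(state["step"], 1_000_000):
    state["total"] += step           # stand-in for real work
    state["step"] = step + 1
    if state["step"] % 10_000 == 0:  # also checkpoint periodically
        save_state(state)

save_state(state)
print("done:", state["total"])
```

If this job is killed partway through, rerunning the same script picks up from the last saved step instead of starting over, which is exactly the behavior scavenger jobs need.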

If you are a UB faculty researcher interested in purchasing your own HPC compute or storage resources, we encourage you to contact us by submitting a ticket to CCR help.