


In addition to our large production clusters and visualization hardware, which are freely available to UB and affiliated researchers, CCR maintains custom servers and compute nodes for faculty research groups. The machine room space, cooling, and networking available at CCR, along with our staff's expertise in system administration and programming, allow UB researchers to devote their time to research rather than cluster and systems maintenance.



CCR offers faculty groups the following options:


1.  Dedicated compute nodes:  

  • Great for users who want dedicated compute cluster resources built into the existing SLURM batch scheduler environment
  • NO wait time! (aside from the time it takes to schedule and launch a job)
  • Longer wall time than the academic cluster (up to 30 days)
  • Access to the same infrastructure as the academic cluster (home and project directories, high-speed scratch storage)
  • One-time fee charged to cover cost sharing of the infrastructure used to house and support your equipment
  • Give back to the UB research community by making idle compute cores available to other researchers

2. Lake Effect Infrastructure-as-a-Service (IAAS) research cloud:

  • Great for research groups that require special software, databases, or websites, or that want to control their own servers and software installation
  • Do-it-yourself cloud instances (virtual machines): the faculty group is responsible for operating system and software installation, system maintenance, and security
  • The low cost and flexibility of this option make it desirable to many research groups
  • Consumption-based pricing: pay only for what you use
  • Access to various hardware, memory, and storage combinations without the expense of purchasing dedicated machines
  • Compatible with Amazon Web Services (AWS), giving researchers the ability to scale out to AWS if desired

3. CCR Professional Services – CCR managed cloud instances or dedicated hardware:

  • For researchers who do not wish to run within the cluster environment and do not want to maintain their own cloud instances
  • Your group will NOT be allowed administrative access to install your own software
  • One-time fee charged to cover cost sharing of infrastructure used to house and support your equipment (not applicable to managed cloud instances)
  • Additional fees for installation, maintenance, and consulting are charged based on the scope of the project. 

NOTE:  We recommend that you check with your departmental IT support staff to see if they offer this service before engaging with CCR.  CCR offers support for non-standard research computing that other support teams are unable to provide.




Comparing the Options:


Option: Dedicated compute nodes
  • Cost: $325 per rack "u" (one-time fee)
  • PI responsibility: None
  • CCR responsibility: Physical hardware installation/maintenance/repairs; operating system and software installation; updates; security
  • Pros/Cons: Same setup as compute nodes in the UB academic cluster; access to CCR's network attached storage (home & project directories, high-speed scratch space); no long-term storage on local disks; no local software installations; maximum job runtime of 30 days; maintenance downtimes every 60 days; no local hardware backups; no administrative access

Option: Lake Effect IAAS Research Cloud
  • Cost: $400 per 8,760 CPU hours (approximately 1 year of 1 CPU); $100/TB of storage; $65/hour consulting fee (optional)
  • PI responsibility: Faculty group is entirely responsible for OS & software installation, setup, updates, and security
  • CCR responsibility: Cloud infrastructure only
  • Pros/Cons: No backups provided; no maximum runtime; 1-2 annual maintenance downtimes; full administrative access; NO access to CCR's network attached storage (home & project directories, high-speed scratch space)

Option: CCR managed cloud instances or dedicated hardware
  • Cost: Cost of hardware plus a $325 per rack "u" one-time fee (hardware only, not cloud instances); $1,300-$2,600 initial fee for requirements gathering, quote and order facilitation, and installation and configuration of software and other requirements outside the standard CCR environment (see fine print details below); $65/hour consulting fee for requirements beyond the initial fee
  • PI responsibility: None
  • CCR responsibility: Physical hardware installation/maintenance/repairs; operating system & software installation; updates; security
  • Pros/Cons: Quarterly maintenance downtimes; optional daily backups; no administrative access; optional access to CCR's network attached storage (home & project directories, high-speed scratch space)




The fine print and specifics for each option


Dedicated Compute Nodes:

Dedicated resources (nodes) for a faculty group or research lab built into the existing SLURM batch cluster environment

  • Purchasing:  CCR staff will consult with you to determine your computing needs, provide specs for equipment that align with our data center technical requirements, and contact vendors for competing quotes.  Based on 20 years of HPC experience, we have a list of vendors to recommend for not only their quality hardware but also their exemplary support.  NOTE: We reserve the right to refuse to house hardware from vendors that we have had difficulty working with in the past.

  • Infrastructure (co-location) fee: CCR charges a one-time fee for the infrastructure required to run your compute node (rack space, cables, Ethernet network switches, electricity, and cooling). As of 2018, this fee is $325 per rack unit ("u"), the standard slot in a rack. If a machine occupies more than 1 u, the charge is multiplied accordingly; for example, a machine that takes up 2 u of rack space incurs a $650 infrastructure charge. This one-time charge goes toward cost sharing of the infrastructure used to house and support your equipment.

  • Support: CCR staff provide support during regular business hours, Monday through Friday, 8am-5pm, excluding University at Buffalo holidays. No weekend support is provided. Emergency support is provided off-hours for critical infrastructure outages only (i.e. storage, networking, batch scheduler, cooling, electric, and cloud infrastructure [not instances]). Requests for support are to be submitted through the CCR help portal and are handled on a first-come, first-served basis. We strive to respond to all requests for help within 48 hours. Hardware and software installations and configurations may take longer, but they are usually completed within one week.

  • Warranty Requirements & Maintenance Policy:  
    • The minimum warranty we require is 3-year, next-business-day, on-site parts and service. We strongly recommend purchasing a 5-year, next-business-day, on-site parts and service warranty for all equipment, as this is the standard for computing equipment purchases at UB.
    • The maximum age of hardware CCR will support is 7 years.  When equipment reaches this age, it is decommissioned and recycled.
    • For hardware failures that are off warranty but less than 7 years old, CCR will provide basic troubleshooting and diagnostics to determine the cause of any errors.  The cost to troubleshoot and repair hardware outside of warranty is $65/hour with a minimum of 4 hours, pre-paid via interdepartmental invoice (IDI).  The faculty owner is responsible for the ordering and cost of replacement parts once off warranty.  There is no guarantee CCR staff will be able to repair failed aging hardware; however, we will make our best effort to do so.  

  • Installation and configuration:
    • Compute nodes are installed with the same image used by CCR for all compute nodes.  You will not have the ability to choose your operating system or additional software installed on the compute nodes.
    • Compute nodes have access to CCR’s internal network attached storage (project & home directories) and high speed scratch storage.
    • Compute nodes are not backed up.  No permanent data storage is permitted on the local hard drives of compute nodes.  Users must use the network attached storage systems for storing data.
    • No system administrative (elevated) privileges are given to the hardware owner or group members on compute nodes.  

  • Access to compute nodes:
    • You control who has exclusive access to your nodes through the Coldfront subscription management portal. Users of the CCR academic cluster are permitted to run on your nodes when they are idle; we refer to this as "scavenging," and it puts processors to work that would otherwise sit inactive. Scavenger jobs must be able to be stopped and restarted: as soon as a user in your group requests your nodes, any running scavenger jobs are terminated and your jobs begin with NO WAIT TIME (a hypothetical job script is sketched after this list). More details can be found here
    • Compute nodes are not accessible from the external network; they can be reached only through the SLURM batch scheduler from the cluster front-end server. We cannot open ports or provide access from outside the CCR network.
    • Maintenance downtimes are required every 60 days on dedicated compute nodes for software and operating system updates.  Emergency downtimes may occur as needed for critical security patches.
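
  To illustrate how dedicated nodes fit into the scheduler, here is a minimal sketch of a SLURM batch script targeting a group's dedicated nodes. The partition, account, and executable names are placeholders, and CCR's actual partition/QOS naming may differ; consult CCR documentation for the values assigned to your group.

```bash
#!/bin/bash
# Hypothetical SLURM job script for a group's dedicated nodes.
# "mygroup-dedicated", "mygroup", and ./my_simulation are placeholders.
#SBATCH --partition=mygroup-dedicated   # dedicated-node partition (placeholder)
#SBATCH --account=mygroup               # group allocation (placeholder)
#SBATCH --time=30-00:00:00              # dedicated nodes allow up to 30 days
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8

srun ./my_simulation                    # placeholder executable
```

  Under the policy above, submitting such a job (e.g., with sbatch) preempts any scavenger jobs running on your nodes.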

Lake Effect IAAS Research Cloud:

A cloud environment that lets researchers launch fully customizable instances (virtual machines) on demand and pay only for what they consume.

  • Installation, Configuration & Maintenance:
    • CCR provides support for the hardware and network infrastructure that backs the cloud
    • Users of the cloud (faculty project leaders & their groups) are responsible for all operating system and software installations, as well as updates and security on their cloud instances. Faculty must comply with UB server security requirements and should consult with UBIT for specifics and assistance.
    • Cloud users (faculty project leaders & those they permit in their groups) have full administrative access to their instances (virtual machines)
    • Operating systems currently supported in the research cloud are CentOS and Ubuntu.  Windows is not currently supported due to the complex Microsoft licensing requirements but may be in the future.  Contact us for information on uploading your own images to the cloud.
    • Cloud instances do not have access to CCR’s internal network attached storage (project & home directories) or high speed scratch storage.
    • Cloud instances are not backed up.  It is the users’ responsibility to backup all data in the research cloud.
    • Maintenance downtimes twice annually may be required on the cloud infrastructure for software and operating system updates.  Emergency downtimes may occur as needed for critical security patches.

  • Support: CCR staff provide support during regular business hours, Monday through Friday, 8am-5pm, excluding University at Buffalo holidays. No weekend support is provided. Emergency support is provided off-hours for critical infrastructure outages only (i.e. storage, networking, batch scheduler, cooling, electric, and cloud infrastructure [not instances]). Requests for support are to be submitted through the CCR help portal and are handled on a first-come, first-served basis. We strive to respond to all requests for help within 48 hours. Hardware and software installations and configurations may take longer, but they are usually completed within one week.

  • Access:
    • Accounts and access to your group's cloud project in the OpenStack GUI are granted to CCR account holders only. You control this through the Coldfront subscription management portal.
    • Once your instance is launched, you control who can login to it using standard Linux account management.
    • Cloud instances are reachable from outside the UB networks. It is imperative that cloud users lock down their instances using standard security measures (e.g., firewall policies and cloud security groups); a hypothetical example follows this list.
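
  As one illustration, a security group that restricts inbound traffic to SSH can be built with the standard OpenStack CLI. The group and server names below are placeholders; where possible, tighten the source range (--remote-ip) to your own networks rather than allowing the whole Internet.

```bash
# Hypothetical hardening sketch using the standard OpenStack CLI.
# "research-ssh-only" and "my-instance" are placeholder names.
openstack security group create research-ssh-only \
    --description "Allow inbound SSH only"
openstack security group rule create research-ssh-only \
    --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0
openstack server add security group my-instance research-ssh-only
```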

  • Pricing:
    • Cloud usage is billed on CPU and storage use. A single cloud subscription is $400 and covers 1 year of single-CPU usage (8,760 CPU hours) and 50GB of storage.
    • Additional storage can be purchased for $100/TB per year.
    • Details on how CPU usage is calculated, as well as pricing for consulting, can be found here. A worked cost estimate is sketched after this list.
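
  For a rough sense of the arithmetic, here is a minimal sketch using the rates above; the 4-vCPU, year-round instance and 1 TB of extra storage are hypothetical figures chosen for illustration.

```bash
# Hypothetical annual cost estimate for a Lake Effect instance,
# using the published rates ($400 per 8,760 CPU hours; $100/TB/year).
VCPUS=4                                 # example instance size
CPU_HOURS=$((VCPUS * 8760))             # 35,040 CPU hours if run all year
SUBS=$(( (CPU_HOURS + 8759) / 8760 ))   # round up to whole subscriptions
EXTRA_TB=1                              # storage beyond the included 50 GB
echo "Compute: \$$((SUBS * 400))/yr  Storage: \$$((EXTRA_TB * 100))/yr"
```

  Under these assumptions, that works out to four subscriptions ($1,600) for compute plus $100 per year for the additional storage.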


CCR Professional Services: CCR managed cloud instances or dedicated hardware:

Hardware purchased for dedicated use by a faculty group outside the cluster environment, or cloud instances managed by CCR for a group or project.

  • Setup fee for dedicated hardware or cloud instances: A minimum setup fee of $1,300 (equivalent to 20 hours of staff time) is charged for managed cloud instances. A minimum of $2,600 (equivalent to 40 hours) is charged for dedicated hardware to cover staff time spent on requirements gathering, quote and order facilitation, operating system installation, software configuration, and yearly maintenance. A custom quote for services will be provided after discussing your requirements. Any requirements beyond the included staff time are billed at $65/hour.

  • Purchasing dedicated hardware:  CCR staff will consult with you to determine your computing needs, provide specs for equipment that align with our data center technical requirements, and contact vendors for competing quotes.  Based on 20 years of HPC experience, we have a list of vendors to recommend for not only their quality hardware but also their exemplary support.  NOTE: We reserve the right to refuse to house hardware from vendors that we have had difficulty working with in the past.

  • Infrastructure (co-location) fee for dedicated hardware: CCR charges a one-time fee for the infrastructure required to run your hardware (rack space, cables, Ethernet network switches, electricity, and cooling). As of 2018, this fee is $325 per rack unit ("u"), the standard slot in a rack. If a machine occupies more than 1 u, the charge is multiplied accordingly; for example, a machine that takes up 2 u of rack space incurs a $650 infrastructure charge. This one-time charge goes toward cost sharing of the infrastructure used to house and support your equipment. This fee is in addition to the setup fee assessed (see above).

  • Warranty Requirements & Maintenance Policy for dedicated hardware:  
    • The minimum warranty we require is 3-year, next-business-day, on-site parts and service. We strongly recommend purchasing a 5-year, next-business-day, on-site parts and service warranty for all equipment, as this is the standard for computing equipment purchases at UB.
    • The maximum age of hardware CCR will support is 7 years.  When equipment reaches this age, it is decommissioned and recycled.
    • For hardware failures that are off warranty but less than 7 years old, CCR will provide basic troubleshooting and diagnostics to determine the cause of any errors.  The cost to troubleshoot and repair hardware outside of warranty is $65/hour with a minimum of 4 hours, pre-paid via interdepartmental invoice (IDI).  The faculty owner is responsible for the ordering and cost of replacement parts once off warranty.  There is no guarantee CCR staff will be able to repair failed aging hardware; however, we will make our best effort to do so.  

  • Installation and configuration:
    • Operating systems supported by CCR on dedicated servers are CentOS and Red Hat Linux (the owner must purchase the Red Hat license). Operating systems currently supported in the research cloud are CentOS and Ubuntu. Windows is not supported at this time due to the complex Microsoft licensing requirements but may be in the future. Contact us for information on uploading your own images to the cloud.
    • Dedicated servers and cloud instances managed by CCR have the ability to access CCR’s internal network attached storage (project & home directories) and high speed scratch storage, if desired.  See below for security ramifications of this option.
    • No system administrative (elevated) privileges are given to the hardware owner or group members on dedicated servers or cloud instances managed by CCR. 
    • Quarterly maintenance downtime is required on dedicated servers and cloud instances managed by CCR in order to apply software and operating system updates.  Emergency downtimes may occur as needed for critical security patches.
    • Optional daily off-site backups for servers can be arranged through UBIT.  

  • Access to dedicated hardware and managed cloud instances:
    • Accounts and access to your servers and managed cloud instances are granted to CCR account holders only. You control this through the Coldfront subscription management portal.
    • Dedicated servers on CCR’s network are only accessible to machines on the University at Buffalo’s networks (including the University’s VPN).  We do not allow access to CCR servers from outside the UB networks.  If you require this, you must use a cloud instance.
    • Managed cloud instances follow our standard operating procedure and are restricted to access via SSH and web ports only.  We can work with you to develop an appropriate security plan for your instances based on your project requirements if additional access is desired. NOTE: Any managed cloud instances with access to CCR’s internal network attached storage will NOT be allowed access outside the UB network and will be restricted to SSH and web ports only.

  • Support: CCR staff provide support during regular business hours, Monday through Friday, 8am-5pm, excluding University at Buffalo holidays. No weekend support is provided. Emergency support is provided off-hours for critical infrastructure outages only (i.e. storage, networking, batch scheduler, cooling, electric, and cloud infrastructure [not instances]). Requests for support are to be submitted through the CCR help portal and are handled on a first-come, first-served basis. We strive to respond to all requests for help within 48 hours. Hardware and software installations and configurations may take longer, but they are usually completed within one week.