GRANT INFORMATION

Below is an outline of the Research Computing facilities. Customized letters of support for grant proposals and similar submissions can be provided upon request by emailing research@unc.edu.

Overview

The UNC ITS Research Computing (RC) team operates substantial on-premises data and computing capabilities delivering high-throughput computing, interactive and batch-scheduled data-intensive analytics and machine learning environments, traditional high-performance computing (HPC), and secure, compliant environments. The RC team includes experts in a variety of research domains who assist in using these on-premises capabilities in addition to facilitating access to national (NSF, DOE, NIH, etc.) and commercial cloud resources.

Consultation and Engagement

Consultation and infrastructure are provided free of charge to the Carolina research community. Research Computing includes an “Engagement Team” of experienced scientists who are also adept with various computational, information-processing, and data management techniques. The Engagement Team is loosely organized by disciplinary families:

  • Physical, Information, Mathematical, Computer Science
  • Life and Environmental Science
  • Health Outcomes and Clinical Research
  • Economics, Statistics, Social and Behavioral Science, Business
  • Humanities

Engagement Team members perform three general functions: (i) user/group onboarding, (ii) disciplinary/project outreach, and (iii) advanced consultations. The Engagement Team also conducts select short-course training. Contributions by Engagement Team members range from co-investigation and article co-authorship to assisting lab teams with cluster usage and collaborating on scientific workshops.

Institutional Research Cyberinfrastructure

In addition to fair-share moderated service to the community, the RC platform provides a framework in which principal investigators may cost-effectively invest in dedicated resources when a particular PI group requires them at scale. These dedicated resources are embedded within the RC platform, centrally managed, and leverage economies of scale. The free-tier infrastructure includes:

  • 15 PB of high-speed cluster-attached storage (Dell EMC PowerScale)
  • 10 PB of enterprise storage (NetApp)
  • 5 PB of cluster scratch SSD storage (VAST)
  • 32,000 batch-scheduled HTC compute cores (Longleaf cluster)
  • 15,000 batch-scheduled HPC compute cores (Sycamore cluster)
  • 130 Nvidia L40S GPUs; 30 Nvidia A100 GPUs; 64 Nvidia Volta GPUs; 32 Nvidia 1080 GPUs
  • Two 8-way SXM 80 GB A100 GPU nodes
  • 5 high-memory nodes with 3 TB of RAM each
  • 5 single-node parallel nodes, each with 192 cores and 1.5 TB of RAM, dedicated to one large job at a time
  • Open OnDemand web-based interface providing GUI- and GPU-enabled Linux applications for dev/test/REPL use
  • Commodity cloud storage for cold archival data
  • Secure enclave capabilities supporting externally audited NIST SP 800-53 Rev. 5 compliance

The Sycamore cluster, deployed in 2025, includes 15,000 AMD EPYC cores with 1.5 TB of RAM per node and an NDR-200 InfiniBand interconnect, with eight NDR connections into the VAST-based scratch storage system. The system is water-cooled.

Software

Both commercial and open-source software are available on RC clusters and are managed via the “modules” framework. At present, there are close to 500 modules available.
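As a brief illustration, software on the clusters is discovered and loaded from a login session with standard module commands, as in the minimal sketch below; the package name shown is hypothetical rather than a specific module guaranteed to be installed.

  module avail          # list the software modules published on the cluster
  module load r         # load a package into the current environment (hypothetical name)
  module list           # confirm which modules are currently loaded
  module unload r       # remove the module when it is no longer needed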

 

Last Update 12/26/2025 1:50:28 PM