Patron Nodes and other patron infrastructure

How to purchase dedicated systems within Longleaf, leveraging the broader infrastructure.

What is a Patron Node?

Research Computing’s clusters are a shared institutional resource and the campus-wide, general account is not designed to provide high levels of dedicated service to any one group. A “patron node” model is available to campus research investigators who require more resources than can be provided for free and want ITS Research Computing to maintain them.

A “patron node” is a way to reserve a section of Research Computing’s resources for your (or your team’s) exclusive use. It is a node (or collection of nodes) in Research Computing’s Longleaf cluster that is purchased directly by a project, grant, or faculty/non-ITS staff member, with access controlled by one person (the Principal Investigator, or PI). This is dedicated computing capacity reserved for use by a small number of users. Unlike general cluster access, users on a patron node do not share the resource with anyone other than their teammates.

The purchase price is the rate specified in a vendor quote obtained by Research Computing. In this model, there are no expenses for the PI beyond the initial purchase – all maintenance, management, networking, network ports, integration with ITS Research Computing file systems, etc. are included. The patron node’s environment, while exclusive to the PI’s team, is otherwise identical to the general research computing cluster environment: patron node users start by establishing ssh connections to Longleaf, use SLURM for job control, use files located on Longleaf (e.g. home, proj, scratch, or mass storage), load modules available on Longleaf, work with ITS Research Computing staff for software needs, and comply with Research Computing security updates.
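As a sketch, a typical patron-node session follows the same steps as on the general cluster. The hostname, module name, and script name below are illustrative placeholders; use the login address and software names provided by Research Computing:

```shell
# Connect to the Longleaf login nodes with your usual credentials,
# exactly as general cluster users do (hostname is illustrative)
ssh onyen@longleaf.unc.edu

# Browse and load software from the shared module system
# (the module name here is a placeholder)
module avail
module load matlab

# Submit work through the SLURM scheduler; my_job.sl is a
# placeholder batch script
sbatch my_job.sl

# Check the status of your queued and running jobs
squeue -u $USER
```

The only difference from general use is which SLURM partition the job is submitted to; everything else (home, proj, scratch, modules, support) is the shared Longleaf environment.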

How Do I Ask for My Project’s Own Patron Node?

The process starts with an email to Research Computing from a team’s staff/faculty leader (e.g. the Principal Investigator, or PI) asking for a patron node. After collecting the patron node requirements, ITS and the PI will arrive at a mutual agreement on a specific configuration and price for the nodes, based on optimizing the variables (number of cores, amount/speed of RAM, etc.) for the project’s needs. Then, ITS will purchase the equipment using the PI’s chartfield string.

Storage for patron use is more complex than compute given the scale of storage systems; it is not viable to add on a few hundred TB of storage capacity, as these systems must be considered in terms of PB. Any data storage needs above the level provided to investigators for free will be negotiated individually. Storage must also be considered in terms of required performance. There are three cost/performance tiers of storage available, from fastest/most expensive to least: /work, /proj, cold archive.

Patron equipment will be available exclusively to the PI’s team and supported by Research Computing for at least 5 years. The equipment will be owned, operated, managed, secured, maintained, and supported by ITS.

How Do I Access My Patron Node?

Access to the patron node is provided via a new Longleaf partition in the SLURM scheduler. Only those users approved by the PI may submit jobs to the partition.
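A batch script for a patron node therefore differs from a general submission only in the partition it targets. The partition name `my_patron` below is a hypothetical placeholder (the actual name is assigned by Research Computing), and the resource values are illustrative:

```shell
#!/bin/bash
#SBATCH --job-name=patron_test
#SBATCH --partition=my_patron   # hypothetical patron partition name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8g
#SBATCH --time=01:00:00

# The job body runs as it would on any Longleaf partition
hostname
```

Submitting this with `sbatch` from a Longleaf login node queues it on the patron partition; a user not on the PI’s approved list would have the submission rejected by the scheduler.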

Are There Other Ways To Customize a Project Solution?

If this “patron node” structure does not meet a researcher’s project needs and the PI prefers to manage the project’s infrastructure separately, custom solutions using the ITS Manning Data Center facilities can be arranged. Please contact ITS Research Computing to start that conversation.


Last Update 5/20/2024 5:43:39 AM