GETTING STARTED ON LONGLEAF
Longleaf is a high throughput computing cluster available to researchers, students, faculty and staff across the University. With over 300 nodes, dedicated GPU partitions and highly performant storage, it is optimized for memory- and I/O-intensive, loosely coupled workloads with an emphasis on aggregate job throughput over individual job performance. Longleaf is particularly suited for serial workloads, consisting of many jobs each requiring a single compute host.
Longleaf users can also access the cluster through Open OnDemand, a web-based portal offering job submission, file system browsing and many preconfigured applications.
- Operating System:
- Red Hat Enterprise Linux Server release 8
- Resource management:
- Job submissions are handled by the Slurm batch processing scheduler
General Partition Nodes
The several general partitions (general/interact, general_big/hov, bigmem, snp, etc.) consist of the following nodes:
| Node Count | CPU Count | Memory (GB) |
|------------|-----------|-------------|
Open OnDemand Portal
Open OnDemand is a web portal that provides a terminal, file browser, and graphical interface for certain apps on the Longleaf cluster. See here for further information.
GPU Hardware and Partitions
Please refer to our page detailing use of GPUs and best practices.
Follow the steps listed on the Request a Cluster Account page and select Longleaf Cluster under subscription type. You will receive an email notification once your account has been created.
Linux users can use ssh from within their Terminal application to connect to Longleaf.
If you wish to enable X11 forwarding, use the "-X" ssh option. Be sure to use your UNC ONYEN and password for the login:
ssh -X <onyen>@longleaf.unc.edu
Windows users should download MobaXterm (Home Edition). Then use the Session icon to create a Longleaf SSH session using longleaf.unc.edu for “Remote host” and your ONYEN for the “username” (Port should be left at 22).
Mac users can use ssh from within their Terminal application to connect to Longleaf. Be sure to use your UNC ONYEN and password for the login:
ssh -X <onyen>@longleaf.unc.edu
To enable X11 forwarding, Mac users will need to download, install, and run XQuartz on their local machine in addition to using the "-X" ssh option. Furthermore, in many instances, for X11 forwarding to work properly Mac users need to use the Terminal application that comes with XQuartz instead of the default Mac terminal application.
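A quick way to confirm that X11 forwarding is active after logging in is to check the DISPLAY variable; running a simple X client such as xclock (if it is installed on the login node) is another common test:

echo $DISPLAY    # should print something like localhost:10.0 when forwarding is active
xclock           # opens a small clock window on your local screen if forwarding works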
A successful login takes you to "login node" resources that have been set aside for user access. The login node is where you will edit your code, execute basic UNIX commands, and submit your jobs to the SLURM job scheduler.
DO NOT RUN YOUR CODE OR RESEARCH APPLICATIONS DIRECTLY ON THE LOGIN NODE. THESE MUST BE SUBMITTED TO SLURM!
In order to connect to Longleaf from an off-campus location, a connection to the campus network through a VPN client is required.
NAS home space
Your home directory will be in /nas/longleaf/home/<onyen> and is backed up via snapshots.
Your home directory has a quota which you will want to monitor occasionally: 50 GB soft limit and a 75 GB hard limit.
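This guide does not prescribe a specific quota-checking command, but a generic way to see how much space your home directory is using is with du (Research Computing may also provide a dedicated quota tool):

du -sh /nas/longleaf/home/<onyen>    # total size of your home directory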
Your /work directory will be /work/users/<o>/<n>/<onyen>, where <o> and <n> are the first and second letters of your ONYEN, with a quota of 30 TB.
/work is built for high-throughput and data-intensive computing, and is intended for data actively being computed on, accessed, and processed by systems such as Longleaf.
For inactive data, please move it to /proj; contact us if you have no place or insufficient quota to move inactive data off of /work.
File systems such as /work in an academic research setting typically employ a file deletion policy, auto-deleting files of a certain age. At this time, there are no time limits for files on /work. We rely upon the community to maintain standards of appropriate and reasonable use of the various storage tiers.
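As noted above, inactive data should be moved off of /work. A minimal sketch of copying a project directory from /work to /proj and then removing the original, assuming a hypothetical /proj/mylab destination:

rsync -av /work/users/<o>/<n>/<onyen>/old_project/ /proj/mylab/old_project/    # copy, preserving permissions and timestamps
rm -r /work/users/<o>/<n>/<onyen>/old_project                                  # remove the /work copy once the transfer is verified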
“/proj” space is available to PIs (only) upon request. The amount of /proj space initially given to a PI varies according to their needs. Unlike /pine scratch space there is no file deletion policy for /proj space, but users should take care in managing the use of their /proj space to stay under assigned quotas. Note that by default /proj space is not backed up.
For further information and to make a request for /proj space please email firstname.lastname@example.org.
Modules are essentially software installations for general use on the cluster. Therefore, you will primarily use module commands to add and remove applications from your Longleaf environment as needed for running jobs. It’s recommended that you keep your module environment as sparse as possible.
Applications used by many groups across campus have already been installed and made available on Longleaf. To see the full list of applications currently available, run the module avail command.
Users are able to create their own modules.
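A few common module commands are sketched below; the subcommand names are standard for environment-module systems, but run module help on Longleaf for the full list:

module avail                 # list all applications available on the cluster
module add <module_name>     # add an application to your environment
module list                  # show the modules currently loaded
module rm <module_name>      # remove an application from your environment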
Job submission is handled using SLURM. In general there are two ways to submit a job. You can either construct a job submission script or you can use a command-line approach.
Method 1: The Submission Script
Create a job submission script using your favorite UNIX editor: emacs, vi, nano, etc. If you don't have a favorite editor, you can use nano (for now).
The following script contains job submission options (the #SBATCH lines) followed by the actual application command.
In this example, you would enter the following into your script (note that each SBATCH switch below has two '-' characters, not one):
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=1:00
#SBATCH --mem=100
hostname
In this example, the command you are running is the hostname command, and the job submission options request 100 MB of memory, 1 core, and a one-minute time limit.
To learn more about the many different job submission options, feel free to read the man page for the sbatch command:
man sbatch
Save your file and exit nano. Submit your job to SLURM using the sbatch command, giving your script's filename as the argument.
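For example, if the script above were saved as myjob.sbatch (a hypothetical filename), submission and a look at the job's output might go like this; by default SLURM writes job output to slurm-<jobid>.out in the submission directory unless you override it with --output:

sbatch myjob.sbatch        # submit the script to the scheduler
cat slurm-<jobid>.out      # view the job's output after it completes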
The equivalent command-line method would be
sbatch --ntasks=1 --time=1:00 --mem=100 --wrap="hostname"
For application specific examples of each method see this help doc.
For your job submissions, you will need to specify the SBATCH options appropriate for your particular job. The more important SBATCH options are the time limit (--time), the memory limit (--mem), the number of CPUs (--ntasks), and the partition (-p). There are default SBATCH options in place; an example that overrides them follows the list below:
- The default partition is general.
- The default time limit is one hour.
- The default memory limit is 4 GB.
- The default number of CPUs is one.
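As an illustration of overriding these defaults, the script header below requests the general partition, two hours, 8 GB of memory, and four CPUs; my_program is a stand-in for whatever application you actually run:

#!/bin/bash
#SBATCH -p general
#SBATCH --time=02:00:00
#SBATCH --mem=8g
#SBATCH --ntasks=4
my_program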
Longleaf is set up into different SLURM partitions, which you can see by running the sinfo command.
If you’ve read the section on System Information then you are familiar with the different node types available on Longleaf. The different nodes are available through different SLURM partitions:
- the general compute nodes are suitable for most user jobs and are available in the general partition.
- the large memory compute nodes, for jobs that need a very large amount of memory, are accessible through the **bigmem** partition. Please email email@example.com to request access to this partition.
- the GeForce GTX1080 GPUs are accessible via the gpu partition; the Tesla V100-SXM2 GPUs are accessible via the volta-gpu partition. See the GPU page for more details on using this resource.
- The snp (Single Node Parallel) partition is designed to support parallel jobs that are not large enough to warrant multi-node processing on Dogwood, yet require a sufficient percentage of a single node's cores/memory to make scheduling a full node worthwhile. Please email firstname.lastname@example.org to request access to this partition.
- the interact partition is for jobs that require an interactive session for running a GUI, debugging code, etc.
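As a sketch of requesting an interactive session on the interact partition (the exact options recommended by Research Computing may differ), the standard SLURM srun command can be used:

srun -p interact --ntasks=1 --time=2:00:00 --mem=4g --pty /bin/bash    # opens an interactive shell on a compute node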
To see the count of CPUs in each node allocated to each partition: sinfo -a -N --format="%N %c %P"
To see the status of all current jobs on Longleaf:
squeue
To see the status of only your submitted jobs on Longleaf:
squeue -u <onyen>
where you'll need to replace <onyen> with your actual ONYEN.
To cancel a submitted job:
scancel <jobid>
where you’ll need to replace <jobid> with the job’s ID number which you can get from the squeue command.
To check the details of a completed job:
sacct -j <jobid> --format=User,JobID,MaxRSS,Start,End,Elapsed
where you'll need to replace <jobid> with the job's ID number. The items listed in --format are specified by the user. In this example, we'll get the user name, job ID, maximum memory used, start time, end time, and elapsed time associated with the job.
For more information on the sacct command, see its man page: man sacct
To check limits: To ensure fair usage, resource limits (e.g., time, space, number of CPUs, number of jobs) are set in SLURM. System administrators set SLURM resource limits for each partition and QOS. In addition, there are limits on users and on groups (PIs); these limits are set uniformly for all users and do not vary by user, and the same applies to groups.
To see the current limit settings by partition, use the scontrol SLURM command:
scontrol --oneliner show partition
or, for a specific partition:
scontrol --oneliner show partition general
To see the current limit settings by qos, use the sacctmgr SLURM command:
sacctmgr show qos format=name%15,mintres,grptres,maxtres%20,maxtrespernode,maxtrespu%20,maxjobs,mintres,MaxSubmitJobsPerUser
To see the current limit settings applied to groups, use the sacctmgr SLURM command:
sacctmgr show account withassoc where name=<pi_group name of the form: 'rc_' + pi_name + '_pi'>
To find the PI group name assigned to your account, look for the rc_*_pi name listed for your onyen in the output of: sacctmgr show association user=<onyen>
To see the current limit settings applied to users, use the sacctmgr SLURM command:
sacctmgr show association user=<onyen> format=GrpTRES%40
Note: System administrators frequently tweak all settings in SLURM to tailor the cluster to the current workload.
Last Update 11/29/2023 4:41:19 PM