LONGLEAF FREQUENTLY ASKED QUESTIONS (FAQS)

  1. How do I get an account on Longleaf or Dogwood?

  2. What are the details of the new filesystem locations?

  3. Why aren't /netscr and /lustre present on Longleaf or Dogwood?

  4. What is the queue structure on Longleaf and Dogwood?

  5. How do I transfer data between Research Computing clusters?

  6. How do I transfer data onto a Research Computing cluster?

  7. How do I access the OnDemand Web Portal for Longleaf?

1. How do I get an account on Longleaf or Dogwood?

Follow the steps outlined on Request a Cluster Account.

For more information on Longleaf, see: Longleaf Cluster.

For more information on Dogwood, see: Dogwood Cluster.

2. What are the details of the new filesystem locations?

See the Longleaf Directory Spaces page or the Dogwood Directory Spaces page.

3. Why aren't /netscr and /lustre present on Longleaf or Dogwood?

The /lustre filesystem is available only over the InfiniBand fabric, which Killdevil used. Longleaf and Dogwood nodes do not connect to that fabric, so /lustre is not present on them.

With respect to net-scratch (/netscr), it is not present on Longleaf and Dogwood for performance reasons. First, running jobs on the research cluster nodes against /netscr would add a workload that /netscr cannot sustain and would severely degrade performance for everyone. Second, the /pine filesystem is purpose-built for I/O and is designed and balanced for our research clusters: although it may take some effort to move your files to a filesystem mounted on the clusters, the resulting performance will be far better than working against /netscr. Third, the quotas on /pine are higher, so you have more space to work with.

4. What is the queue structure on Longleaf and Dogwood?

The queue systems are managed through SLURM partitions, which vary by research cluster; see the Longleaf Cluster and Dogwood Cluster pages for the current partition lists.

If you have jobs that require a queue (partition) you do not have access to, please contact us via research@unc.edu or via help ticket at https://help.unc.edu.
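As a quick illustration, the standard SLURM commands below list the partitions available to you and submit a job to a specific one. This is only a minimal sketch: the partition name "general" and the script name "myjob.sbatch" are placeholders and may not match what exists on your cluster or in your account.

    # List the partitions (queues) visible to you and their current state
    sinfo

    # Submit a batch script to a specific partition
    sbatch --partition=general myjob.sbatch

    # Alternatively, request the partition inside the script itself
    #SBATCH --partition=general

    # Check the status of your submitted jobs
    squeue -u $USER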

5. How do I transfer data between Research Computing clusters?

  • To copy a very large file or thousands of small files, use Globus.
  • To copy a medium-sized file, do not connect to longleaf.unc.edu (or dogwood.unc.edu); instead, connect to one of our data mover nodes and use the cp command (see the example after this list). There are four data mover nodes: rc-dm1.its.unc.edu, rc-dm2.its.unc.edu, rc-dm3.its.unc.edu, and rc-dm4.its.unc.edu. Connecting to the general host address rc-dm.its.unc.edu will connect you to the least busy of the four, which generally gives the best performance. You may connect using ssh, any data transfer command (common ones include sftp, scp, and rsync), or an ftp client application: Mac users can use Fetch, and Windows users can use a variety of clients such as WinSCP, CoreFTP, or the ftp client that comes with MobaXterm. SecureCRT also has built-in file transfer capabilities.
  • To copy small files, use the cp command from a login node on longleaf.unc.edu or dogwood.unc.edu.
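For example, a medium-sized transfer through the data mover nodes might look like the commands below. This is a minimal sketch: <onyen> and the /path/to/... paths are placeholders for your own account and directories.

    # Log in to the least-busy data mover node
    ssh <onyen>@rc-dm.its.unc.edu

    # Copy between filesystems mounted on the data mover
    cp -r /path/to/source /path/to/destination

    # Or, from another machine, transfer through the data mover with rsync
    rsync -av ./results/ <onyen>@rc-dm.its.unc.edu:/path/to/destination/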

6. How do I transfer data onto a Research Computing cluster?

For transfers from your desktop or home computer, or from another computer external to Research Computing, to one of the Research Computing clusters, the tools described in question 5 apply as well: use Globus for very large transfers, or scp, sftp, or rsync (for example, through the data mover nodes) for smaller ones.
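For instance, from a Mac or Linux terminal, or from Windows via an SSH/SFTP client such as MobaXterm or WinSCP, you can push files to a data mover node with scp or rsync. This is a minimal sketch: the file names and destination paths are placeholders.

    # Copy a single file from your computer to the cluster
    scp mydata.csv <onyen>@rc-dm.its.unc.edu:/path/to/destination/

    # Copy a whole directory
    rsync -av ./project/ <onyen>@rc-dm.its.unc.edu:/path/to/destination/project/

For very large transfers, use Globus through its web interface instead.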


Last Update 7/27/2024 2:41:11 AM