User Information

Attention: The UF Spam Quarantine may block some SLURM report emails. If this happens, you should receive an email from the UF Spam Quarantine containing details about the blocked message. Click the “Safelist” hyperlink in that email to ensure all future SLURM report emails are delivered to your inbox. To safelist the address manually:

  • Log in to spam.mail.ufl.edu
  • Click “Lists,” then “Safe Senders List,” then “New”
  • Type slurm@rc.ufl.edu in the field that appears, then click “Save”

For more details, please view the page on Managing Spam for Proofpoint Users.

Login information

The login host is hpg2.rc.ufl.edu

Your username and password are both managed by GatorLink. Please use the same credentials as you would for other UF logins.
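
For example, to connect with an SSH client (replace jdoe with your own GatorLink username):

ssh jdoe@hpg2.rc.ufl.edu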

Checking group membership

Each user may be a member of one or more groups. By default, your primary group will be that of the sponsor listed at the time of account application. Your directory in /ufrc will be under that sponsor’s group directory. To see what groups you are a member of, you can use the id command:

[janesmith@gator3 ~]$ id janesmith
uid=9999(janesmith) gid=9999(smith) groups=9999(smith),8888(test)

This output indicates that the user janesmith has the primary group “smith” and is also a member of the group “test”.
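
If you only need the group names, the same id command can print them directly: id -gn shows only the primary group, while id -Gn shows all group names.

[janesmith@gator3 ~]$ id -gn janesmith
smith
[janesmith@gator3 ~]$ id -Gn janesmith
smith test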

Submitting jobs

Please check our Help & Documentation site (links under the “Batch System” heading) for more detailed documentation.

Users with only one group do not need to specify an account.
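
For reference, a minimal batch script might look like the following sketch; the job name, resource values, and program name (my_analysis.sh) are placeholders to adapt to your own workflow:

#!/bin/bash
#SBATCH --job-name=test_job        # A name for the job
#SBATCH --ntasks=1                 # Run a single task
#SBATCH --mem=2gb                  # Total memory for the job
#SBATCH --time=01:00:00            # Wall time limit (HH:MM:SS)
#SBATCH --output=test_job_%j.log   # Output log; %j expands to the job ID

cd /ufrc/<groupname>/<username>
./my_analysis.sh

Submit it with: sbatch test_job.sh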

Burst capacity (which allows a group to use additional resources when idle resources are available) is not automated on HiPerGator 2. Each group has two Quality of Service (QOS) levels: the investment QOS, named after the group, and the burst QOS, named with a -b appended to the group name.

The investment QOS is capped at the group’s investment allocation, while the burst QOS is capped at the burst limit. The burst QOS has a lower priority than the investment QOS, and its wall time is limited to 4 days (96 hours). You can choose which QOS to use; if you do not specify one, the job will run under the investment QOS by default.

To select the burst QOS, use the following directive (substituting your group name, followed by -b):

#SBATCH --qos=<groupname>-b

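One way to check the limits and priority of a QOS is the standard SLURM sacctmgr command, e.g. for the burst QOS:

sacctmgr show qos <groupname>-b format=name,priority,maxwall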

Users with secondary groups can submit jobs under a secondary group by specifying the group with the --account=<groupname> directive:

#SBATCH --account=<groupname>
#SBATCH --qos=<groupname>

This can also be used in conjunction with the --qos=<groupname>-b directive shown above.
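
These selections can also be made on the sbatch command line at submission time rather than inside the script (my_job.sh is a placeholder script name):

sbatch --account=<groupname> --qos=<groupname>-b my_job.sh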

Storage

The primary filesystem for data and job I/O on HiPerGator 2 is /ufrc. It currently has 1 PB of space and will be expanded to 3 PB as HiPerGator 1 is merged with HiPerGator 2. Until that happens, we will need to be conservative in our use of /ufrc disk space.

Remember that neither /scratch/lfs nor /ufrc is backed up unless you have made arrangements for backups.

The default /ufrc path for each user is:

/ufrc/<groupname>/<username>

NFS-based storage on HiPerGator systems is typically auto-mounted, meaning a directory is mounted dynamically only when a user actually accesses it. For example, if your group has an invested folder named /orange/smith, you must type the full path “/orange/smith” to see and access its contents. Browsing /orange directly will not show the smith sub-folder (unless someone else happens to be using it at the time). Auto-mounted folders are common on HiPerGator systems; they include /orange, /bio, /rlts, and even /home.
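
For example, with the hypothetical smith group from above:

[janesmith@gator3 ~]$ ls /orange          # smith is not listed until it is mounted
[janesmith@gator3 ~]$ cd /orange/smith    # the full path triggers the auto-mount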

Transferring data

SFTP: The host for data transfer activities such as SFTP, scp, and rsync is sftp.rc.ufl.edu; all data transfers should go through this host. For additional information see: https://wiki.rc.ufl.edu/doc/Transfer_Data
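
For example, to copy a single file or synchronize a directory from your local machine to your /ufrc directory (angle-bracket values are placeholders):

scp results.tar.gz <username>@sftp.rc.ufl.edu:/ufrc/<groupname>/<username>/
rsync -av my_project/ <username>@sftp.rc.ufl.edu:/ufrc/<groupname>/<username>/my_project/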

Globus: Globus is a high-performance data transfer tool. For additional information see: https://wiki.rc.ufl.edu/doc/Globus

Should you encounter problems with either of these methods, please open a support request and we will help you evaluate your data transfer needs and the appropriate method to move your data.

Development partition

Rather than logging into development servers, users now request an interactive session in the development partition (hpg2-dev). For example, to get a single core with 2 GB of RAM for 10 minutes (the defaults, so no resource options are needed):

srun -p hpg2-dev --pty -u bash -i

Alternatively, load the ‘ufrc’ module and execute ‘srundev’.

To get 8 cores with 6 GB of RAM per core for 1 hour, run:

srun -p hpg2-dev -n 8 --mem-per-cpu 6gb -t 60:00 --pty -u bash -i

or

srundev -n 8 --mem-per-cpu 6gb -t 60:00

Please open support requests for any problems you encounter or for any questions not addressed on our website or wiki.