The University of Florida supercomputer, HiPerGator, is a cluster that includes the latest generation of processors and offers nodes for memory-intensive computation. HiPerGator’s high-performance storage systems can be accessed through a variety of interfaces, including Globus, UFApps for Research, and other tools.

UFIT Research Computing maintains the cluster and its many parts, allowing researchers to focus on their research instead of on hardware and software maintenance. UFIT Research Computing supports a significant number of widely used applications. Our staff is happy to evaluate and explore additional applications for UF’s research needs.


HiPerGator can be used by UF faculty and by faculty at colleges and universities in Florida for teaching and research using these options and procedures:

  1. For teaching a class, allocations are free and last one semester. See classroom support for detailed instructions.
  2. For research, allocations can be purchased for periods ranging from three months to several years. The rates are listed on our price sheets.
  3. A three-month trial allocation may be requested at no cost to develop a course in advance of teaching it or to explore the use of HPC for research. File a trial allocation request. After the trial ends, please work with UFIT Research Computing staff to find the best way forward for continuing use of HiPerGator.
  4. Colleges and departments can request a free three-month trial allocation, shared among all faculty in the unit, to learn about HiPerGator’s capabilities and prepare to include HPC in their courses at no cost to individual faculty.

Note that HiPerGator’s operation and infrastructure have run successfully on this model since 2013, with significant investment from the provost, the VP for research, and the CIO.

HiPerGator 3.0

HiPerGator 3.0 construction was completed in mid-2021. A new Blue storage system replaced /ufrc as well, and users now run their jobs on the new cluster alongside the resources that HiPerGator 2.0 still provides.

HiPerGator Configuration

Phase            Year      Cores

HiPerGator 2.0   2015      30,000 Intel cores
HiPerGator 3.0   Jan 2021  30,720 AMD EPYC 7702 Rome 2.0 GHz cores
HiPerGator 3.0   May 2021  9,600 AMD EPYC 75F3 Milan 3.0 GHz cores


  • Total of 70,320 cores
  • HiPerGator 3.0 has:
    • 608 NVIDIA RTX 2080 Ti and RTX 6000 GPUs
    • 4 Petabytes (PB) of Blue fast storage
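The 70,320-core total is simply the sum of the three phases in the configuration table above; a minimal sanity-check sketch, using the phase figures as given:

```python
# Core counts per HiPerGator phase, taken from the configuration table above.
phases = {
    "HiPerGator 2.0 (2015, Intel)": 30_000,
    "HiPerGator 3.0 (Jan 2021, AMD EPYC 7702 Rome)": 30_720,
    "HiPerGator 3.0 (May 2021, AMD EPYC 75F3 Milan)": 9_600,
}

total_cores = sum(phases.values())
print(total_cores)  # 70320, matching the stated total
```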

HiPerGator AI NVIDIA DGX A100 SuperPod

Cluster Information

  • 140 NVIDIA DGX A100 nodes
  • 17,920 AMD Rome cores
  • 1,120 NVIDIA Ampere A100 GPUs
  • 2.5 PB All-Flash storage
  • Over 200 HDR InfiniBand and various Ethernet switches for connectivity
  • Double precision LinPack (HPL): 17.2 Petaflops
    • TOP500 June 2021: Ranked #22
    • Green500 June 2021: Ranked #2
  • AI Floating Point Operations: 0.7 Exaflops

Node Information

  • 2x AMD EPYC 7742 (Rome) 64-core processors with Simultaneous Multi-Threading (SMT) enabled, presenting 256 logical cores per node
  • 2TB RAM
  • 8x NVIDIA A100 80GB Tensor Core GPUs
  • NVSwitch technology that integrates all 8 A100 GPUs in a system with a unified memory space
  • 10x HDR InfiniBand non-blocking interfaces for inter-node communication
  • 2x 100 GbE interfaces
  • 28TB NVMe local storage
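The cluster-level figures in the section above follow directly from these per-node numbers (140 nodes, 2x 64-core CPUs with SMT, 8 GPUs per node); a minimal sketch of the arithmetic:

```python
# Per-node figures from the DGX A100 node description above.
nodes = 140                      # NVIDIA DGX A100 nodes in the SuperPod
cores_per_node = 2 * 64          # 2x AMD EPYC 7742, 64 physical cores each
threads_per_node = cores_per_node * 2  # SMT presents 2 threads per core
gpus_per_node = 8                # 8x NVIDIA A100 80GB per node

print(nodes * cores_per_node)    # 17920 AMD Rome cores cluster-wide
print(nodes * gpus_per_node)     # 1120 A100 GPUs cluster-wide
print(threads_per_node)          # 256 logical cores presented per node
```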

The latest NVIDIA GPU technology of the Ampere A100 GPU has arrived at UF in the form of an NVIDIA SuperPod. UF is the first university in the world to work with this technology. Visit the UF Artificial Intelligence Initiative website for more information.

The A100 technical specifications can be found at the NVIDIA A100 Website, in the DGX A100 User Guide, and at the NVIDIA Ampere developer blog.

For A100 benchmarking results, please see the HPCWire report.