HiPerGator

The University of Florida supercomputer is a cluster that includes the latest generation of processors and offers nodes for memory-intensive computation. HiPerGator's high-performance storage systems can be accessed through a variety of interfaces, including Globus, UFApps for Research, and other tools.

UFIT Research Computing maintains the cluster and its many components, allowing researchers to focus on their research rather than on hardware and software maintenance. UFIT Research Computing supports a significant number of widely used applications, and our staff is happy to evaluate and explore additional applications for UF's research needs.

HiPerGator 3.0

HiPerGator 3.0 has taken shape over nearly a year of growth, adding GPUs, storage, and new processors. The GPUs are in production, the new Blue storage has replaced /ufrc, and the final phase of HiPerGator 3.0, with AMD EPYC Milan cores, has gone into production.

HiPerGator Evolution

Phase            Year                  Cores                                             RAM/core
HiPerGator 1.0   2013 (retired 2021)   16,000 AMD Opteron 6378 (Abu Dhabi) @ 2.4 GHz     4GB
HiPerGator 2.0   2015                  30,000 Intel Xeon E5-2698 v3 (Haswell) @ 2.3 GHz  4GB
HiPerGator 3.0   Jan 2021              30,720 AMD EPYC 7702 (Rome) @ 2.0 GHz             8GB
HiPerGator 3.0   Q2 2021               9,600 AMD EPYC 75F3 (Milan) @ 2.95 GHz            8GB

  • As of 2021:
    • Total of 66,000 cores
    • 544 NVIDIA GeForce RTX 2080 Ti GPUs
    • 48 NVIDIA Quadro RTX 6000 GPUs

HiPerGator AI NVIDIA DGX A100 SuperPOD

Cluster Information

  • 140 NVIDIA DGX A100 nodes
  • 17,920 AMD Rome cores (35,840 with SMT enabled)
  • 1,120 NVIDIA Ampere A100 GPUs
  • 2.5 PB of shared all-flash storage accessible from all nodes
  • Over 200 non-blocking HDR InfiniBand switches, plus various Ethernet switches, for connectivity
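
These totals are consistent with the per-node configuration listed below: 140 nodes × 2 sockets × 64 cores = 17,920 physical cores (35,840 hardware threads with SMT), and 140 nodes × 8 GPUs = 1,120 A100s.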

Node Information

  • 2x AMD EPYC 7742 (Rome) 64-core processors with Simultaneous Multi-Threading (SMT) enabled, presenting 256 logical cores per node
  • 2TB RAM
  • 8x NVIDIA A100 80GB Tensor Core GPUs
  • NVSwitch technology that integrates all 8 A100 GPUs in a system with a unified memory space (see the sketch after this list)
  • 10x non-blocking HDR InfiniBand interfaces for inter-node communication
  • 2x 100GbE interfaces
  • 28TB NVMe local storage
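
Because NVSwitch links every GPU pair on a node, peer-to-peer access should be available between all 8 devices. The following is a minimal, hypothetical CUDA sketch (not from HiPerGator documentation; the file name node_check.cu is illustrative) that enumerates the visible GPUs and checks peer access from device 0 to each of the others. Compiled with nvcc and run on a full DGX A100 node, it should report 8 devices of roughly 80GB each, with peer access to every one.

    // node_check.cu -- hypothetical example, not part of HiPerGator docs.
    // Enumerates visible GPUs and checks NVSwitch peer-to-peer access.
    // Build: nvcc node_check.cu -o node_check
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess) {
            fprintf(stderr, "No CUDA devices visible\n");
            return 1;
        }
        printf("Visible GPUs: %d (expect 8 on a full DGX A100 node)\n", count);

        // Report each device's name and total memory.
        for (int d = 0; d < count; ++d) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, d);
            printf("GPU %d: %s, %.1f GB\n", d, prop.name,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }

        // With NVSwitch, any GPU should have direct peer access to any other.
        for (int d = 1; d < count; ++d) {
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, 0, d);
            printf("Peer access GPU 0 -> GPU %d: %s\n", d, canAccess ? "yes" : "no");
        }
        return 0;
    }

Note that inside a batch job the scheduler typically restricts which GPUs are visible, so the reported count may be lower than 8 unless the full node is allocated.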

UF is the first university in the world to work with this technology. Visit the UF Artificial Intelligence Initiative website for more information.

The A100 technical specifications can be found on the NVIDIA A100 website, in the DGX A100 User Guide, and on the NVIDIA Ampere developer blog.

For A100 benchmarking results, please see the HPCWire report.

The details of HiPerGator 3.0’s deployment can be found in the News section on our Homepage.