HiPerGator 2.0

HiPerGator Hardware Specification Sheet

  • 46,000 cores
  • 184 terabytes of RAM
  • 3 petabytes of disk storage
  • Maximum speed of ~1,200 teraflops, i.e., about 1,200 trillion floating-point operations per second

To put these numbers in context, consider the following comparisons:

  • The 1,200-teraflop speed of HiPerGator is roughly the combined speed of 600 PlayStation 4 consoles, or of 840 Xbox One consoles, if they could be made to work together effectively and efficiently on large data and complex problems.
  • The human brain is estimated to have processing power equivalent to roughly 20 petaflops.
  • The 184 terabytes of RAM (not counting the 3 petabytes of long-term disk storage) can hold roughly 360 million books; the back-of-the-envelope arithmetic behind these comparisons is sketched below.
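
To see where these comparisons come from, here is a quick back-of-the-envelope check in Python. The per-console speeds (~1.84 teraflops for a PlayStation 4, ~1.31 for an Xbox One) and the ~0.5 MB size of a plain-text book are rough assumptions used only for illustration; they are not part of the spec sheet.

    # Back-of-the-envelope check of the comparisons above.
    # The per-console and per-book figures are rough assumptions.
    HIPERGATOR_TFLOPS = 1_200      # ~1,200 teraflops peak
    PS4_TFLOPS = 1.84              # assumed peak of one PlayStation 4
    XBOX_ONE_TFLOPS = 1.31         # assumed peak of one Xbox One
    RAM_TB = 184                   # total HiPerGator RAM in terabytes
    BOOK_MB = 0.5                  # assumed size of one plain-text book

    print(f"PlayStation 4 equivalents: {HIPERGATOR_TFLOPS / PS4_TFLOPS:,.0f}")
    print(f"Xbox One equivalents:      {HIPERGATOR_TFLOPS / XBOX_ONE_TFLOPS:,.0f}")

    books = RAM_TB * 1_000_000 / BOOK_MB   # TB -> MB, then divide by MB per book
    print(f"Books held in RAM:         {books / 1e6:,.0f} million")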

Nodes and processors

General compute nodes

Year  Node Model          Family      Chip Model      Nodes  Sockets  Cores/Node  RAM/Node (GB)
2018  Dell C6420          Skylake     Xeon Gold 6142     32        2          32            192
2015  Dell C6320          Haswell     E5-2698           900        2          32            128
2013  Dell C6145          Abu Dhabi   Opteron 6378      128        4          64            256
2013  Penguin Altus 2840  Interlagos  Opteron 6220      128        2          16             64
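
As a rough cross-check of the headline core count, the sketch below tallies the cores and RAM listed in this table. It accounts for roughly 40,000 of the 46,000 cores; the remainder presumably comes from nodes not listed here, which is an assumption rather than a figure from the spec sheet.

    # Tally cores and RAM from the general compute node table above.
    # Tuples: (year, model, nodes, cores per node, RAM per node in GB)
    general_compute = [
        (2018, "Dell C6420",          32, 32, 192),
        (2015, "Dell C6320",         900, 32, 128),
        (2013, "Dell C6145",         128, 64, 256),
        (2013, "Penguin Altus 2840", 128, 16,  64),
    ]

    total_cores = sum(nodes * cores for _, _, nodes, cores, _ in general_compute)
    total_ram_tb = sum(nodes * ram for _, _, nodes, _, ram in general_compute) / 1024

    print(f"General compute cores: {total_cores:,}")
    print(f"General compute RAM:   {total_ram_tb:,.0f} TB")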

Nvidia GPU nodes

Year  Node Model   Family   Chip Model    Nodes  GPUs/Node  RAM/GPU (GB)
2019  Exxact       GeForce  RTX 2080 Ti      72          8            12
2019  Exxact       Quadro   RTX 6000*         6          8            24
2018  AdvancedHPC  GeForce  GTX 1080 Ti       2          8            12
2015  Dell R730    Tesla    K80              32          4            12

* The Nvidia Quadro RTX 6000 GPU cards are connected in pairs using NVLink.
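
If you want to confirm from inside a job whether the GPUs assigned to you can reach each other directly (as NVLink-paired cards can), the sketch below uses PyTorch's peer-access query. PyTorch built with CUDA support is an assumption here, and the query reports peer-to-peer capability in general, not NVLink specifically.

    # Check whether pairs of visible GPUs support direct peer-to-peer access.
    # Requires PyTorch built with CUDA; does not distinguish NVLink from PCIe.
    import torch

    if torch.cuda.is_available():
        n = torch.cuda.device_count()
        for i in range(n):
            for j in range(i + 1, n):
                ok = torch.cuda.can_device_access_peer(i, j)
                print(f"GPU {i} <-> GPU {j}: peer access {'yes' if ok else 'no'}")
    else:
        print("No CUDA devices visible in this job.")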

Network and node interconnect

The nodes are connected by a Mellanox 56 Gbit/s FDR InfiniBand interconnect for fast access to storage and for distributed-memory parallel processing. The core switches use the 100 Gbit/s EDR InfiniBand standard. A 10 Gbit/s Ethernet network is used for management and for serving the operating system to the diskless nodes, and a separate 10 Gbit/s Ethernet network connects the nodes to support the planned new services of running virtual machines and virtual clusters on HiPerGator.
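
The InfiniBand fabric is what makes multi-node, distributed-memory jobs practical. As a minimal illustration, the mpi4py sketch below has every MPI rank report which node it landed on and then combines a value from all ranks on rank 0; the availability of mpi4py and an MPI installation is assumed.

    # Minimal distributed-memory sketch: each rank reports its host, and
    # rank 0 sums a value contributed by every rank across the interconnect.
    from mpi4py import MPI
    import socket

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    print(f"Rank {rank} of {size} running on {socket.gethostname()}")

    total = comm.reduce(rank, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"Sum of all ranks: {total}")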

Storage

The storage options for HiPerGator are described on the Storage Types policy page. The current total size of HiPerGator storage is 2 petabytes (PB). The default path for blue storage is /ufrc, and the default path for orange storage is /orange.
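
As a small illustration of working with these paths, the sketch below reports the capacity visible under each mount point. Whether both filesystems are mounted where your code runs is environment-dependent, so this is only a sketch.

    # Report space visible under the blue (/ufrc) and orange (/orange) mounts.
    import shutil

    def to_tb(nbytes):
        return nbytes / 1024**4

    for mount in ("/ufrc", "/orange"):
        try:
            usage = shutil.disk_usage(mount)
            print(f"{mount}: {to_tb(usage.used):,.1f} TB used of {to_tb(usage.total):,.1f} TB")
        except FileNotFoundError:
            print(f"{mount}: not mounted here")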

Electrical power and performance measures

Some researchers are interested in monitoring the electrical power consumption of their computations. The SLURM scheduler generates a power summary at the end of a job, and by using SLURM directives you can control many details of your job, such as whether all tasks run on a single node and whether the node has Intel or AMD processors. However, because HiPerGator is a production system, we do not allow users to change processor frequencies or to do other research that would require fully dedicated hardware and root access.
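
As an illustration of the kind of control SLURM directives give you, the sketch below builds a batch script that keeps all tasks on a single node and requests a particular processor family, then submits it with sbatch. The "intel" constraint tag and the application name are assumptions for illustration only; check the local documentation for the feature names and accounting options actually configured on HiPerGator.

    # Sketch: compose a SLURM batch script whose directives pin the job to one
    # node and request an assumed 'intel' processor feature tag, then submit it.
    import subprocess
    import tempfile

    job_lines = [
        "#!/bin/bash",
        "#SBATCH --job-name=power_demo",
        "#SBATCH --nodes=1",            # keep all tasks on a single node
        "#SBATCH --ntasks=8",
        "#SBATCH --constraint=intel",   # assumed feature tag for Intel nodes
        "#SBATCH --time=00:10:00",
        "srun ./my_application",        # placeholder for your own program
    ]

    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write("\n".join(job_lines) + "\n")
        path = f.name

    subprocess.run(["sbatch", path], check=True)   # submit to the scheduler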