HiPerGator 2.0

To put HiPerGator's specifications in context, consider the following comparisons:
- The 1,200-teraflop speed of HiPerGator is the same as the speed of 600 PlayStation 4 consoles or 840 Xbox One consoles, if they could be made to work together effectively and efficiently on large data and complex problems.
- The human brain is estimated to have processing power equivalent to 20 petaflops.
- The 184 terabytes of RAM (not the 3 petabytes of long-term disk storage) of HiPerGator can hold about 360 million books; a quick arithmetic check of these figures follows this list.
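
A back-of-the-envelope check of these comparisons. The per-console peak speeds are published figures rather than numbers from this page, so treat them as assumptions:

```python
# Back-of-the-envelope check of the comparisons above.
# Per-console peak speeds are public figures, not from this page:
PS4_TFLOPS = 1.84        # assumed PlayStation 4 peak speed
XBOX_ONE_TFLOPS = 1.31   # assumed Xbox One peak speed
HIPERGATOR_TFLOPS = 1200

print(HIPERGATOR_TFLOPS / PS4_TFLOPS)       # ~652, ballpark of the 600 above
print(HIPERGATOR_TFLOPS / XBOX_ONE_TFLOPS)  # ~916, ballpark of the 840 above

# RAM-to-books estimate: 184 TB spread across 360 million books
bytes_per_book = 184e12 / 360e6
print(f"{bytes_per_book / 1e6:.2f} MB per book")  # ~0.51 MB, a plausible plain-text book size
```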
Nodes and processors
General compute nodes
| Year | Node Model | Family | Chip Model | Nodes | Sockets | Cores/Node | RAM/Node (GB) |
|------|------------|--------|------------|-------|---------|------------|---------------|
| 2018 | Dell C6420 | Skylake | Xeon Gold 6142 | 32 | 2 | 32 | 192 |
| 2015 | Dell C6320 | Haswell | E5-2698 | 900 | 2 | 32 | 128 |
| 2013 | Dell C6145 | Abu Dhabi | Opteron 6378 | 128 | 4 | 64 | 256 |
| 2013 | Penguin Altus 2840 | Interlagos | Opteron 6220 | 128 | 2 | 16 | 64 |
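
For a sense of scale, the per-node figures in the table roll up into cluster-wide totals. A minimal sketch using only the numbers above (the total falls below the 184 TB headline figure because GPU-node host RAM is not listed here):

```python
# Roll-up of the general compute table above.
# Each entry: (nodes, cores_per_node, ram_gb_per_node)
general_compute = {
    "Dell C6420 (Skylake)":            (32,  32, 192),
    "Dell C6320 (Haswell)":            (900, 32, 128),
    "Dell C6145 (Abu Dhabi)":          (128, 64, 256),
    "Penguin Altus 2840 (Interlagos)": (128, 16, 64),
}

total_cores = sum(nodes * cores for nodes, cores, _ in general_compute.values())
total_ram_gb = sum(nodes * ram for nodes, _, ram in general_compute.values())
print(f"{total_cores:,} cores, {total_ram_gb / 1024:.1f} TB of RAM")  # 40,064 cores
```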
Nvidia GPU nodes
| Year | Node Model | Family | Chip Model | Nodes | GPUs/Node | RAM/GPU (GB) |
|------|------------|--------|------------|-------|-----------|--------------|
| 2019 | Exxact | GeForce | RTX 2080 Ti | 72 | 8 | 12 |
| 2019 | Exxact | Quadro | RTX 6000* | 6 | 8 | 24 |
| 2018 | AdvancedHPC | GeForce | GTX 1080 Ti | 2 | 8 | 12 |
| 2015 | Dell R730 | Tesla | K80 | 32 | 4 | 12 |
* The Nvidia Quadro RTX 6000 GPU cards are connected in pairs using NVLink.
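
The same roll-up for the GPU fleet gives the total accelerator count and aggregate GPU memory, using only the figures in the table:

```python
# Roll-up of the GPU table above.
# Each entry: (nodes, gpus_per_node, ram_gb_per_gpu)
gpu_fleet = {
    "GeForce RTX 2080 Ti": (72, 8, 12),
    "Quadro RTX 6000":     (6,  8, 24),
    "GeForce GTX 1080 Ti": (2,  8, 12),
    "Tesla K80":           (32, 4, 12),
}

total_gpus = sum(nodes * gpus for nodes, gpus, _ in gpu_fleet.values())
total_gpu_ram = sum(nodes * gpus * ram for nodes, gpus, ram in gpu_fleet.values())
print(f"{total_gpus} GPUs, {total_gpu_ram:,} GB of GPU memory")  # 768 GPUs
```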
Network and node interconnect
The nodes are connected by a Mellanox 56 Gbit/s FDR InfiniBand interconnect for fast data access to storage and for distributed-memory parallel processing. The core switches use the 100 Gbit/s EDR InfiniBand standard. A 10 Gbit/s Ethernet network handles management and serves the operating system to the diskless nodes, and a separate 10 Gbit/s Ethernet network connects the nodes to support the planned new services of running virtual machines and virtual clusters on HiPerGator.
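
To make these link speeds concrete, here is an idealized transfer-time estimate. It assumes peak bandwidth with no protocol overhead or contention, so real transfers will be slower:

```python
# Idealized time to move data at each link speed (peak bandwidth,
# no protocol overhead or contention; real transfers are slower).
def transfer_seconds(size_bytes: float, gbit_per_s: float) -> float:
    return size_bytes / (gbit_per_s * 1e9 / 8)

one_tb = 1e12  # bytes
for name, speed in [("FDR InfiniBand", 56), ("EDR InfiniBand", 100), ("10 GbE", 10)]:
    print(f"1 TB over {name} ({speed} Gbit/s): {transfer_seconds(one_tb, speed):.0f} s")
# ~143 s, ~80 s, and ~800 s respectively
```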
Storage
The storage options for HiPerGator are described on the Storage Types policy page. The current total size of HiPerGator storage is 2 petabytes (PB). The default path for Blue storage is /ufrc, and the default path for Orange storage is /orange.
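
A minimal way to check capacity on these mounts from Python. This sketch assumes it runs on a HiPerGator node where /ufrc and /orange are mounted:

```python
# Check capacity on the storage mounts named above. Assumes this runs
# on a HiPerGator node where /ufrc and /orange are mounted.
import shutil

for path in ("/ufrc", "/orange"):
    total, used, free = shutil.disk_usage(path)
    print(f"{path}: {free / 1e12:.1f} TB free of {total / 1e12:.1f} TB")
```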
Electrical power and performance measures
Some researchers are interested in monitoring the electrical power consumption of their computations. The SLURM scheduler generates a power summary at the end of a job, and SLURM directives let you control many details of your job, such as whether all tasks run on a single node and whether the node has Intel or AMD processors. However, because HiPerGator is a production system, we do not allow users to change processor frequencies or to conduct other research that would require fully dedicated hardware and root access.
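
One way to retrieve the scheduler's per-job energy figure is through SLURM's `sacct` accounting tool. This sketch assumes energy accounting is enabled on the cluster and that `sacct` is on the PATH; the job ID is a placeholder:

```python
# Query SLURM's accounting database for a finished job's energy use.
# Assumes energy accounting is enabled on the cluster and that sacct
# is on PATH; "12345678" is a placeholder job ID.
import subprocess

result = subprocess.run(
    ["sacct", "-j", "12345678", "--format=JobID,Elapsed,ConsumedEnergy", "-P"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```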