HiPerGator Hardware Specification Sheet
To put these numbers in context, consider the following comparisons:
- The 1,100-teraflop speed of HiPerGator 2.0 is the same as the combined speed of 600 PlayStation 4 or 840 Xbox One gaming stations, if they could be made to work together effectively and efficiently on large data and complex problems.
- The human brain is estimated to have the power equivalent of 20 petaflops.
- The 120 terabytes of RAM (not the 2 petabytes of long-term disk storage) of HiPerGator 2.0 can hold 240 million books.
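A quick back-of-the-envelope check of the comparisons above. The console ratings and the average book size are assumptions, not figures from this spec sheet (roughly 1.84 peak TFLOPS for a PlayStation 4, 1.31 for an Xbox One, and about 0.5 MB of text per book):

```python
# Back-of-the-envelope check of the HiPerGator 2.0 comparisons.
# Assumed figures (not from the spec sheet): PS4 ~1.84 TFLOPS peak,
# Xbox One ~1.31 TFLOPS peak, average book ~0.5 MB of text.
HIPERGATOR_TFLOPS = 1100
PS4_TFLOPS = 1.84
XBOX_ONE_TFLOPS = 1.31

print(round(HIPERGATOR_TFLOPS / PS4_TFLOPS))       # ~600 PlayStation 4 consoles
print(round(HIPERGATOR_TFLOPS / XBOX_ONE_TFLOPS))  # ~840 Xbox One consoles

RAM_BYTES = 120e12   # 120 terabytes of RAM
BOOK_BYTES = 0.5e6   # ~0.5 MB per book
print(int(RAM_BYTES / BOOK_BYTES))  # 240 million books
```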
Nodes and processors
| Year | Family | Node Model | Chip Model | Nodes | Sockets | Cores | RAM (GB) |
Network and node interconnect
The nodes are connected by a Mellanox 56 Gbit/s FDR InfiniBand interconnect for fast data access to storage and for distributed-memory parallel processing. The core switches use the 100 Gbit/s EDR InfiniBand standard. A 10 Gbit/s Ethernet network is used for management and for providing the operating system to the diskless nodes, and a separate 10 Gbit/s Ethernet network connects the nodes to support the planned new services of running virtual machines and virtual clusters on HiPerGator.
The storage options for HiPerGator are described on the Storage Types policy page. The current total size of HiPerGator storage is 2 petabytes (PB). The default path for blue storage is /ufrc. The default path for orange storage is /orange.
Electrical power and performance measures
Some researchers are interested in monitoring the electrical power consumption of their computations. The SLURM scheduler generates a power summary at the end of a job; by using SLURM directives you can control many details of your job, such as whether all tasks run on a single node and whether the node has Intel or AMD processors. However, because HiPerGator is a production system, we do not allow users to change processor frequencies or to conduct other research that would require fully dedicated hardware and root access.
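As a sketch of the directives mentioned above, the batch script below pins all tasks to a single node and requests a particular processor family. The feature name `intel` and the program name `my_application` are assumptions for illustration; the actual node features available to `--constraint` are site-specific, so check the HiPerGator documentation for the exact names.

```shell
#!/bin/bash
#SBATCH --job-name=power_test     # job name shown in the queue
#SBATCH --nodes=1                 # run all tasks on a single node
#SBATCH --ntasks=4                # number of tasks on that node
#SBATCH --mem=8gb                 # memory for the job
#SBATCH --time=01:00:00           # wall-time limit
#SBATCH --constraint=intel        # request Intel (rather than AMD) nodes;
                                  # feature name is an assumed example

# SLURM appends its resource/power summary when the job completes.
srun ./my_application
```

Submit with `sbatch script.sh`; the `#SBATCH` lines are read by the scheduler, not executed by the shell.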