Upgraded Login Nodes and Load Balancer Infrastructure

The UFRC team is pleased to announce significant upgrades to our load balancer infrastructure and login node service, which will go into production on the morning of Monday, April 9. New ssh logins will automatically be routed to the new equipment, or you can access the new login servers directly at hpg.rc.ufl.edu. Existing ssh sessions on the current service will continue to function until Wednesday, April 11, at which point the old equipment will be shut down.
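For users who want to reach the new login nodes directly rather than waiting for the automatic routing, a connection looks like the following sketch. The hostname comes from the announcement above; the username shown is a placeholder, so substitute your own account name:

```shell
# Connect directly to the new login node farm (hostname from the announcement above).
# "your_username" is a placeholder -- replace it with your own HiPerGator account name.
ssh your_username@hpg.rc.ufl.edu
```

If you have host-specific settings (keys, forwarding options) pinned to the old login hostname in your ssh configuration, you may want to update them to the new hostname before the old equipment is retired.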

The new load balancer replaces a unit that HiPerGator had outgrown, dramatically increasing both network throughput (an 80x improvement) and processing capability. Both improvements address performance bottlenecks observed in the existing setup.

The new login node farm is a similarly dramatic upgrade over the current configuration in both size and capability. The number of login nodes is doubled, and each node has significant upgrades in CPU, memory, and network bandwidth: the CPUs are much more capable, core counts are doubled, RAM is increased by a factor of four, and the network connection is 10 Gbps instead of just 1 Gbps. The new login nodes are identical to the nodes in the hpg2-compute partition from a hardware perspective and run Red Hat Enterprise Linux version 7, just like the rest of the compute nodes.

We believe these upgrades will substantially improve the interactive user experience on HiPerGator. Simple activities such as compiling code and downloading files from the Internet, which had previously performed poorly or been impractical on the existing login nodes, should work well on the new setup and will no longer force users onto resources like dtn1 and/or the dev nodes for such tasks.