Storage provided by UFIT Research Computing (UFRC) is only for research and educational data, code, and documents for use on HiPerGator and with HiPerGator services. UFRC creates, modifies, and enforces quotas based on group allocations. In addition, UFRC reserves the right to delete, move, or otherwise make data unavailable on any storage system as deemed necessary by UFRC personnel to maintain the overall quality of service.
While UFRC makes every effort to maintain the availability and integrity of its storage systems, they are not backed up by default. Users are responsible for purchasing backup services or setting up their own backups of their data.
Each HiPerGator user is provided 40GB of home directory storage. The home area is intended for source code, scripts, and project documents and must not be used for input/output (I/O, i.e. reading or writing) from jobs and programs. Limited file recovery is available via daily snapshots of home areas for one week and weekly snapshots for three additional weeks.
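As a quick check against the 40GB home quota, standard tools can report current usage. A minimal sketch using only portable commands (HiPerGator may also provide site-specific quota utilities; none are assumed here):

```shell
# Report total home-directory usage in human-readable form;
# compare the result against the 40GB home quota.
du -sh "$HOME"
```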
Blue is the main shared storage. Jobs and programs are expected to perform their I/O and write their outputs to Blue storage. An investor may request a free Blue storage quota increase for up to three months once per year. Additional ‘burst’ storage with a recurring 30-day grace period may be allocated based on project need.
Orange storage is intended for long-term retention of data that is not actively involved in job computations and for light-duty services such as static data serving. For more information about the static data services, please submit a support request at https://support.rc.ufl.edu.
An active Blue allocation is required to acquire an Orange allocation.
Red is high-performance shared storage intended for short-term use by jobs in projects that require the highest I/O performance. Red storage allocations cannot be purchased; they are assigned based on need, subject to Director approval. Data is removed 24 hours after the allocation expires.
Local Scratch Storage
Local job scratch storage on compute nodes is configured automatically for each job. There is no quota, and the local scratch data for the job is removed when the job ends. UFRC will make a reasonable effort to manage free space on local scratch storage, but user processes may fill up local storage. UFRC is not responsible for job failures resulting from full local scratch.
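A job can stage intermediate files in local scratch and copy anything worth keeping back to shared storage before it exits. A minimal sketch, assuming the scheduler exports a per-job TMPDIR (with a /tmp fallback so the sketch also runs outside a job; the /blue destination path is a placeholder, not an actual group path):

```shell
#!/bin/bash
# Use the per-job local scratch directory; fall back to /tmp
# when no per-job TMPDIR is set.
SCRATCH="${TMPDIR:-/tmp}/scratch_demo_$$"
mkdir -p "$SCRATCH"

# Do intermediate I/O in local scratch rather than the home area.
echo "intermediate data" > "$SCRATCH/work.txt"
wc -c < "$SCRATCH/work.txt"   # byte count of the staged file

# Copy results off local scratch before the job ends; local scratch
# is removed automatically when the job completes.
# cp "$SCRATCH/work.txt" /blue/mygroup/$USER/   # placeholder destination
rm -rf "$SCRATCH"
```

Because local scratch is purged at job end and may fill up, the copy-back step should be part of the job script itself rather than a manual afterthought.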