Frequently Asked Questions

Accounts

How do I obtain an account?
Fill out the Account Request form. If you are not a member of the University of Florida or do not have a GatorLink account, ask your faculty sponsor to submit an account request for you. Requests submitted by non-faculty members must be confirmed by faculty sponsors prior to account creation. Accounts are generally created within twenty-four hours of receiving a request, but may take longer if the faculty sponsor does not reply promptly to a verification request.
How do I change my password?
If you know your current password, visit the change password page or use the passwd command on the Linux/Unix command line. If you do not know your current password, visit the reset password page.
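For example, to change your password from the Linux command line on one of the log-in or test nodes:
$ passwd
You will typically be prompted for your current password and then for the new one.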
My password has expired. What do I do?
Account passwords expire every six months. You can change your password before it expires or reset your password after it expires. In the password reset process, we will send a temporary password to your email address of record. You must then use that password to set your permanent password.
How do I get help?
If you encounter problems or have questions, please open a support ticket. Support tickets provide a traceable, permanent record of your issue and are reviewed daily to ensure they are addressed as quickly as possible. You may also contact us by email or visit our offices in person.
Back to Top

Storage

What types of storage are available and how should each type be used?
Please see our Storage Services page.
Can my scratch storage quota be increased?
You can request a temporary quota increase. Submit a support request and indicate: (1) how much additional space you need; (2) the file system on which you need it; and (3) how long you will need it. Additional space is granted at the discretion of Research Computing on an “as available” basis for short periods of time. If you need more space on a long-term basis, please review our storage options and contact us to discuss an appropriate solution for your needs.  
How can I check my scratch storage quota?
Please see our how-to guide for this topic.
Why can't I run jobs in my home directory?
Home directories are intended for relatively small amounts of human-readable data such as text files, PDFs, Word documents, shell scripts, and source code. Neither the servers nor the file systems on which the home directories reside can sustain the I/O load generated by a large cluster, and overall system response would be severely degraded if they were subjected to such a load. This is by design and is the reason all job I/O must take place on the scratch or, to a lesser extent, the project file systems.
Back to Top

Available Software

What applications are available?
The full list of applications installed on the cluster is available at the Installed Software wiki page.
May I submit an installation request for an application?

Yes, if the software you need is not listed on our Installed Software page, you may submit a support request to have it installed by Research Computing staff. Please observe the following guidelines:

  1. Provide a link to the web site from which to download the software
  2. If there are multiple versions, be specific about the version you want
  3. Let us know if you require any options that are not a standard part of the application

You may also install applications yourself in your home directory.

Please only ask us to install applications that you know will meet your needs and that you intend to use extensively. We do not have the resources to build applications for testing and evaluation purposes.

Why do I get the "command not found" error message?

The Linux command interpreter (shell) maintains a list of directories in which to look for commands entered on the command line. This list is kept in the PATH environment variable. If the full path to a command is not specified, the shell searches the directories listed in PATH and, if no match is found, you will get the "command not found" message. A similar mechanism exists for dynamically linked libraries via the LD_LIBRARY_PATH environment variable.
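For example, if an application (here the hypothetical my_app) is installed in a directory that is not listed in your PATH, the shell cannot find it:

$ my_app
-bash: my_app: command not found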

To ease the burden of setting and resetting environment variables for different applications, we have installed a "modules" system. Each application has an associated module which, when loaded, sets or resets whatever environment variables are required to run that application, including the PATH and LD_LIBRARY_PATH variables.

The easiest way to avoid "command not found" messages is to ensure that you have loaded the module for your application. See Modules for more information.
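A typical session might look like the following sketch, where the module name my_app is illustrative; use module avail to list the modules actually installed:

$ module avail
$ module load my_app
$ my_app -in file1 -out file2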

Back to Top

Job Management

What is a batch system?
A batch system is a class of computer software whose primary purpose is to collect computational tasks into one or more queues, and to schedule those tasks on available compute resources.
What is the difference between a batch job and an interactive job?
A batch job is submitted to the batch system via a job script passed to the qsub command. Once queued, a batch job will run on resources chosen by the scheduler. When a batch job runs, a user cannot interact with it. An interactive job is any process that is run at the command line prompt. Interactive processes can be run on the test nodes or in a simulated interactive environment obtained by running the qsub -I command.
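For example, a simulated interactive session might be requested as follows; the resource requests shown are illustrative:

$ qsub -I -l nodes=1:ppn=1,walltime=01:00:00

Once the scheduler allocates the requested resources, you will be given a command prompt on a compute node.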
How do I submit a job to the batch system?
The primary job submission mechanism is the qsub command, issued from the Linux command line interface
$ qsub my_job_script
where my_job_script is a file containing the commands that the batch system will execute on your behalf. Jobs may also be submitted to the batch system through the Galaxy web interface as well as the Open Science Grid's Globus interface.
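A minimal job script might look like the following sketch; the job name, resource requests, module, and program are illustrative:

#!/bin/bash
#PBS -N my_job
#PBS -o my_job.out
#PBS -e my_job.err
#PBS -l nodes=1:ppn=1
#PBS -l walltime=01:00:00
# change to the directory from which the job was submitted
cd $PBS_O_WORKDIR
# load any modules your program requires
module load intel
# run the program
./my_app -in file1 -out file2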
Why do I get a "Unauthorized Request MSG=group ACL is not satisfied" error when I submit a job?
Some of the queues have group-based access control lists (ACLs) enabled. If you submit a job and the system returns an error that looks like "qsub: Unauthorized Request MSG=group ACL is not satisfied: user user@submit.ufhpc, queue queuename", it means that the group you are in does not satisfy the ACL for the queue in question. If you would like to request access to a particular queue, please open a support request. In your request, be sure to include your group name and the name of the queue you would like to access.
How do I check the status of my jobs?
You can use either of the following commands to see the status of all of your jobs:
$ qstat -u username
$ showq -u username
Both commands have other options you may find useful. See the qstat and showq manual pages for more information.
Why isn't my job running?
You can use the checkjob command to see why your job is not running. To use it, type checkjob followed by your job ID. Near the bottom of the checkjob output, there are two lines of particular interest: a NOTE field followed by a reason your job is idling, or a BLOCK MSG field followed by the reason your job is blocked. Please see the checkjob man page for examples and further explanations of the reasons that follow the NOTE field.
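For example, with a hypothetical job ID of 12345:
$ checkjob 12345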
What is the difference between eligible jobs and blocked jobs?
An "eligible" job is eligible to run and the scheduler is simply waiting for the resources to become available in order to schedule the job. Jobs are scheduled according to priority and a job's priority is determined by a number of factors. A "blocked" job is not being considered for scheduling. This is usually because the user or group associated with the job has exceeded some resource limit such as the maximum number of running jobs or PEs at a given time. As the user's or group's jobs end, blocked jobs will transition from blocked to eligible. This is a normal function of the scheduler which ensures that no single user or group exceeds their share of the available resources.
How do I delete a job from the batch system?
You can use the qdel command to delete jobs from the queue. You can only delete jobs that you submitted.
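For example, to delete a job with a hypothetical ID of 12345:
$ qdel 12345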
Why did my job die immediately with the message "/bin/bash: bad interpreter: No such file or directory"?
This is typically caused by hidden characters, most often DOS-style carriage returns, in your job script that the command interpreter does not understand. If you created your script on a Windows machine and copied it to the cluster, you should run
$ dos2unix my_script
where my_script is your job script. This will remove any characters not recognized by Linux command interpreters from the text file.
What are the wall time limits for each queue?
Queue      Default Wall Time    Maximum Wall Time
investor   12:00:00             744:00:00
other      06:00:00             168:00:00
testq      00:10:00             00:30:00
parallel   01:00:00             48:00:00
merzberg   00:10:00             744:00:00
bio        12:00:00             744:00:00
bigmem     12:00:00             336:00:00
gpu        00:10:00             744:00:00
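If your job needs more than the default wall time for its queue, request it explicitly in your job script; the value shown is illustrative and must not exceed the queue's maximum:

#PBS -l walltime=24:00:00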
How can I view the output of a running job?
By default, every process under Linux/Unix has three file descriptors or "handles" associated with it. These are referred to as the program's standard input, standard output, and standard error, or stdin, stdout, and stderr. When you work interactively, stdin is generally connected to your input device (i.e. keyboard), while stdout and stderr are connected to your output device (i.e. terminal display). The command shell provides a mechanism by which you can redirect a program's stdin, stdout, and stderr to or from files on the file system rather than a terminal device. The batch system takes advantage of this mechanism to redirect the stdout and stderr of your job script to files on the host on which the job runs. When the job is finished, the batch system copies these files from the local host to your home directory or to the files designated by the "#PBS -o <path>" and "#PBS -e <path>" directives, where "-o" designates where the stdout file should be copied and "-e" designates where the stderr file should be copied.

The programs you run within your job script are "children" of the job script you submit. As such, they inherit the file descriptors of their parent (i.e. the job script). This means that, unless you intervene, the output of the programs run within your script will be merged with the stdout and stderr of the script and written to files on the execution host. Since users do not have access to the execution hosts, this makes it difficult to monitor the progress of your jobs.

There is, however, a solution. You can take advantage of the shell's redirection feature and redirect the output of the individual programs in your job script to files on the scratch file system. When you do so, the output of those programs will be written to the specified files rather than merged with the output of the job script, and will be available for you to view while your job is running. In the example below, stdout (file descriptor 1) is redirected to "log.out" and stderr (file descriptor 2) is redirected to "log.err".
my_app -in file1 -out file2 1> log.out 2> log.err
In the next example, stdout and stderr are merged into a single "stream" and redirected to the file "log.out".
my_app -in file1 -out file2 > log.out 2>&1
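While the job is running, you can then follow the redirected output from the directory in which your job writes it, for example with the standard tail utility:

$ tail -f log.out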
Back to Top

Compiling and Development

How do I set up my development environment?
The default development environment is that of the GNU Compiler Collection as distributed with RedHat Enterprise Linux. The Intel Compiler Suite is also available. Generally speaking, we use modules to manage our software environment, including the PATH and LD_LIBRARY_PATH environment variables. To use any available software package that is not part of the default environment, including compilers, you must load the associated modules. For example, to use the Intel compilers and link against the fftw3 libraries you would first run
$ module load intel
$ module load fftw
which may be collapsed to the single command
$ module load intel fftw
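You might then compile and link a Fortran program against FFTW roughly as follows; the source file name is illustrative, and depending on how the fftw module sets up your environment you may also need explicit include (-I) and library (-L) paths:

$ ifort -o my_program my_program.f90 -lfftw3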
What compilers are available?
We have two compiler suites: the GNU Compiler Collection (GCC) and the Intel Compiler Suite (Composer XE). The default environment provides access to the GNU Compiler Collection, while Composer XE may be accessed by loading the intel module.
On what hosts may I develop and test software?
You should use the interactive test nodes for software development and testing. These nodes are kept consistent with the software environment on the computational servers so that you can be assured that if it works on a test machine, it will work via the batch system. The names for the interactive test nodes are dev1 and dev2. To develop on these nodes, you must log in to one of them from the log-in server as follows:
gator3$ ssh dev1
Note that the software environment on the log-in servers is also kept consistent with the computational nodes and includes a full development environment. However, you are not allowed to run user applications on the log-in nodes. If you are doing interactive work of any kind, you must log in to one of the interactive test machines.
Back to Top

MATLAB

How do I run MATLAB programs?
You may use the interactive MATLAB interpreter on the test nodes. However, in order to run MATLAB programs through the batch system, you must compile your MATLAB source code into a standalone executable. This is required because there are not enough MATLAB licenses available to run the programs directly. To learn how to compile your MATLAB program please see our MATLAB wiki page.
How do I compile a MATLAB program?
Generally speaking, you will load the MATLAB module and then use the MATLAB compiler, mcc, to compile your MATLAB program(s). See our MATLAB wiki page for more detailed instructions.
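A typical sequence might look like the following sketch; the module name and MATLAB file name are illustrative, and the -m flag asks mcc to build a standalone executable:

$ module load matlab
$ mcc -m my_analysis.m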
Why can't I checkout a MATLAB compiler license?
If you tried to use the MATLAB compiler, mcc, and received the message "Could not check out a compiler license" it is because Research Computing does not have its own MATLAB licenses but relies on the UF campus license. There are a limited number of MATLAB compiler licenses shared by the whole campus. When the license is checked out during an interactive MATLAB session, it does not get checked back in until the MATLAB session is terminated, which could take a long time depending on what the user is doing. Unfortunately, you will not be able to run mcc until a license becomes available.
Back to Top

Galaxy

How can I add large datasets to the Galaxy?
Please see the step-by-step Galaxy large data upload wiki article.
How do I report a Galaxy problem?
See the relevant wiki article.
How do I find out what resources Galaxy tools are requesting?
See the Galaxy PBS Resource Limits wiki page. If you are working on an analysis that requires larger resource requests for certain tools, please submit a support request to have the limits adjusted.
I think I have a Galaxy issue, but I'm not sure about it. What should I do?
You can always open a support request when you have questions, even if you are not sure whether there is an issue. If you'd like, you can first check the list of known Galaxy issues that are already being worked on.
I'd like to use a particular tool, but I can't find it in the Galaxy. What should I do?
Please submit a support request. The tool in question could already be wrapped by someone and available in the Galaxy Tool Shed. If it's in the Tool Shed we can usually make it available in the UF Galaxy instance almost immediately. If the tool is not available in the Galaxy Tool Shed, we can look at the tool to determine if we can "wrap" it into the Galaxy interface and what the timeline for the project may be.
Back to Top