Our user guides can be found at
At a minimum, we recommend that you look at the following links off of this page:
These guides will explain the process of logging into our HPC platforms and show how to build and run a parallel "Hello World" example.
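As a rough sketch of what those guides cover, the typical workflow on a Slurm-based cluster looks like the following. The module name, source file, and script name here are illustrative placeholders, not site defaults; consult the user guide pages for the exact commands on each machine.

```shell
# Hypothetical workflow sketch (names are examples only):
module load openmpi              # load an MPI implementation (exact module name varies by machine)
mpicc -o hello_world hello_world.c   # build the parallel "Hello World"
sbatch hello.sh                  # submit a job script that runs it under Slurm
```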
Our HPC platforms (Mio, BlueM, AuN, and Mc2) are Linux-based machines, and you are expected to know how to "get around" in a Linux environment. The page off of our user's guide:
has a number of links to tutorials. In particular we have a local tutorial:
All of our HPC platforms run the same scheduling software, Slurm. Slurm has three important concepts: exclusivity, partitions, and accounts.
If you are running fewer than N MPI tasks per node, where N is the number of cores on the node, you should add the --exclusive option in your run script or on the sbatch command line. This prevents other jobs from running on the same nodes.
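A minimal job-script sketch using --exclusive might look like the following. The job name, task counts, and executable name are illustrative assumptions, not site defaults.

```shell
#!/bin/bash
# Hypothetical Slurm job script; names and counts are examples only.
#SBATCH --job-name=hello
#SBATCH --nodes=2
#SBATCH --ntasks=8          # fewer tasks than cores per node...
#SBATCH --exclusive         # ...so keep other jobs off these nodes
#SBATCH --time=00:10:00

srun ./hello_world
```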
All nodes on AuN and Mc2 have 16 cores.
The number of cores on Mio nodes can be seen by running the command:
/opt/utility/slurmnodes | egrep "NodeAddr|CPUTot"
Partitions are a collection of nodes.
Mio has a number of partitions: nodes owned by a particular group belong to that group's partition. Most of the nodes on Mio also belong to the default partition, "compute".
AuN has two partitions: "aun", which is the default partition, and "debug". The debug partition allows quicker turnaround for short jobs.
Mc2 has only a single partition, so partition selection is not important on that machine.
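Selecting a partition is done with the standard Slurm --partition option, either on the command line or in the job script. The script name and time limit below are examples, not site defaults.

```shell
# Submit a short job to AuN's "debug" partition (job.sh is a placeholder script name):
sbatch --partition=debug --time=00:05:00 job.sh

# Or, equivalently, inside the job script itself:
#SBATCH --partition=debug
```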
You can see how to run in particular partitions and select particular nodes on the page:
As a quick reference, the command
will show which partitions you can use.
will show which nodes are currently in use.
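If the site-specific commands are unavailable, the standard Slurm commands answer the same questions. These are generic Slurm tools, not necessarily the commands the guide refers to above.

```shell
# Standard Slurm equivalents (the site may provide its own wrapper scripts):
sinfo          # lists partitions, their state, and the nodes in each
squeue         # lists running and pending jobs, showing which nodes are in use
```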
Accounts are only important on AuN and Mc2. On these machines every job must be associated with an account. You can see which accounts you are allowed to use by running the command:
The account number must be specified when you run a job, as discussed in the
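On a standard Slurm installation, accounts can be listed and specified as shown below. These are generic Slurm commands and may differ from the site-specific command mentioned above; the account name is a placeholder.

```shell
# List the accounts your user may charge jobs to:
sacctmgr show associations user=$USER format=Account%20

# Specify the account at submission time (my_account is a placeholder):
sbatch --account=my_account job.sh

# Or inside the job script:
#SBATCH --account=my_account
```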