This is an overview of running interactive jobs on AuN and Mio001. For running on Mc2 the concepts are the same but the procedure is a bit different. The difference exists because the compute nodes on AuN and Mio run a normal operating system, which allows for some shortcuts, while Mc2's compute nodes run a scaled-down OS. In general, the procedures outlined for Mc2 will work on AuN and Mio, but the reverse is not true.
The srun command is multipurpose.
When srun is executed inside of a batch script it launches parallel applications on the nodes that have been allocated to the job. The same is true for a parallel interactive session. Srun can be used to launch a collection of serial applications or a parallel MPI program.
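As a sketch of the batch-script usage described above, the script below allocates nodes with #SBATCH directives and uses srun to launch the parallel program on them. The program name my_mpi_app and the time limit are placeholders, not values from this guide:

```shell
#!/bin/bash
#SBATCH --nodes=2                # request 2 nodes
#SBATCH --ntasks-per-node=8     # up to 8 tasks per node
#SBATCH --time=00:10:00         # placeholder time limit

# Inside the batch job, srun launches the program once per task
# across all allocated nodes. "my_mpi_app" is a placeholder name.
srun ./my_mpi_app
```

You would submit this script with sbatch rather than running it directly.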
However, srun can also be used to request nodes for interactive use. If you use the syntax:
srun -N 2 --ntasks-per-node=8 --pty bash
This requests 2 nodes (-N 2) and states that you will launch a maximum of 8 tasks per node (--ntasks-per-node=8). It also says that you want to run a login shell (bash) on the compute nodes. The option --pty is important: it gives you a login prompt and a session that looks very much like a normal interactive session, except that it is on one of the compute nodes. If you forget the --pty you will not get a login prompt, and every command you enter will be run 16 = (-N 2) x (--ntasks-per-node=8) times.
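To see this replication directly, you can give srun a command instead of a shell. A sketch, assuming the same 2-node, 8-tasks-per-node request as above:

```shell
# Without --pty and with an explicit command, srun runs that
# command once per task: here hostname runs 16 times in total,
# 8 times on each of the two allocated nodes.
srun -N 2 --ntasks-per-node=8 hostname
```

The output is simply the node names repeated, which makes the "one command, many tasks" behavior easy to verify.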
Note: you can add any "normal" options to the srun line, such as -p for partition or -t for runtime. For example, to run on the GPU nodes you would enter:
srun -N 2 -p gpu --ntasks-per-node=8 --pty bash
After you enter the srun command you will be put into the normal queue to wait for nodes to become available. When they are, you will get an interactive session on a compute node, in the directory from which you launched the session. You can then run commands. Note: the compute nodes have a subset of the commands available on the head node. Also, the environment you get on the compute nodes is determined by a combination of three things: