How to Submit Interactive Jobs
There are different ways to submit interactive jobs.
Using qsub
The qsub command is patched locally to handle interactive jobs, so in most cases you can use the qsub command as before:
[xwang@pitzer-login04 ~]$ qsub -I -l nodes=1 -A PZS0712
salloc: Pending job allocation 15387
salloc: job 15387 queued and waiting for resources
salloc: job 15387 has been allocated resources
salloc: Granted job allocation 15387
salloc: Waiting for resource configuration
salloc: Nodes p0601 are ready for job
...
[xwang@p0601 ~]$ # can now start executing commands interactively
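If you prefer native Slurm syntax, a roughly equivalent interactive request can be made with salloc directly. This is a sketch modeled on the salloc example later in this section; adjust the account and resource options to your own needs:
[xwang@pitzer-login04 ~]$ salloc -N 1 -A PZS0712 srun --pty /bin/bash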
Using sinteractive
You can use the custom tool sinteractive as:
[xwang@pitzer-login04 ~]$ sinteractive
salloc: Pending job allocation 14269
salloc: job 14269 queued and waiting for resources
salloc: job 14269 has been allocated resources
salloc: Granted job allocation 14269
salloc: Waiting for resource configuration
salloc: Nodes p0591 are ready for job
...
...
[xwang@p0593 ~] $ # can now start executing commands interactively
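Once the interactive shell starts, you can confirm where the session is running with standard Slurm commands; the commands below are illustrative:
[xwang@p0593 ~]$ hostname                 # the compute node you landed on
[xwang@p0593 ~]$ echo $SLURM_JOB_ID       # the job ID of this interactive session
[xwang@p0593 ~]$ squeue -j $SLURM_JOB_ID  # details of the running interactive job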
Using salloc
It is a little more complicated if you use salloc directly. Below is a simple example:
[user@pitzer-login04 ~] $ salloc -t 00:30:00 --ntasks-per-node=3 srun --pty /bin/bash
salloc: Pending job allocation 2337639
salloc: job 2337639 queued and waiting for resources
salloc: job 2337639 has been allocated resources
salloc: Granted job allocation 2337639
salloc: Waiting for resource configuration
salloc: Nodes p0002 are ready for job
# normal login display
[user@p0002 ~]$ # can now start executing commands interactively
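When you are finished, type exit to leave the interactive shell; salloc then releases the allocation (the exact messages may vary):
[user@p0002 ~]$ exit
exit
salloc: Relinquishing job allocation 2337639
[user@pitzer-login04 ~]$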
How to Submit Non-interactive Jobs
Submit a PBS Job Script
Since the compatibility layer is installed, your current PBS scripts may still work as they are; submit them as you did before to see whether they run successfully. Below is a simple PBS job script, pbs_job.txt, that calls for a parallel run:
#PBS -l walltime=1:00:00
#PBS -l nodes=2:ppn=40
#PBS -N hello
#PBS -A PZS0712
cd $PBS_O_WORKDIR
module load intel
mpicc -O2 hello.c -o hello
mpiexec ./hello > hello_results
Submit this script on Pitzer using the command qsub pbs_job.txt; the job is scheduled successfully, as shown below:
[xwang@pitzer-login04 slurm]$ qsub pbs_job.txt
14177
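While the job is queued or running, you can check on it with standard Slurm commands, for example:
[xwang@pitzer-login04 slurm]$ squeue -j 14177    # status of this job
[xwang@pitzer-login04 slurm]$ squeue -u $USER    # all of your queued and running jobs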
Check the Job
You can use the jobscript command to check the job information:
[xwang@pitzer-login04 slurm]$ jobscript 14177
-------------------- BEGIN jobid=14177 --------------------
#!/bin/bash
#PBS -l walltime=1:00:00
#PBS -l nodes=2:ppn=40
#PBS -N hello
#PBS -A PZS0712
cd $PBS_O_WORKDIR
module load intel
mpicc -O2 hello.c -o hello
mpiexec ./hello > hello_results
-------------------- END jobid=14177 --------------------
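Beyond the script itself, scontrol shows the full record Slurm keeps for the job, and scancel cancels it if needed (output omitted here):
[xwang@pitzer-login04 slurm]$ scontrol show job 14177   # full job record (state, nodes, limits, ...)
[xwang@pitzer-login04 slurm]$ scancel 14177             # cancel the job if necessary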
Notice the line #!/bin/bash added at the beginning of the job script in the output. This line is added by Slurm's qsub compatibility script because Slurm job scripts must have #!<SHELL> as their first line. You will get this message explicitly if you submit the script using the command sbatch pbs_job.txt:
[xwang@pitzer-login04 slurm]$ sbatch pbs_job.txt
sbatch: WARNING: Job script lacks first line beginning with #! shell. Injecting '#!/bin/bash' as first line of job script.
Submitted batch job 14180
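To avoid the warning, you can add the shell line yourself as the first line of pbs_job.txt and leave the rest of the script unchanged:
#!/bin/bash
#PBS -l walltime=1:00:00
#PBS -l nodes=2:ppn=40
#PBS -N hello
#PBS -A PZS0712
cd $PBS_O_WORKDIR
module load intel
mpicc -O2 hello.c -o hello
mpiexec ./hello > hello_results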
Alternative Way: Convert PBS Script to Slurm Script
Alternatively, you can convert the PBS job script (pbs_job.txt) to a Slurm script (slurm_job.txt) before submitting the job. The table below compares the two scripts (see this page for more information on the job submission options):
Explanations | Torque | Slurm |
---|---|---|
Line that specifies the shell | No need | #!/bin/bash |
Resource specification | #PBS -l walltime=1:00:00<br>#PBS -l nodes=2:ppn=40<br>#PBS -N hello<br>#PBS -A PZS0712 | #SBATCH --time=1:00:00<br>#SBATCH --nodes=2 --ntasks-per-node=40<br>#SBATCH --job-name=hello<br>#SBATCH --account=PZS0712 |
Variables, paths, and modules | cd $PBS_O_WORKDIR<br>module load intel | cd $SLURM_SUBMIT_DIR<br>module load intel |
Launch and run application | mpicc -O2 hello.c -o hello<br>mpiexec ./hello > hello_results | mpicc -O2 hello.c -o hello<br>srun ./hello > hello_results |
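Putting the right-hand column together, the converted slurm_job.txt looks like this (assembled from the table above):
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=2 --ntasks-per-node=40
#SBATCH --job-name=hello
#SBATCH --account=PZS0712
cd $SLURM_SUBMIT_DIR
module load intel
mpicc -O2 hello.c -o hello
srun ./hello > hello_results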
Note that cd $SLURM_SUBMIT_DIR can be omitted in the Slurm script because a Slurm job always starts in the submission directory, unlike the Torque/Moab environment, where a job always starts in your home directory. Once the script is ready, submit it using the command sbatch slurm_job.txt:
[xwang@pitzer-login04 slurm]$ sbatch slurm_job.txt
Submitted batch job 14215
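As before, you can watch the job with squeue while it is pending or running, and once it finishes, inspect the output file written by the script, for example:
[xwang@pitzer-login04 slurm]$ squeue -j 14215     # check whether the job is still in the queue
[xwang@pitzer-login04 slurm]$ cat hello_results   # program output redirected by the script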