TURBOMOLE is an ab initio computational chemistry program that implements various quantum chemistry algorithms. It is focused on efficiency, notably using the resolution of the identity (RI) approximation.
Availability and Restrictions
Versions
These versions are currently available (S means serial executables, O means OpenMP executables, and P means parallel MPI executables):
Version | Owens | Pitzer
---|---|---
7.1 | SOP |
7.2.1 | SOP* |
7.3 | SOP* |
You can use module spider turbomole to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
Access for Academic Users
Use of Turbomole for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.
Publisher/Vendor/Repository and License Type
COSMOlogic, Commercial
Usage
Usage on Owens and Pitzer
Set-up on Owens
To load the default version of the Turbomole module on Owens, use module load turbomole
for both serial and parallel programs. To select a particular software version, use module load turbomole/version
. For example, use module load turbomole/7.1
to load Turbomole version 7.1 for both serial and parallel programs on Owens.
Using Turbomole on Owens
To execute a Turbomole program, first load the module, then run the program:
module load turbomole
<turbomole command>
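As a concrete illustration, a minimal serial run might look like the following sketch. The working directory name is hypothetical; dscf is the SCF program used in the batch examples later on this page, and it assumes a prepared Turbomole input (control file, coordinates, etc.) already exists in the directory:

```shell
# Load the default Turbomole module
module load turbomole
# Change into a directory containing a prepared Turbomole input
# (the directory name here is hypothetical)
cd my_turbomole_job
# Run the SCF program serially and capture its output
dscf > dscf.out
```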
Batch Usage on Owens
When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node. Batch execution is preferable for large problems since more resources can be used.
Interactive Batch Session
For an interactive batch session, one can run the following command:
qsub -I -l nodes=1:ppn=28 -l walltime=00:20:00
which requests one node with 28 cores (-l nodes=1:ppn=28) and a walltime of 20 minutes (-l walltime=00:20:00). You may adjust the numbers per your need.
Sample batch scripts and input files are available here:
~srb/workshops/compchem/turbomole/
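To experiment with the samples, they can be copied into your own working directory; a sketch (the destination directory name is hypothetical):

```shell
# Copy the OSC-provided Turbomole examples to a local directory for editing
cp -r ~srb/workshops/compchem/turbomole/ ./turbomole-examples
```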
Note for Slurm job script
Upon Slurm migration, the presets for parallel jobs are not compatible with the Slurm environment of Pitzer. Users must set up the parallel environment explicitly to get the correct TURBOMOLE binaries.
To set up an MPI case, add the following to a job script:
export PARA_ARCH=MPI
export PATH=$TURBODIR/bin/`sysname`:$PATH
An example script:
#!/bin/bash
#SBATCH --job-name="turbomole_mpi_job"
#SBATCH --nodes=2
#SBATCH --time=0:10:0

module load intel
module load turbomole/7.3
export PARA_ARCH=MPI
export PATH=$TURBODIR/bin/`sysname`:$PATH
export PARNODES=$SLURM_NTASKS
dscf
To set up an SMP (OpenMP) case, add the following to a job script:
export PARA_ARCH=SMP
export PATH=$TURBODIR/bin/`sysname`:$PATH
An example script to run a SMP job on an exclusive node:
#!/bin/bash
#SBATCH --job-name="turbomole_smp_job"
#SBATCH --nodes=1
#SBATCH --exclusive
#SBATCH --time=0:10:0

module load intel
module load turbomole/7.3
export PARA_ARCH=SMP
export PATH=$TURBODIR/bin/`sysname`:$PATH
export OMP_NUM_THREADS=$SLURM_CPUS_ON_NODE
dscf
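Either job script can be submitted and monitored with the standard Slurm commands; a sketch, where the script filename is hypothetical:

```shell
# Submit the job script to the batch system
sbatch turbomole_smp_job.sh
# Check the status of your jobs in the queue
squeue -u $USER
```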