The Intel compilers for C/C++ and Fortran are available on OSC clusters.
Availability and Restrictions
Versions
The versions currently available at OSC are:
Version | Owens | Pitzer | Ascend | Cardinal | Notes
---|---|---|---|---|---
16.0.3 | X | | | |
16.0.8 | X | | | | Security update
17.0.2 | X | | | |
17.0.5 | X | | | |
17.0.7 | X | X | | | Security update
18.0.0 | X | | | |
18.0.2 | X | | | |
18.0.3 | X | X | | |
18.0.4 | X | | | |
19.0.3 | X | X | | |
19.0.5 | X* | X* | | |
19.1.3 | X | X | | |
2021.3.0 | X | X | | | oneAPI compiler/library
2021.4.0 | | | X* | | oneAPI compiler/library
2021.5.0 | X | X | | | oneAPI compiler/library
2021.10.0 | | | | X |

* Current default version
You can use module spider intel to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
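For example, to see which Intel compiler versions are installed on the cluster you are logged into and load one of them (the version shown here is illustrative; pick one from the table above):

```
module spider intel            # list all available Intel compiler versions
module spider intel/19.0.5     # show details and prerequisites for a specific version
module load intel/19.0.5       # load that version into your environment
```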
Access
The Intel Compilers are available to all OSC users. If you have any questions, please contact OSC Help.
Publisher/Vendor/Repository and License Type
Intel, Commercial (state-wide)
Usage
Usage on Owens
Set-up on Owens
After you ssh to Owens, the default version of Intel compilers will be loaded for you automatically.
Using the Intel Compilers
Once the intel compiler module has been loaded, the compilers are available for your use. See our compilation guide for suggestions on how to compile your software on our systems. The following table lists common compiler options available in all languages.
COMPILER OPTION | PURPOSE
---|---
-c | Compile only; do not link
-DMACRO[=value] | Defines preprocessor macro MACRO with optional value (default value is 1)
-g | Enables debugging; disables optimization
-I/directory/name | Adds /directory/name to the list of directories to be searched for #include files
-L/directory/name | Adds /directory/name to the list of directories to be searched for library files
-lname | Adds the library libname.a or libname.so to the list of libraries to be linked
-o outfile | Names the resulting executable outfile instead of a.out
-UMACRO | Removes definition of MACRO from preprocessor
-v | Emit version including gcc compatibility; see below
Optimization Options |
-O0 | Disable optimization
-O1 | Light optimization
-O2 | Heavy optimization (default)
-O3 | Aggressive optimization; may change numerical results
-ipo | Inline function expansion for calls to procedures defined in separate files
-funroll-loops | Loop unrolling
-parallel | Automatic parallelization
-openmp | Enables translation of OpenMP directives
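As an illustration, a typical invocation that combines several of the options above might look like the following (the file and directory names are hypothetical):

```
# Compile mycode.c with optimization, an extra include path, a library search
# path, and the math library, producing an executable named "mycode"
icc -O2 -I$HOME/include -L$HOME/lib -o mycode mycode.c -lm
```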
The following table lists some options specific to C/C++:

COMPILER OPTION | PURPOSE
---|---
-strict-ansi | Enforces strict ANSI C/C++ compliance
-ansi | Enforces loose ANSI C/C++ compliance
-std=val | Conform to a specific language standard
The following table lists some options specific to Fortran:

COMPILER OPTION | PURPOSE
---|---
-convert big_endian | Use unformatted I/O compatible with Sun and SGI systems
-convert cray | Use unformatted I/O compatible with Cray systems
-i8 | Makes 8-byte INTEGERs the default
-module /dir/name | Adds /dir/name to the list of directories searched for Fortran 90 modules
-r8 | Makes 8-byte REALs the default
-fp-model strict | Disables optimizations that can change the results of floating point calculations
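For example, a Fortran build that uses some of these options might look like this (the file and directory names are hypothetical):

```
# Promote default REALs to 8 bytes, keep strict floating-point semantics,
# and search a custom directory for Fortran module files
ifort -O2 -r8 -fp-model strict -module $HOME/fortran_modules -o mysim mysim.f90
```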
The Intel compilers use the GNU tools on the clusters: header files, libraries, and linker. This is referred to as Intel and GNU compatibility and interoperability. Use the Intel compiler option -v to see the gcc version that is currently specified. Most users will not have to change this; however, the gcc version can be controlled by users in several ways.

On OSC clusters the default mechanism of control is based on modules. The most noticeable aspect of interoperability is that some parts of some C++ standards are available by default in various versions of the Intel compilers; other parts require you to load an extra module. The C++ standard can be specified with the Intel compiler option -std=val; see the compiler man page for valid values of val. If you specify a particular standard, load the corresponding module; the most common Intel compiler version and C++ standard combinations applicable to this cluster are described below:
For the C++14 standard with an Intel 16 compiler:
module load cxx14
With an Intel 17 or 18 compiler, module cxx17 will be automatically loaded by the intel module load command to enable the GNU tools necessary for the C++17 standard. With an Intel 19 compiler, module gcc-compatibility will be automatically loaded by the intel module load command to enable the GNU tools necessary for the C++17 standard. (In early 2020 OSC changed the names of these GNU-tool-controlling modules to clarify their purpose and because our underlying implementation changed.)
A symptom of broken gcc compatibility is unusual or non sequitur compiler errors, typically involving the C++ standard library and especially template instantiation, for example:
error: more than one instance of overloaded function "std::to_string" matches the argument list:
    detected during: instantiation of "..."
error: class "std::vector<std::pair<short, short>, std::allocator<std::pair<short, short>>>" has no member "..."
    detected during: instantiation of "..."
An alternative way to control compatibility and interoperability is with Intel compiler options; see the "GNU gcc Interoperability" sections of the various Intel compiler man pages for details.
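As a quick sanity check, you can confirm which gcc version the Intel compiler is paired with and request a specific standard explicitly (the file names are hypothetical):

```
icpc -v                                      # reports the compiler version and gcc compatibility version
icpc -std=c++17 -O2 -o mycode mycode.cpp     # compile against an explicit C++ standard
```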
Batch Usage on Owens
When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.
Interactive Batch Session
For an interactive batch session on Owens, one can run the following command:

sinteractive -A <project-account> -N 1 -n 28 -t 1:00:00

which gives you 1 node with 28 cores (-N 1 -n 28) and 1 hour (-t 1:00:00). You may adjust the numbers per your need.
Non-interactive Batch Job (Serial Run)
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script file will use the input file named hello.c and the output file named hello_results. Below is the example batch script (job.txt) for a serial run:
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=1 --ntasks-per-node=28
#SBATCH --job-name jobname
#SBATCH --account=<project-account>

module load intel
cp hello.c $TMPDIR
cd $TMPDIR
icc -O2 hello.c -o hello
./hello > hello_results
cp hello_results $SLURM_SUBMIT_DIR
In order to run it via the batch system, submit the job.txt file with the following command:
sbatch job.txt
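The hello.c used in this example can be any serial C program; a minimal, purely illustrative placeholder is:

```
/* hello.c - minimal placeholder program for the serial example above */
#include <stdio.h>

int main(void)
{
    printf("Hello from the Intel compiler example\n");
    return 0;
}
```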
Non-interactive Batch Job (Parallel Run)
Below is the example batch script (job.txt) for a parallel run:
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH --job-name name
#SBATCH --account=<project-account>

module load intel
mpicc -O2 hello.c -o hello
cp hello $TMPDIR
cd $TMPDIR
mpiexec ./hello > hello_results
cp hello_results $SLURM_SUBMIT_DIR
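Because the parallel script compiles hello.c with mpicc and launches it with mpiexec, the source needs to be an MPI program; a minimal sketch (illustrative, not part of the original example) is:

```
/* hello.c - minimal MPI version for the parallel example */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```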
Usage on Pitzer
Set-up on Pitzer
After you ssh to Pitzer, the default version of Intel compilers will be loaded for you automatically.
Using the Intel Compilers
Once the intel compiler module has been loaded, the compilers are available for your use. See our compilation guide for suggestions on how to compile your software on our systems. The following table lists common compiler options available in all languages.
COMPILER OPTION | PURPOSE
---|---
-c | Compile only; do not link
-DMACRO[=value] | Defines preprocessor macro MACRO with optional value (default value is 1)
-g | Enables debugging; disables optimization
-I/directory/name | Adds /directory/name to the list of directories to be searched for #include files
-L/directory/name | Adds /directory/name to the list of directories to be searched for library files
-lname | Adds the library libname.a or libname.so to the list of libraries to be linked
-o outfile | Names the resulting executable outfile instead of a.out
-UMACRO | Removes definition of MACRO from preprocessor
-v | Emit version including gcc compatibility; see below
Optimization Options |
-O0 | Disable optimization
-O1 | Light optimization
-O2 | Heavy optimization (default)
-O3 | Aggressive optimization; may change numerical results
-ipo | Inline function expansion for calls to procedures defined in separate files
-funroll-loops | Loop unrolling
-parallel | Automatic parallelization
-openmp | Enables translation of OpenMP directives
The following table lists some options specific to C/C++:

COMPILER OPTION | PURPOSE
---|---
-strict-ansi | Enforces strict ANSI C/C++ compliance
-ansi | Enforces loose ANSI C/C++ compliance
-std=val | Conform to a specific language standard
The following table lists some options specific to Fortran:

COMPILER OPTION | PURPOSE
---|---
-convert big_endian | Use unformatted I/O compatible with Sun and SGI systems
-convert cray | Use unformatted I/O compatible with Cray systems
-i8 | Makes 8-byte INTEGERs the default
-module /dir/name | Adds /dir/name to the list of directories searched for Fortran 90 modules
-r8 | Makes 8-byte REALs the default
-fp-model strict | Disables optimizations that can change the results of floating point calculations
The Intel compilers use the GNU tools on the clusters: header files, libraries, and linker. This is referred to as Intel and GNU compatibility and interoperability. Use the Intel compiler option -v to see the gcc version that is currently specified. Most users will not have to change this; however, the gcc version can be controlled by users in several ways.

On OSC clusters the default mechanism of control is based on modules. The most noticeable aspect of interoperability is that some parts of some C++ standards are available by default in various versions of the Intel compilers; other parts require an extra module. The C++ standard can be specified with the Intel compiler option -std=val; see the compiler man page for valid values of val.
With an Intel 17 or 18 compiler, module cxx17 will be automatically loaded by the intel module load command to enable the GNU tools necessary for the C++17 standard. With an Intel 19 compiler, module gcc-compatibility will be automatically loaded by the intel module load command to enable the GNU tools necessary for the C++17 standard. (In early 2020 OSC changed the names of these GNU-tool-controlling modules to clarify their purpose and because our underlying implementation changed.)
A symptom of broken gcc compatibility is unusual or non sequitur compiler errors, typically involving the C++ standard library and especially template instantiation, for example:
error: more than one instance of overloaded function "std::to_string" matches the argument list:
    detected during: instantiation of "..."
error: class "std::vector<std::pair<short, short>, std::allocator<std::pair<short, short>>>" has no member "..."
    detected during: instantiation of "..."
An alternative way to control compatibility and interoperability is with Intel compiler options; see the "GNU gcc Interoperability" sections of the various Intel compiler man pages for details.
C++ Standard | GNU | Intel
---|---|---
C++11 | > 4.8.1 | > 14.0
C++14 | > 6.1 | > 17.0
C++17 | > 7 | > 19.0
C++2a | features available since 8 |
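As a quick check that the required GNU tools are in place, you can verify which compatibility module was pulled in automatically when the compiler was loaded (the version shown is illustrative):

```
module load intel/19.0.5   # loading an Intel 19 compiler also loads the gcc-compatibility module
module list                # confirm that gcc-compatibility appears among the loaded modules
```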
Batch Usage on Pitzer
When you log into pitzer.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.
Interactive Batch Session
For an interactive batch session on Pitzer, one can run the following command:
sinteractive -A <project-account> -N 1 -n 40 -t 1:00:00
which gives you 1 node (-N 1), 40 cores (-n 40), and 1 hour (-t 1:00:00). You may adjust the numbers per your need.
Non-interactive Batch Job (Serial Run)
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script file will use the input file named hello.c and the output file named hello_results. Below is the example batch script (job.txt) for a serial run:
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=1 --ntasks-per-node=40
#SBATCH --job-name hello
#SBATCH --account=<project-account>

module load intel
cp hello.c $TMPDIR
cd $TMPDIR
icc -O2 hello.c -o hello
./hello > hello_results
cp hello_results $SLURM_SUBMIT_DIR
In order to run it via the batch system, submit the job.txt file with the following command:
sbatch job.txt
Non-interactive Batch Job (Parallel Run)
Below is the example batch script (job.txt) for a parallel run:
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=2 --ntasks-per-node=40
#SBATCH --job-name name
#SBATCH --account=<project-account>

module load intel
module load intelmpi
mpicc -O2 hello.c -o hello
cp hello $TMPDIR
cd $TMPDIR
srun ./hello > hello_results
cp hello_results $SLURM_SUBMIT_DIR