
C/C++ (Intel Compiler)

The NIG supercomputer system has the Intel oneAPI Base & HPC Toolkit (Multi-Node) installed, enabling the use of the following tools. For details about each product, please refer to Intel's product documentation.

You may consider using Intel's compiler, libraries, and development environments over open-source compilers such as gcc in the following cases:

  • When you want to build and optimize open-source numerical libraries yourself, especially libraries written in Fortran.
  • When building programs that use multi-threaded programming standards such as OpenMP, especially in Fortran.
  • When you want to use Python modules like numpy, scikit-learn, and scipy, which are fast and use Intel MKL internally (Intel Distribution for Python).
  • When you want to develop programs using various mathematical functions supported by Intel MKL.
  • When you want to use libraries optimized for Intel hardware features and generate optimized code with detailed compiler options.
  • When you want to fine-tune or debug programs that use OpenMP, MPI, etc., with tools such as VTune Profiler and Trace Analyzer.

The table below provides an overview of the components available on the NIG supercomputer.

Components Available on the NIG Supercomputer

Intel oneAPI DPC++/C++ Compiler: C/C++ compiler with advanced optimization and speed-up options for Intel hardware.
Intel MPI Library: Intel's MPI library, which integrates with Intel's development tools.
Intel oneAPI DPC++ Library (oneDPL): Core and template library for C++ (parallelism-ready).
Intel oneMKL: Numerical library providing linear algebra, various mathematical functions, FFT, random number generation, etc., with a long history of delivering speed-ups on Intel hardware.
Intel oneDAL: Library for accelerating big data analytics applications and distributed computing.
Intel IPP: Library for image processing, signal processing, data compression, cryptography, etc. (aimed mainly at IoT and embedded-device processors).
Intel oneTBB: C++ multi-threading library that can be combined with thread parallelism and Intel MPI.
Intel oneCCL: High-performance communication library for distributed deep learning, usable with Horovod, etc.
Intel oneDNN: Library for deep learning applications, optimized for Intel hardware features (e.g., AVX-512).
Intel Advisor: Vectorization/threading prototyping and tuning tool for developers of C, C++, C#, and Fortran software.
Intel VTune Profiler: Performance analysis tool for advanced profiling, capable of profiling across multiple compute nodes.
Intel Distribution for GDB: GDB enhanced by Intel for debugging on Intel CPUs, GPUs, and FPGAs.
Intel Fortran Compiler: Intel's Fortran compiler, capable of generating highly optimized code for Intel hardware.
Intel Distribution for Python: Python ecosystem accelerated by Intel, including MKL-backed modules such as numpy.
Intel Inspector: Debugger for detecting memory and threading errors.
Intel Trace Analyzer & Collector: Tool for performance analysis and tuning of MPI applications.

Furthermore, according to Intel's policy, Intel software development tools are available for free for both commercial and academic use (support is paid).

It is possible to install Intel's tools on your own computer for software creation and debugging, and then use the NIG supercomputer system for computations requiring large resources.

Below, we give an overview and the basic usage of the components available on the NIG supercomputer. Please note that version-specific directory paths in the execution logs, such as 2024.0, reflect the situation at the time of writing and may change with updates; adjust for your actual environment accordingly.

Intel® oneAPI DPC++/C++ Compiler

The NIG supercomputer provides the Intel oneAPI DPC++/C++ Compiler.

The Intel compiler is available on the default path:

yxxxx@at138:~$ which icx
/lustre7/software/intel_ubuntu/oneapi/compiler/2024.0/bin/icx
yxxxx@at138:~$ which icpx
/lustre7/software/intel_ubuntu/oneapi/compiler/2024.0/bin/icpx

To execute binaries with advanced optimizations, please run them on compute nodes equipped with Intel CPUs (they will also work on AMD CPUs). For the general division, please refer to the queue configuration.

Compiler Command Formats

Language   Command   Execution Format
C          icx       icx [options] filename
C++        icpx      icpx [options] filename

Main Options

Here are the summaries of the main options available with the Intel compiler:

-o FILENAME: Specifies the name of the output file.
-mcmodel=medium: Allows memory usage exceeding 2 GB.
-shared-intel: Dynamically links all libraries provided by Intel.
-qopenmp: Compiles with OpenMP directives enabled.
-qmkl: Links the MKL library.
-parallel: Enables automatic parallelization.
-O0 / -O1 / -O2 / -O3: Specifies the optimization level (default is -O2).
-fast: Optimizes for maximum program execution speed; includes the options -ipo, -O3, -static, and -fp-model fast=2 by default.
-ip: Optimizes procedures within a single file.
-ipo: Optimizes procedures across multiple files; may significantly increase compilation time.
-xCORE-AVX512 / -xCORE-AVX2: Generates code optimized for the specified Intel instruction set.
-static-intel: Statically links libraries provided by Intel.

As a first step, the vendor recommends trying the -fast option.

For detailed options, refer to Intel's site:

Intel's detailed compiler options on the documentation site

Using OpenMP

OpenMP is available with the Intel compiler. For details about the OpenMP features supported by the Intel compiler, please refer to the information on Intel's site. As of November 30, 2023, the Intel compiler installed on the system supports OpenMP 5.0 through 6.0 (partially).

OpenMP* Features and Extensions Supported in Intel® oneAPI DPC++/C++ Compiler

A Survey of OpenMP* Features Implemented in Intel® Fortran and C++ Compilers