Neuron guide

User Programming Environment



A. Programming Tool Installation Status

The conda/pytorch_1.11.0 and conda/tensorflow_2.4.1 modules are pre-configured with the necessary libraries for using PyTorch and TensorFlow. To use the conda command, you must first load the python/3.7.1 module.

It is recommended to use the Anaconda environment for AI frameworks on the Neuron system (please verify the licensing conditions).

Users can build and run Singularity container images based on their requirements to execute their programs.

  • Refer to [Appendix 3] ‘How to Use Singularity Container’

※ Non-CUDA MPI libraries must be employed to use nodes that are not equipped with GPUs.

B. How to Use the Compilers

1. Compiler and MPI environment configuration (modules)

1) Module-related basic commands

※ For user convenience, the "module" command can be abbreviated as the "ml" command.

Print a list of available modules

You can check a list of available modules, such as compilers and libraries.

$ module avail
or
$ module av

Add a module to be used

You can add modules that you plan to use, such as compilers and libraries. Several modules can be added simultaneously.

$ module load [module name] [module name] [module name] ...
or
$ module add [module name] [module name] [module name] ...
or
$ ml [module name] [module name] [module name] ...
ex) ml gcc/4.8.5 singularity/3.9.7

Delete used modules

You can remove unnecessary modules. Several modules can be deleted simultaneously.

$ module unload [module name] [module name] [module name] ...
or
$ module rm [module name] [module name] [module name] ...
or
$ ml -[module name] -[module name] -[module name] ...     
ex) ml -gcc/4.8.5 -singularity/3.9.7 

Print a list of used modules

You can check the list of modules that are currently configured.

$ module list
or
$ module li
or
$ ml

Delete all used modules simultaneously

$ module purge

Check the module installation path

$ module show [module name]

Find a module

$ module spider [module | string | name/version]

Saving and managing user module collections

# Saving currently loaded modules to the default collection,
#  which will be automatically loaded at the next login
$ module save  
# Saving currently loaded modules as a user module collection with a specified name
$ module save [name]  
# Loading a user module collection        
$ module restore [name]    
# Displaying the contents of a user module collection  
$ module describe [name]    
# Listing all user module collections  
$ module savelist  
# Deleting a user module collection             
$ module disable [name]     

2. Compiling sequential programs

A sequential program is one that does not use a parallel programming interface such as OpenMP or MPI; it runs on a single processor of a single node. The compiler-specific options used for compiling sequential programs are also used when compiling parallel programs, so they are worth reviewing even if you are only interested in parallel programs.

1) Intel compiler

To use an Intel compiler, add the required version of the Intel compiler module. Available modules can be checked using the “module avail” command.

$ module load intel/19.1.2

※ Check available versions by referring to the programming tool installation status table.

  • Compiler types

| Compiler | Program | Source file extension |
| --- | --- | --- |
| icc / icpc | C / C++ | .C, .cc, .cpp, .cxx, .c++ |
| ifort | F77/F90 | .f, .for, .ftn, .f90, .fpp, .F, .FOR, .FTN, .FPP, .F90 |

  • Intel compiler usage example

The following is an example of compiling a test sample file using an Intel compiler to generate a test.exe executable file.

$ module load intel/19.1.2
$ icc -o test.exe test.c
or
$ ifort -o test.exe test.f90
$ ./test.exe

※ You can copy a test sample file for job submission from /apps/shell/job_examples and use it.

2) GNU compiler

To use a GNU compiler, add the required version of the GNU compiler module. Available modules can be checked using the “module avail” command.

$ module load gcc/10.2.0

※ Check available versions by referring to the programming tool installation status table.

※ You must use version "gcc/4.8.5" or higher.

  • Compiler types

| Compiler | Program | Source file extension |
| --- | --- | --- |
| gcc / g++ | C / C++ | .C, .cc, .cpp, .cxx, .c++ |
| gfortran | F77/F90 | .f, .for, .ftn, .f90, .fpp, .F, .FOR, .FTN, .FPP, .F90 |

  • GNU compiler usage example

The following is an example of compiling a test sample file using a GNU compiler to generate a test.exe executable file.

$ module load gcc/10.2.0
$ gcc -o test.exe test.c
or
$ gfortran -o test.exe test.f90
$ ./test.exe

※ You can copy a test sample file for job submission from /apps/shell/job_examples and use it.

3) PGI compiler

To use a PGI compiler, add the required version of the PGI compiler module. Available modules can be checked using the “module avail” command.

$ module load nvidia_hpc_sdk/22.7

※ Check available versions by referring to the programming tool installation status table.

  • Compiler types

| Compiler | Program | Source file extension |
| --- | --- | --- |
| pgcc / pgc++ | C / C++ | .C, .cc, .cpp, .cxx, .c++ |
| pgfortran | F77/F90 | .f, .for, .ftn, .f90, .fpp, .F, .FOR, .FTN, .FPP, .F90 |

  • PGI compiler usage example

The following is an example of compiling a test sample file using a PGI compiler to generate a test.exe executable file.

$ module load nvidia_hpc_sdk/22.7
$ pgcc -o test.exe test.c
or
$ pgfortran -o test.exe test.f90
$ ./test.exe

※ You can copy a test sample file for job submission from /apps/shell/job_examples and use it.

3. Compiling parallel programs

1) OpenMP compiling

OpenMP is a technique developed to simplify the utilization of multi-threading through compiler directives alone. When compiling parallel programs using OpenMP, the same compiler as for sequential programs is used, with the addition of specific compiler options for parallel compilation. Most modern compilers support OpenMP directives.

| Compiler | Program | Option |
| --- | --- | --- |
| icc / icpc / ifort | C / C++ / F77/F90 | -qopenmp |
| gcc / g++ / gfortran | C / C++ / F77/F90 | -fopenmp |
| pgcc / pgc++ / pgfortran | C / C++ / F77/F90 | -mp |

  • Example of compiling an OpenMP program (Intel compiler)

The following is an example of compiling the test_omp sample file using the Intel compiler to create the executable file test_omp.exe with OpenMP:

$ module load intel/19.1.2
$ icc -o test_omp.exe -qopenmp test_omp.c
or
$ ifort -o test_omp.exe -qopenmp test_omp.f90
$ ./test_omp.exe
  • Example of compiling an OpenMP program (GNU compiler)

The following is an example of compiling the test_omp sample file using the GNU compiler to create the executable file test_omp.exe with OpenMP:

$ module load gcc/10.2.0
$ gcc -o test_omp.exe -fopenmp test_omp.c
or
$ gfortran -o test_omp.exe -fopenmp test_omp.f90
$ ./test_omp.exe
  • Example of compiling an OpenMP program (PGI compiler)

The following is an example of compiling the test_omp sample file using the PGI compiler to create the executable file test_omp.exe with OpenMP:

$ module load nvidia_hpc_sdk/22.7
$ pgcc -o test_omp.exe -mp test_omp.c
or
$ pgfortran -o test_omp.exe -mp test_omp.f90
$ ./test_omp.exe

2) MPI compiling

Users can invoke the MPI commands listed in the table below. These commands are wrappers: the underlying compiler configured in the user's environment (e.g., in the .bashrc file) performs the actual compilation.

| Category | Intel | GNU | PGI |
| --- | --- | --- | --- |
| Fortran | ifort | gfortran | pgfortran |
| Fortran + MPI | mpiifort | mpif90 | mpif90 |
| C | icc | gcc | pgcc |
| C + MPI | mpiicc | mpicc | mpicc |
| C++ | icpc | g++ | pgc++ |
| C++ + MPI | mpiicpc | mpicxx | mpicxx |

Even if compiling is performed using mpicc, it is necessary to use the options that correspond to the original compiler being wrapped.

  • Example of compiling an MPI program (Intel compiler)

The following is an example of compiling the test_mpi sample file using the Intel compiler to create the executable file test_mpi.exe with MPI:

$ module load intel/19.1.2 mpi/impi-19.1.2
$ mpiicc -o test_mpi.exe test_mpi.c
or
$ mpiifort -o test_mpi.exe test_mpi.f90
$ srun ./test_mpi.exe
  • Example of compiling an MPI program (GNU compiler)

The following is an example of compiling the test_mpi sample file using the GNU compiler to create the executable file test_mpi.exe with MPI:

$ module load gcc/10.2.0 mpi/openmpi-4.1.1
$ mpicc -o test_mpi.exe test_mpi.c
or
$ mpif90 -o test_mpi.exe test_mpi.f90
$ srun ./test_mpi.exe
  • Example of compiling an MPI program (PGI compiler)

The following is an example of compiling the test_mpi sample file using the PGI compiler to create the executable file test_mpi.exe with MPI:

$ module load nvidia_hpc_sdk/22.7
$ mpicc -o test_mpi.exe test_mpi.c
or
$ mpifort -o test_mpi.exe test_mpi.f90
$ srun ./test_mpi.exe
  • Example of compiling a CUDA + MPI program

$ module load gcc/10.2.0 cuda/11.4 cudampi/openmpi-4.1.1
$ mpicc -c mpi-cuda.c -o mpi-cuda.o
$ mpicc mpi-cuda.o -lcudart -L/apps/cuda/11.4/lib64
$ srun ./a.out

※ When using the Intel compiler, load the intel/19.1.2 module instead of gcc/10.2.0.

Last updated on November 08, 2024.