User Programming Environment
A. Programming Tool Installation Status
Compiler and library module
Commercial software information
※ User groups that can use ANSYS are limited to universities, industry (small and medium-sized enterprises), and research institutes. Note that use of ANSYS by users who are not in an eligible user group, or who have not applied for use, may be subject to legal sanctions by ANSYS.
※ For Gaussian, first obtain permission for use from the account manager at the helpdesk (account@ksc.re.kr).
※ Refer to [Annex 5] for the installation status of shared libraries (e.g.: cairo, expat, jasper, libpng, udunits, etc.)
B. How to Use Compiler
1. Compiler and MPI configuration settings (modules)
1) Default required module
The default module corresponding to the type of compute node being used must be loaded.
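As a minimal sketch, the node-type module might be loaded as follows; the module names craype-mic-knl and craype-x86-skylake are assumptions, so check module avail for the names actually provided.

    $ module load craype-mic-knl        # example: when using KNL compute nodes
    $ module load craype-x86-skylake    # example: when using SKL compute nodes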
2) Basic commands related to the module
- Print a list of available modules: the available modules, such as compilers and libraries, can be checked.
- Add modules to be used: modules such as compilers and libraries can be loaded; all required modules can be added at once.
- Delete modules in use: modules that are no longer needed can be removed; multiple modules can be deleted at once.
- Print a list of modules in use: the currently loaded modules can be checked.
- Purge all modules in use: every currently loaded module is removed at once.
※ In this case, the default required modules are also deleted at once; those modules need to be added again when reusing them.
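The basic commands above can be executed as follows. This is a sketch using the standard module interface; the module names shown (intel/18.0.3, impi/18.0.3) are illustrative only.

    $ module avail                            # print a list of available modules
    $ module load intel/18.0.3 impi/18.0.3    # add modules (several can be added at once)
    $ module unload impi/18.0.3               # delete a module that is no longer needed
    $ module list                             # print a list of modules being used
    $ module purge                            # purge all modules being used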
2. Compiling sequential programs
A sequential program is a program written without a parallel programming environment in mind, that is, a program that does not use a parallel programming interface such as OpenMP or MPI and is executed on a single processor in a single node. The compiler options used for compiling a sequential program are also used when compiling parallel programs, so it is recommended to review them even if sequential programs are not of interest.
1) Intel compiler
To use the Intel compiler, load the required version of the Intel compiler module. Available modules can be checked with the module avail command.
※ Check the available version by referring to the programming tool installation status table.
Compiler type
Compiler option
Example of using Intel compiler
The following is an example of creating an execution file test.exe by compiling a test sample file with the Intel compiler in the KNL computing node.
※ Copy the test sample file for job submission from /apps/shell/home/job_examples
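A minimal sketch of such a compile, assuming an illustrative Intel module version and the KNL target option -xMIC-AVX512; check module avail for the actual versions.

    $ module load craype-mic-knl intel/18.0.3               # module names are examples
    $ icc -O3 -fPIC -xMIC-AVX512 -o test.exe test.c         # C source
    $ ifort -O3 -fPIC -xMIC-AVX512 -o test.exe test.f90     # Fortran source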
Recommended options
2) GNU compiler
To use the GNU compiler, load the required GNU compiler module. Available modules can be checked with the module avail command.
※ Check the available version by referring to the programming tool installation status table.
※ Version gcc/6.1.0 or higher must be used
Compiler type
GNU compiler option
Example of using GNU compiler
The following is an example of creating an execution file test.exe by compiling a test sample file with the GNU compiler in the KNL computing node.
※ Copy the test sample file for job submission from /apps/shell/home/job_examples
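A minimal sketch, assuming an illustrative GNU module version and the -march=knl target option (available in gcc 6.1.0 and later).

    $ module load craype-mic-knl gcc/7.2.0                  # module names are examples
    $ gcc -O3 -march=knl -o test.exe test.c
    $ gfortran -O3 -march=knl -o test.exe test.f90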
Recommended options
3) PGI compiler
To use the PGI compiler, load the required version of the PGI compiler module. Available modules can be checked with the module avail command.
※ Check the available version by referring to the programming tool installation status table.
Compiler type
PGI compiler option
Example of using PGI compiler
The following is an example of creating an execution file test.exe by compiling a test sample file with the PGI compiler in the KNL computing node.
※ Copy the test sample file for job submission from /apps/shell/home/job_examples
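A minimal sketch, assuming an illustrative PGI module version; the options and version shown are typical but should be verified against the installed PGI release.

    $ module load craype-mic-knl pgi/18.10                  # module names are examples
    $ pgcc -fast -tp=knl -o test.exe test.c
    $ pgfortran -fast -tp=knl -o test.exe test.f90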
Recommended options
4) Cray compiler
To use the Cray compiler, load the required version of the Cray compiler module. Available modules can be checked with the module avail command.
※ Check the available version by referring to the programming tool installation status table.
Compiler type
Compiler option
Example of using Cray compiler
The following is an example of creating an execution file test.exe by compiling a test sample file with the Cray compiler in the KNL computing node.
※ Copy the test sample file for job submission from /apps/shell/home/job_examples
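A minimal sketch, assuming an illustrative CCE module version; the driver names craycc and crayftn are those used on Cray CS-series systems and may differ on other installations.

    $ module load craype-mic-knl cce/8.6.3                  # module names are examples
    $ craycc -O2 -o test.exe test.c
    $ crayftn -O2 -o test.exe test.f90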
Recommended options
※ test.c and test.f90 for testing can be found in /apps/shell/home/job_examples (copy them to the user directory for testing)
※ For programs that use the KNL optimization options, it is recommended to access a KNL debug node through interactive job submission and compile there (refer to “Job execution through scheduler → B. Job submission monitoring → 2) Interactive job submission”).
3. Compiling parallel programs
1) OpenMP compile
OpenMP is a technique that enables multi-thread parallelism simply through compiler directives. The compiler used for compiling parallel programs that use OpenMP is the same as the one used for sequential programs; only a compiler option needs to be added for parallel compilation, and most recent compilers support the OpenMP directives.
Example of OpenMP program compilation (Intel compiler)
The following is an example of creating an execution file test_omp.exe by compiling a test_omp sample file that uses OpenMP with the Intel compiler in the KNL computing node.
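A minimal sketch, reusing the module setup from the sequential Intel example; -qopenmp is the Intel OpenMP option.

    $ icc -O3 -fPIC -xMIC-AVX512 -qopenmp -o test_omp.exe test_omp.c
    $ ifort -O3 -fPIC -xMIC-AVX512 -qopenmp -o test_omp.exe test_omp.f90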
Example of OpenMP program compilation (GNU compiler)
The following is an example of creating an execution file test_omp.exe by compiling a test_omp sample file that uses OpenMP with the GNU compiler in the KNL computing node.
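A minimal sketch, reusing the module setup from the sequential GNU example; -fopenmp is the GNU OpenMP option.

    $ gcc -O3 -march=knl -fopenmp -o test_omp.exe test_omp.c
    $ gfortran -O3 -march=knl -fopenmp -o test_omp.exe test_omp.f90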
Example of OpenMP program compilation (PGI compiler)
The following is an example of creating an execution file test_omp.exe by compiling a test_omp sample file that uses OpenMP with the PGI compiler in the KNL computing node.
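A minimal sketch, reusing the module setup from the sequential PGI example; -mp is the PGI OpenMP option.

    $ pgcc -fast -tp=knl -mp -o test_omp.exe test_omp.c
    $ pgfortran -fast -tp=knl -mp -o test_omp.exe test_omp.f90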
Example of OpenMP program compilation (Cray compiler)
The following is an example of creating an execution file test_omp.exe by compiling a test_omp sample file that uses OpenMP with the Cray compiler in the KNL computing node.
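A minimal sketch, reusing the module setup from the sequential Cray example; OpenMP is enabled by default in CCE, and -homp is shown here explicitly.

    $ craycc -homp -o test_omp.exe test_omp.c
    $ crayftn -homp -o test_omp.exe test_omp.f90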
2) MPI compiler
Users can execute the MPI commands in the following table. These commands are wrappers: the compiler designated by the loaded environment actually compiles the source file.
Even when compiling through a wrapper such as mpicc, the options of the underlying compiler being wrapped must be used.
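Typical wrapper commands are listed below as a sketch; the exact set depends on the MPI library that is loaded.

    mpicc     # MPI wrapper for the C compiler
    mpicxx    # MPI wrapper for the C++ compiler
    mpif90    # MPI wrapper for the Fortran compiler
    # with Intel MPI, mpiicc and mpiifort call the Intel compilers directly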
Example of MPI program compilation (Intel compiler)
The following is an example of creating an execution file test_mpi.exe by compiling a test_mpi sample file that uses MPI with the Intel compiler in the KNL computing node.
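A minimal sketch, assuming Intel MPI with illustrative module names; mpiicc and mpiifort are the Intel MPI wrappers that invoke the Intel compilers.

    $ module load craype-mic-knl intel/18.0.3 impi/18.0.3   # module names are examples
    $ mpiicc -O3 -fPIC -xMIC-AVX512 -o test_mpi.exe test_mpi.c
    $ mpiifort -O3 -fPIC -xMIC-AVX512 -o test_mpi.exe test_mpi.f90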
Example of MPI program compilation (GNU compiler)
The following is an example of creating an execution file test_mpi.exe by compiling a test_mpi sample file that uses MPI with the GNU compiler in the KNL computing node.
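A minimal sketch, assuming illustrative GNU compiler and MPI library modules.

    $ module load craype-mic-knl gcc/7.2.0 openmpi/3.1.0    # module names are examples
    $ mpicc -O3 -march=knl -o test_mpi.exe test_mpi.c
    $ mpif90 -O3 -march=knl -o test_mpi.exe test_mpi.f90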
Example of MPI program compilation (PGI compiler)
The following is an example of creating an execution file test_mpi.exe by compiling a test_mpi sample file that uses MPI with the PGI compiler in the KNL computing node.
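A minimal sketch, assuming illustrative PGI compiler and MPI library modules.

    $ module load craype-mic-knl pgi/18.10 openmpi/3.1.0    # module names are examples
    $ mpicc -fast -tp=knl -o test_mpi.exe test_mpi.c
    $ mpif90 -fast -tp=knl -o test_mpi.exe test_mpi.f90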
Example of MPI program compilation (Cray compiler)
The following is an example of creating an execution file test_mpi.exe by compiling a test_mpi sample file that uses MPI with the Cray compiler in the KNL computing node.
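A minimal sketch, assuming illustrative Cray compiler and MPI library modules; the wrapper commands provided for CCE depend on the MPI library installed.

    $ module load craype-mic-knl cce/8.6.3 mvapich2/2.3     # module names are examples
    $ mpicc -O2 -o test_mpi.exe test_mpi.c
    $ mpif90 -O2 -o test_mpi.exe test_mpi.f90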
C. Debugger and Profiler
The 5th supercomputer Nurion beta service provides DDT for program debugging. In addition, two profilers, Intel VTune Amplifier and CrayPat, are provided for program profiling.
1. Example of using debugger DDT
Select the architecture, compiler, and MPI to be used with DDT on the 5th supercomputer, and then load the module for DDT.
This example was tested in the same environment as above.
As preparation before using DDT, compile the program with the -g -O0 options and select the resulting execution file.
After running Xming and configuring SSH X11 forwarding on the user's desktop, execute the DDT command.
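A minimal sketch of the preparation and launch steps; the module name forge is an assumption, so check module avail for the name actually provided.

    $ mpiicc -g -O0 -o test_mpi.exe test_mpi.c   # build a debug executable
    $ module load forge                          # module name is an example
    $ ddt ./test_mpi.exe                         # launch the DDT GUI (requires X forwarding)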
Execute the command and check that the following pop-up execution window appears.
Select “RUN” among the listed commands, select the file for debugging as shown below, and then click “RUN” in the new pop-up window.
Debugging can then be started in the debugging mode for the selected execution file.
2. Example of using the profiler Intel VTune Amplifier
Select the architecture, compiler, and MPI to be used with VTune on this system, and then load the VTune module to use the profiler.
This example was tested in the same environment as above.
How to use CLI
The command for executing Intel VTune Amplifier in CLI mode is as follows.
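A minimal sketch of a hotspots collection; amplxe-cl is the CLI command of the VTune Amplifier releases of that period (newer releases use vtune), and the target executable name is an example.

    $ amplxe-cl -collect hotspots ./test.exe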
If a compiled execution file is prepared and the command is executed, the r000hs directory is generated. After confirming that the directory has been generated, execute the report-generation command to output the result shown below.
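A sketch of the report command, assuming the r000hs result directory from the collection above.

    $ amplxe-cl -report summary -r r000hs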
How to check the results using the GUI
Intel VTune Amplifier also supports a GUI mode. Only the method for checking the results using the GUI is explained here.
Run Xming on the user's desktop and launch the VTune GUI.
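A sketch of launching the GUI; amplxe-gui is the GUI command of the VTune Amplifier releases of that period (newer releases use vtune-gui), and X forwarding must be enabled.

    $ amplxe-gui &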
Click “New Analysis” in the screen below.
When the screen shown below appears, check the number of CPUs and click the Start button to begin the analysis.
Once completed, the analysis results are summarized in multiple tabs as shown below.
3. Example of using the profiler CrayPat
To use the CrayPat profiler, the environment, including the target architecture, is set up first, and then the example is run.
First, the test.c file to be used in the example is compiled.
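A minimal sketch of the environment setup and compile step; the module names (cce/8.6.3, perftools-base, perftools) are assumptions, and CrayPat generally requires the performance tools modules to be loaded before compiling.

    $ module load craype-mic-knl cce/8.6.3 perftools-base perftools   # module names are examples
    $ mpicc test.c      # produces a.out by default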
As a result, an execution file a.out is generated.
To analyze with CrayPat, use pat_build to generate a new execution file.
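A sketch of the instrumentation step, run on the executable built above.

    $ pat_build a.out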
As a result, a.out+pat file is generated.
The generated execution file is built with MPI, so it is executed with mpirun.
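A sketch of the run step; the process count is an example.

    $ mpirun -np 4 ./a.out+pat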
Once execution is complete, the a.out+pat+378250-3s directory is generated, and the xf-files/002812.xf file is created in the directory.
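A sketch of the reporting step, run on the experiment directory generated above.

    $ pat_report a.out+pat+378250-3s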
When pat_report is executed on the generated data, .ap2 and .apa files are created in the a.out+pat+378250-3s directory.
Next, the execution file is rebuilt using the generated .apa file, which creates a file named a.out+apa.
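A sketch of the rebuild step; the .apa file is typically named build-options.apa, but the actual name and location may differ.

    $ pat_build -O a.out+pat+378250-3s/build-options.apa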
When the generated a.out+apa file is executed, a new .xf file is generated in the a.out+pat+378250-3t directory.
Reuse pat_report to process the new data.
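A sketch of the second reporting step, run on the new experiment directory.

    $ pat_report a.out+pat+378250-3t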
When pat_report is executed as above, the .ap2 file and a tracing report are created.
The app2 tool (Cray Apprentice2) is provided for visualizing the collected data.
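A sketch of launching the visualization; the .ap2 file name and location are assumptions, so point app2 at the file actually generated.

    $ app2 a.out+pat+378250-3t/index.ap2 &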
Visualization results are produced as shown below.
Last updated: September 15, 2022.