Singularity Container


Singularity is a container platform designed for HPC environments that, like Docker, provides OS-level virtualization. You can build a container image containing the Linux distribution, compilers, and libraries your working environment requires, and then run that container to execute your application.

※ In a virtual machine, applications run on top of a hypervisor and a guest OS, whereas a container runs closer to the physical hardware and shares the host OS kernel instead of booting a separate guest OS, resulting in lower overhead. Containers are therefore increasingly used in cloud services as well.
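As a quick illustration of this sharing, once an image such as ubuntu1.sif has been built (see section A below), the kernel version reported inside the container matches the host's, while the user-space distribution is the container's own; the commands below are a minimal sketch of that check.

$ uname -r (kernel version on the host)
$ singularity exec ubuntu1.sif uname -r (the same kernel version, reported from inside the container)
$ singularity exec ubuntu1.sif cat /etc/os-release (the container's own Linux distribution, e.g. Ubuntu 18.04)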

A. Build a container image

1. Load the Singularity Module or Set the Path

$ module load singularity/3.11.0
or add the following line to $HOME/.bash_profile:
export PATH=$PATH:/apps/applications/singularity/3.11.0/bin/
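To verify that the singularity command is now on your path, you can query its location and version; the expected path below assumes the module/path setting shown above.

$ which singularity (should point to /apps/applications/singularity/3.11.0/bin/singularity)
$ singularity --version (prints the installed Singularity version, e.g. 3.11.0)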

2. Local build

  • To build a container image locally on a Nurion login node, you must first request fakeroot access by submitting a request through the KISTI website > Technical Support > Consultation Request with the following details (a way to check the setting once it is applied is sketched after this list).

    • System Name : Nurion

    • User ID : a000abc

    • Request : Singularity fakeroot usage setting
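Once the request has been processed, one way to check that fakeroot has been enabled for your account, assuming the administrators grant it through the usual subuid/subgid mappings, is:

$ grep $USER /etc/subuid /etc/subgid (an entry for your user ID in both files indicates that the fakeroot mapping is in place)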

$ singularity [global options...] build [local options...] <IMAGE PATH> <BUILD SPEC>

[Main global options]
    -d : print debugging information
    -v : print additional information
    --version : print singularity version information

[Main local options]
    --fakeroot : build the image as a fake root user, allowing a normal user to build without actual root permission
    --remote : remote build via the external Sylabs Cloud build service (no root permission required)
    --sandbox : build a writable image directory in sandbox format

<IMAGE PATH>
    default : read-only image file (e.g., ubuntu1.sif)
    sandbox : a container with a readable and writable directory structure (e.g., ubuntu4)

<BUILD SPEC>
    definition file : a file that defines the recipe used to build the container (e.g., ubuntu.def)
    local image : a Singularity image file or sandbox directory (see IMAGE PATH)
    URI
        library://  container library (default: https://cloud.sylabs.io/library)
        docker://   Docker registry (default: Docker Hub)
        shub://     Singularity registry (default: Singularity Hub)
        oras://     OCI registry

① Build ubuntu1.sif image from definition file
 $ singularity build --fakeroot ubuntu1.sif ubuntu.def* 

② Build ubuntu2.sif image from singularity library
 $ singularity build --fakeroot ubuntu2.sif library://ubuntu:18.04 

③ Build ubuntu3.sif image from Docker Hub
 $ singularity build --fakeroot ubuntu3.sif docker://ubuntu:18.04 
 
④ Build a PyTorch image optimized for Intel architecture from Docker Hub
 $ singularity build --fakeroot pytorch1.sif docker://intel/intel-optimized-pytorch:2.3.0-pip-multinode

⑤ Build a PyTorch image optimized for Intel architecture from a definition file
 $ singularity build --fakeroot pytorch2.sif pytorch.def**
 
* ) ubuntu.def example
 bootstrap: docker
 from: ubuntu:18.04
 %post
 apt-get update
 apt-get install -y wget git bash gcc gfortran g++ make file
 %runscript
 echo "hello world from ubuntu container!"

** ) pytorch.def example
 # Building an image from a local image file, including the installation of new packages with pip
 bootstrap: localimage
 from: pytorch1.sif
 %post
 pip install scikit-image
 
 # Build an image from Docker Hub, including the installation of new packages
 bootstrap: docker
 from: intel/intel-optimized-pytorch:2.3.0-pip-multinode
 %post
 pip install scikit-image
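After a build completes, the metadata stored in an image can be checked with singularity inspect; for example, the definition file and runscript recorded in the images built above:

$ singularity inspect pytorch2.sif (print the labels/metadata of the image)
$ singularity inspect --deffile pytorch2.sif (print the definition file the image was built from)
$ singularity inspect --runscript ubuntu1.sif (print the runscript stored in the image)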

3. Remote build

$ singularity build --remote ubuntu4.sif ubuntu.def 
 (Build the ubuntu4.sif image from a definition file using the remote build service provided by Sylabs Cloud)

※ To use the remote build service provided by Sylabs Cloud (https://cloud.sylabs.io), an access token needs to be generated and registered on the Nurion system [Reference 1].

※ Additionally, the creation and management of Singularity container images can be done via a web browser by accessing Sylabs Cloud [Reference 2].
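The command-line side of the token registration described in [Reference 1] is, assuming the default Sylabs Cloud remote endpoint:

$ singularity remote login (paste the access token generated at https://cloud.sylabs.io when prompted)
$ singularity remote status (check that the token was accepted and the remote endpoint is reachable)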

B. Importing/Exporting Container Images

$ singularity pull tensorflow.sif library://dxtr/default/hpc-tensorflow:0.1 (Import a container image from the Sylabs Cloud library)
$ singularity pull tensorflow.sif docker://tensorflow/tensorflow:latest (Import an image from Docker Hub and convert it to a Singularity image)
$ singularity push -U tensorflow.sif library://ID/default/tensorflow.sif (Export (upload) a Singularity image to the Sylabs Cloud library)

※ To export (upload) an image to Sylabs Cloud (https://cloud.sylabs.io), you must generate an access token and register it on Nurion [Reference 1]
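The -U option uploads the image without a signature check; if you prefer to sign the image before pushing it, a typical sequence, assuming no PGP key pair has been created yet, is:

$ singularity key newpair (create a PGP key pair used for signing images)
$ singularity sign tensorflow.sif (sign the image with the newly created key)
$ singularity push tensorflow.sif library://ID/default/tensorflow.sif (push the signed image, no -U needed)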

C. Running User Programs in a Singularity Container

1. Load the Singularity module or set the path

$ module load singularity/3.11.0
or add the following line to $HOME/.bash_profile:
export PATH=$PATH:/apps/applications/singularity/3.11.0/bin/

2. Command to run a program in a Singularity container

$ singularity [global options...] shell [shell options...] <container>
$ singularity [global options...] exec [exec options...] <container> <command>
$ singularity [global options...] run [run options...] <container>

① Execute the shell within the Singularity container, then run the user program 
$ singularity shell pytorch1.sif
Singularity> python test.py

② Run the user program in the Singularity container
$ singularity exec pytorch1.sif python test.py 
$ singularity exec docker://intel/intel-optimized-pytorch:2.3.0-pip-multinode python test.py
$ singularity exec library://dxtr/default/hpc-tensorflow:0.1 python test.py

③ If a runscript (created during image build) exists in the Singularity container, this script will be executed.    
   If there is no runscript and a user command is entered after the container, the specified command will be executed.
$ singularity run ubuntu1.sif 
hello world from ubuntu container!

$ singularity run pytorch1.sif python test.py 
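By default only a limited set of host directories (such as $HOME) is visible inside the container; additional host paths can be mounted with the -B (--bind) option. A minimal sketch, in which /scratch/$USER is an assumed host directory:

$ singularity exec -B /scratch/$USER:/data pytorch1.sif python /data/test.py (mount /scratch/$USER on the host as /data inside the container)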

3. How to Run a Container via the Scheduler (PBS)

1) Write a job script to execute the task in batch mode

  • Run command: qsub <job script file>

[id@login01]$ qsub job_script.sh
14954055.pbs

※ For detailed instructions on using the scheduler (PBS), refer to the Nurion Guide - Executing Jobs via Scheduler (PBS).
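The submitted job can then be monitored, and cancelled if necessary, with the standard PBS commands (the job ID below is the one returned by qsub above):

$ qstat -u $USER (list your queued and running jobs)
$ qstat 14954055.pbs (show the status of a specific job)
$ qdel 14954055.pbs (delete/cancel the job)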

2) Example of a Job Script File

  • Serial job

* Run command: qsub <job script file>

#!/bin/sh
#PBS -N openfoam
#PBS -q normal
#PBS -A openfoam
#PBS -V
#PBS -j oe
#PBS -W sandbox=PRIVATE
#PBS -m e
#PBS -M wjnadia@kisti.re.kr
#PBS -r y
#PBS -l select=1:ncpus=1:mpiprocs=1:ompthreads=1
#PBS -l walltime=00:30:00

cd $PBS_O_WORKDIR
module load singularity/3.11.0 
cd cavity
singularity run openfoam-default:2312.sif icoFoam

  • Parallel job (MPI)

* Run command: mpirun singularity run <container> [user program execution command]

#!/bin/sh
#PBS -N openfoam
#PBS -q normal
#PBS -A openfoam
#PBS -V
#PBS -j oe
#PBS -W sandbox=PRIVATE
#PBS -m e
#PBS -M wjnadia@kisti.re.kr
#PBS -r y
#PBS -l select=2:ncpus=64:mpiprocs=64:ompthreads=1
#PBS -l walltime=00:30:00

cd $PBS_O_WORKDIR
module load singularity/3.11.0 gcc/8.3.0 openmpi/3.1.0
cd cavity
mpirun singularity run openfoam-default:2312.sif icoFoam

※ Example of using 2 nodes with 64 tasks per node (total of 128 MPI processes)
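Before running the actual solver, a simple sanity check, run either from an interactive allocation (see 3) below) or as an extra line in the script above, is to launch hostname through the same mpirun/singularity combination and confirm that all ranks start inside the container on both allocated nodes:

$ mpirun singularity exec openfoam-default:2312.sif hostname (should print 128 host names, 64 per node, for the allocation above)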

3) Executing interactive jobs on compute nodes allocated by the scheduler

  • Example of running a parallel program (OpenMPI)

[id@login01]$ qsub -I -l select=2:ncpus=64:mpiprocs=64:ompthreads=1 -l walltime=00:30:00 \
-q normal  -A openfoam
qsub: waiting for job 14954204.pbs to start
qsub: job 14954204.pbs ready

[id@node1000]$ 
[id@node1000]$ module load singularity/3.11.0 gcc/8.3.0 openmpi/3.1.0
[id@node1000]$ cd cavity
[id@node1000]$ mpirun singularity run openfoam-default:2312.sif icoFoam 
  • Example of running a TensorFlow program

$ qsub -I -V -l select=1:ncpus=68:ompthreads=68 \
-l walltime=12:00:00 -q normal -A tf

$ export OMP_NUM_THREADS=68; singularity exec tensorflow-1.12.0-py3.simg python convolutional.py

※ Example Singularity image file location: /apps/applications/tensorflow/1.12.0/tensorflow-1.12.0-py3.simg

※ Example convolutional.py file location: /apps/applications/tensorflow/1.12.0/examples/convolutional.py
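To confirm which TensorFlow and Python versions the example image provides before launching a longer job, they can be queried directly from the container:

$ singularity exec /apps/applications/tensorflow/1.12.0/tensorflow-1.12.0-py3.simg python -c "import tensorflow as tf; print(tf.__version__)" (prints the TensorFlow version bundled in the image, expected to be 1.12.0)
$ singularity exec /apps/applications/tensorflow/1.12.0/tensorflow-1.12.0-py3.simg python --version (prints the Python version in the image)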

D. References

[Reference 1] Generating a Sylabs Cloud access token and registering on Nurion

[Reference 2] Building a Singularity container image using a remote builder via a web browser

※ [Reference 2] also includes the list of images that were built remotely from Nurion with singularity commands

(Figure) Comparison between virtual machine and container architectures
(Figure) Singularity container architecture