ANSYS CFX

This document provides basic information for running ANSYS CFX software on the Nurion system; it does not cover how to use ANSYS CFX itself or how to use Nurion/Linux in general. For information on using the 5th system (Nurion)/Linux, please refer to the user guide available in the resources section of the KISTI website (https://www.ksc.re.kr).

A. Usage Policy

  • A single user ID may run programs with up to 40 CPU cores in total. (Counted via mpiprocs, the number of processes requested per node.)

  • Since a limited license is shared among supercomputing users, if usage exceeds the policy limits, the job may be forcibly terminated by the administrator.

  • To prevent overloading the system's login nodes, pre/post-processing tasks are not permitted.

  • After the March 2019 preventive maintenance (PM, March 14, 2019), job submissions without the #PBS -A ansys option are not accepted.

B. Software Installation Information

1. Installation version

  • v145, v170, v181, v191, v192, v195, v201, v212, v221, v222, v231, v241

2. Installation Location

  • /apps/commercial/ANSYS/(version)/CFX

3. Path to the executable file

  • /apps/commercial/ANSYS/(version)/CFX/bin

※ Replace (version) in the above paths with the desired CFX version: v145, v170, v181, v191, v192, v195, v201, v212, v221, v222, v231, or v241.
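
To check which versions are actually installed on the system, you can simply list the installation directories (an illustrative check only; the set of directories present may differ from the list above):

$ ls -d /apps/commercial/ANSYS/v*/CFX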

C. How to run the software

1. Execution method

  • Run the environment setup script before executing the command.

  • (Example) To use CFX v181, execute as follows:

$ module load cfx/v181

※ A module environment setup file is available. Refer to the above example to configure the appropriate environment.
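
If you are unsure which CFX modules are provided, the standard module commands can be used to list and verify them (shown as an illustration; module names follow the cfx/(version) pattern used above):

$ module avail cfx          # list the available CFX module versions
$ module list               # confirm that the intended module is loaded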

  • Enter the command to run the job in batch mode as follows:

Format
cfx5solve [option]

Options

-def <file>                 Specify the definition file (or a result file for a restart)

-parallel                   Execute in parallel mode

-par-local                  Run in parallel on the local host only

-par-dist                   Run in distributed parallel mode

-part <#>, -partition <#>   Run the solver in partitioning mode with the specified number of partitions

-parfile <file>             Specify the partitioning information file

-help                       List the available keywords

Examples

cfx5solve -def model.def

cfx5solve -def model.def -par-local -partition 2

cfx5solve -def model.def -parallel -parfile model.par

cfx5solve -def model.def -initial model_003.res -par-local -partition 2
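
For a distributed run across several hosts, -par-dist takes a comma-separated host list. The exact host-list syntax can vary by CFX version, so the line below is only a sketch; model.def and the host names are placeholders:

cfx5solve -def model.def -par-dist "host1*20,host2*20"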

  • Interactive execution on the login node is limited to 10 minutes of CPU time.

  • For long-running calculations, jobs must be submitted using the PBS Professional scheduler.

2. Writing a scheduler job script file

On the 5th system, jobs must be submitted from the login node using the PBS Professional scheduler. Example files for using the scheduler on the 5th system are available at the following paths; refer to them when creating your own job script.

  • Example files

    • /apps/commercial/test_samples/ANSYS/cfx_v181.sh (runs on a single node)

    • /apps/commercial/test_samples/ANSYS/cfx_v181_multinode.sh (runs on multiple nodes; see the sketch at the end of this section)

※ The example below is for using CFX on the 5th system. (Runs on a single node.)

#!/bin/sh
#PBS -V                                                                   
#PBS -N cfx_job                                       # Specify the job name
#PBS -q commercial                                      # Specify the queue
#PBS -l select=1:ncpus=40:mpiprocs=40:ompthreads=1   # Specify the job chunk
#PBS -l walltime=04:00:00                            # Specify the estimated job duration
#PBS -A ansys

cd $PBS_O_WORKDIR

###### Do not edit #####
TOTAL_CPUS=$(wc -l $PBS_NODEFILE | awk '{print $1}')
#######################

module purge
module load cfx/v181

# Execute the CFX command
cfx5solve -def StaticMixer.def -par-local -partition ${TOTAL_CPUS}

  • The example above should be modified as appropriate by the user.

  • After the March 2019 preventive maintenance (PM, March 14, 2019), job submissions without the #PBS -A ansys option are not accepted.

  • Since each user’s home directory has a disk quota limit, it is recommended to perform tasks in the scratch directory.

  • Each user's scratch directory is located at /scratch/$USER.

  • Since scratch disks are deleted after a certain period following job completion, it is recommended to back up your data as soon as the job is finished.

  • For other scheduler commands and usage, please refer to the user guide.
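
For multi-node runs, the example script /apps/commercial/test_samples/ANSYS/cfx_v181_multinode.sh referenced above is the authoritative reference. As a rough sketch only, a distributed launch typically builds a host list from $PBS_NODEFILE and passes it to -par-dist; the host-list format and start method may differ by CFX version:

# Build a host list such as "node01*40,node02*40" from the nodes PBS allocated
HOSTLIST=$(sort $PBS_NODEFILE | uniq -c | awk '{printf "%s%s*%s", sep, $2, $1; sep=","}')

# Launch CFX in distributed parallel mode across the allocated nodes
cfx5solve -def StaticMixer.def -par-dist "$HOSTLIST"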

D. Job monitoring

1. Queue inquiry

$ showq

2. Node status inquiry

$ pbs_status

3. Checking the job status

- View currently running/queued jobs 
$ qstat -u $USER

- View including the completed jobs
$ qstat -xu $USER

$ qstat <-a, -n, -s, -H, -x, …>
ex> qstat
Job id Name User Time Use S Queue
-------------------------------------------------------------------
0001.pbcm test_01 user01 8245:43: R commercial
0002.pbcm test_03 user03 7078:45: R commercial
0003.pbcm test_04 user04 1983:11: Q commercial

4. How to submit a job

Example : If the script file name is cfx.sh

$ qsub cfx.sh
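
When the job is accepted, qsub prints the ID of the new job on standard output; this ID is what qstat and qdel refer to later (the ID below is illustrative only):

$ qsub cfx.sh
0001.pbcm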

5. Forcefully terminate a submitted job

  • Usage : qdel {job ID}

  • The job ID is the value printed when the job is submitted with qsub, and it also appears in the leftmost column of the qstat output. (ex. 0001.pbcm test_01 user01 8245:43: R norm_cache)

  • Example : If the job ID is 0001.pbcm

$ qdel 0001.pbcm

Last updated on November 06, 2024.