Deep Learning Framework Parallelization
A. How to Use Horovod in TensorFlow
When utilizing CPUs across multiple nodes, Horovod can be integrated with TensorFlow to enable parallelization. Adding the Horovod code shown in the examples below integrates it with TensorFlow; both TensorFlow itself and the Keras API within TensorFlow can be used with Horovod. First, we introduce how to integrate Horovod with TensorFlow (example: MNIST dataset and the LeNet-5 CNN structure).
※ For detailed instructions on using Horovod with TensorFlow, refer to the official Horovod guide (https://github.com/horovod/horovod#usage)
1. Importing and initializing Horovod in the main function for use with TensorFlow
※ horovod.tensorflow: The module required to integrate Horovod with TensorFlow
※ Initialize Horovod to enable its use.
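Below is a minimal sketch of this step, assuming a TF1-style script with a main entry point; the function name and the printed message are illustrative, not taken from the system's example.

```python
import tensorflow as tf
import horovod.tensorflow as hvd   # Horovod module for TensorFlow

def main(_):
    hvd.init()   # initialize Horovod; must be called before any other Horovod call
    print("Horovod task %d of %d started" % (hvd.rank(), hvd.size()))
    # ... dataset, model, and training code from the following steps goes here ...

if __name__ == "__main__":
    tf.app.run()
```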
2. Setting the dataset in the main function for using Horovod
※ Set and create the dataset based on the Horovod rank to assign datasets to each task.
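One possible way to assign a per-task dataset, sketched with the tf.data API and the Keras MNIST loader; the helper name make_dataset, the rank-dependent cache file name, and the batch size are assumptions for illustration.

```python
import tensorflow as tf
import horovod.tensorflow as hvd

def make_dataset(batch_size=100):
    # A rank-dependent cache file name avoids collisions when several tasks
    # download MNIST to a shared file system (the naming scheme is an assumption).
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data(
        path="mnist-%d.npz" % hvd.rank())
    x_train = (x_train[..., tf.newaxis] / 255.0).astype("float32")
    dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train.astype("int32")))
    # Each Horovod task keeps only its own shard of the training data.
    dataset = dataset.shard(hvd.size(), hvd.rank())
    return dataset.shuffle(10000).repeat().batch(batch_size)
```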
3. Applying Horovod options to the optimizer and setting the broadcast and training steps in the main function
※ Apply Horovod-related settings to the optimizer and use broadcasting to distribute the initial state from rank 0 to each task.
※ Set the training steps for each task according to the number of Horovod tasks.
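Continuing the sketch inside main, after the initialization of step 1 and the dataset helper of step 2: the optimizer is wrapped by Horovod, the initial variables are broadcast through a hook, and the step count is divided by the number of tasks. The single dense layer below stands in for the LeNet-5 network of the example, and the learning rate and total step count (20000) are illustrative.

```python
# Inputs come from the per-task dataset of step 2.
images, labels = make_dataset().make_one_shot_iterator().get_next()
logits = tf.layers.dense(tf.layers.flatten(images), 10)   # stand-in for LeNet-5
loss = tf.losses.sparse_softmax_cross_entropy(labels, logits)

# Scale the learning rate by the number of tasks and wrap the optimizer so
# that gradients are averaged across tasks with allreduce.
opt = tf.train.AdamOptimizer(0.001 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
global_step = tf.train.get_or_create_global_step()
train_op = opt.minimize(loss, global_step=global_step)

hooks = [
    # Broadcast the initial variable values from rank 0 to every other task.
    hvd.BroadcastGlobalVariablesHook(0),
    # Divide the total number of training steps among the Horovod tasks.
    tf.train.StopAtStepHook(last_step=20000 // hvd.size()),
]
```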
4. Setting inter-operation and intra-operation parallelism
※ config.intra_op_parallelism_threads: Sets the number of threads used for computation within a single operation, applying the OMP_NUM_THREADS value specified in the job script (in this example, OMP_NUM_THREADS is set to 32).
※ config.inter_op_parallelism_threads: Specifies the number of TensorFlow operations that can execute concurrently. If set to 2, as in the example, two operations will run in parallel.
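These two settings map onto TensorFlow's session configuration roughly as follows; reading OMP_NUM_THREADS from the environment is one possible way to pick up the value exported in the job script, not necessarily how the system's example does it.

```python
import os
import tensorflow as tf

config = tf.ConfigProto()
# Threads used inside a single operation; reuse the OMP_NUM_THREADS value
# exported in the job script (32 in this example).
config.intra_op_parallelism_threads = int(os.environ.get("OMP_NUM_THREADS", "32"))
# Number of operations TensorFlow may execute concurrently.
config.inter_op_parallelism_threads = 2
```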
5. Checkpoint Settings for Rank 0
※ Since checkpoint saving and loading should be performed by a single process, configure checkpointing on rank 0.
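Putting the pieces together, only rank 0 is given a checkpoint directory; hooks, config, and train_op are the objects built in the previous steps, and the directory name is illustrative.

```python
# A single process (rank 0) handles checkpoint saving and restoring; the
# remaining tasks pass None so that they do not write to the same files.
checkpoint_dir = "./checkpoints" if hvd.rank() == 0 else None

with tf.train.MonitoredTrainingSession(checkpoint_dir=checkpoint_dir,
                                       hooks=hooks,
                                       config=config) as sess:
    while not sess.should_stop():
        sess.run(train_op)
```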
B. How to Use Multiple Nodes in Intel Caffe
Caffe is not officially supported by Horovod, but multi-node parallel processing is possible with Intel Caffe, which Intel has optimized for KNL. In Intel Caffe, everything required for parallel processing was integrated during development, so the deploy.prototxt, solver.prototxt, and train_val.prototxt files developed with standard Caffe can be used without modification.
※ For detailed instructions on using Intel Caffe, refer to the official Intel Caffe guide (https://github.com/intel/caffe/wiki/Multinode-guide).
If parallel processing is needed for Caffe code that a deep learning developer has modified, the corresponding parts of the Intel Caffe source code must be modified accordingly, compiled, and then run.
Method for Performing Parallel Processing in Intel Caffe (Job Script Example)
※ Network Option: Set to Intel Omni-Path Architecture (OPA)
※ PPN: Abbreviation for process per node, indicating the number of processes per node (default: 1)
※ It is also possible to run MPI directly without a job script, executing it in the same way as a standard Caffe run.
Last updated on November 08, 2024.