How to Use Deep Learning Framework Parallelization (Horovod)
A. How to Use Horovod in TensorFlow
When using multiple GPUs across multiple nodes, training can be parallelized by linking Horovod with TensorFlow. As the following example shows, this only requires adding a small amount of Horovod-specific code. Horovod can be linked with both plain TensorFlow and the Keras APIs available in TensorFlow; plain TensorFlow is covered first. (Example: MNIST dataset and LeNet-5 CNN structure)
※ Refer to the official Horovod guide for detailed information on how to link Horovod with TensorFlow. (https://github.com/horovod/horovod#usage)
The import statement for linking Horovod with TensorFlow and the Horovod initialization in the main function
※ horovod.tensorflow: a module for linking Horovod with TensorFlow
※ Horovod must be initialized before it can be used.
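A minimal sketch of this step is shown below, assuming TensorFlow 2.x (the module path and hvd.init() call come from the Horovod API; the hvd alias is reused in the later snippets):

    import tensorflow as tf
    import horovod.tensorflow as hvd   # module for linking Horovod with TensorFlow

    def main():
        # Horovod must be initialized before any other hvd.* call is made.
        hvd.init()
        print("Horovod rank %d of %d started" % (hvd.rank(), hvd.size()))

    if __name__ == "__main__":
        main()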
Set the dataset to use Horovod in the main function
※ The dataset to be accessed for each job is set and created according to the Horovod rank.
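A sketch of per-rank dataset creation, close to the official Horovod TensorFlow MNIST example (the per-rank cache file name and the batch size of 128 are assumptions):

    # each rank caches MNIST under its own file name to avoid download conflicts
    (mnist_images, mnist_labels), _ = \
        tf.keras.datasets.mnist.load_data(path='mnist-%d.npz' % hvd.rank())

    dataset = tf.data.Dataset.from_tensor_slices(
        (tf.cast(mnist_images[..., tf.newaxis] / 255.0, tf.float32),
         tf.cast(mnist_labels, tf.int64)))
    dataset = dataset.repeat().shuffle(10000).batch(128)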
Set the Horovod-related settings, broadcast, and number of training epochs for the optimizer in the main function
※ Apply the Horovod-related settings to the optimizer, and broadcast the initial variables from rank 0 to every job.
※ Set the number of training steps for each job according to the number of Horovod jobs.
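A sketch of the optimizer settings, broadcast, and per-job step count, assuming TensorFlow 2.x with a GradientTape training loop; it reuses dataset from the previous snippet, while mnist_model and loss stand for the LeNet-5 model and loss function assumed to be defined elsewhere (the learning rate and step count are illustrative):

    # scale the learning rate by the number of jobs
    opt = tf.optimizers.Adam(0.001 * hvd.size())

    @tf.function
    def training_step(images, labels, first_batch):
        with tf.GradientTape() as tape:
            probs = mnist_model(images, training=True)
            loss_value = loss(labels, probs)
        # Horovod: wrap the tape so gradients are averaged across all jobs
        tape = hvd.DistributedGradientTape(tape)
        grads = tape.gradient(loss_value, mnist_model.trainable_variables)
        opt.apply_gradients(zip(grads, mnist_model.trainable_variables))
        if first_batch:
            # broadcast the initial variables from rank 0 so every job starts identically
            hvd.broadcast_variables(mnist_model.variables, root_rank=0)
            hvd.broadcast_variables(opt.variables(), root_rank=0)
        return loss_value

    # divide the total number of steps among the Horovod jobs
    for batch, (images, labels) in enumerate(dataset.take(10000 // hvd.size())):
        loss_value = training_step(images, labels, batch == 0)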
Allocate GPU devices according to the Horovod process rank
※ Allocate a single job for each GPU according to the Horovod local rank.
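A sketch of GPU pinning by local rank (the memory-growth setting follows the official Horovod examples):

    gpus = tf.config.experimental.list_physical_devices('GPU')
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)
    if gpus:
        # pin this job to the single GPU given by its local rank on the node
        tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')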
Set checkpoint for the rank 0 job
※ Saving and restoring checkpoints must be handled by a single process, so this work is assigned to the rank 0 job.
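A sketch of rank-0-only checkpointing (the checkpoint path is an assumption; mnist_model and opt come from the snippets above):

    checkpoint_dir = './checkpoints'   # assumed output path
    checkpoint = tf.train.Checkpoint(model=mnist_model, optimizer=opt)
    if hvd.rank() == 0:
        # only one process writes checkpoints so the other jobs cannot corrupt them
        checkpoint.save(checkpoint_dir)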
B. How to Use Horovod in Keras
By linking Keras with Horovod, parallelization is also possible when the Keras APIs in TensorFlow are used. As the following example shows, this only requires adding a small amount of Horovod-specific code. (Example: MNIST dataset and LeNet-5 CNN structure)
※ Refer to the official Horovod guide for detailed information on how to link Horovod with Keras. (https://github.com/horovod/horovod/blob/master/docs/keras.rst)
The import statement for linking Horovod with Keras and the Horovod initialization in the main function
※ horovod.tensorflow.keras: a module for using Horovod with Keras in TensorFlow
※ Horovod must be initialized before it can be used.
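A minimal sketch of this step, using the same structure as the TensorFlow example above:

    import tensorflow as tf
    import horovod.tensorflow.keras as hvd   # module for using Horovod with Keras in TensorFlow

    def main():
        # Horovod must be initialized before any other hvd.* call is made.
        hvd.init()

    if __name__ == "__main__":
        main()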
Allocate GPU devices according to the Horovod process rank
※ Allocate a single job for each GPU according to the Horovod local rank.
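GPU pinning is the same as in the TensorFlow example; a sketch:

    gpus = tf.config.experimental.list_physical_devices('GPU')
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)
    if gpus:
        # one GPU per job, selected by the Horovod local rank
        tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')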
Set the Horovod-related settings, broadcast, and number of training epochs for the optimizer in the main function
※ Set the number of training steps for each job according to the number of Horovod jobs.
※ Apply the Horovod-related settings to the optimizer, and broadcast the initial variables from rank 0 to every job.
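A sketch of the optimizer, broadcast callback, and per-job step count (model stands for the LeNet-5 Keras model assumed to be built earlier; the learning rate, step count, and epoch count are illustrative):

    # scale the learning rate by the number of jobs, then add the Horovod settings to the optimizer
    opt = tf.optimizers.Adam(0.001 * hvd.size())
    opt = hvd.DistributedOptimizer(opt)

    model.compile(loss=tf.losses.SparseCategoricalCrossentropy(),
                  optimizer=opt,
                  metrics=['accuracy'])

    callbacks = [
        # broadcast the initial variables from rank 0 so every job starts from the same state
        hvd.callbacks.BroadcastGlobalVariablesCallback(0),
    ]

    # each job runs its share of the steps per epoch
    steps_per_epoch = 500 // hvd.size()
    epochs = 24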
Set checkpoint for the rank 0 job
※ Saving and restoring checkpoints must be handled by a single process, so this work is assigned to the rank 0 job.
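A sketch of saving checkpoints only from rank 0 (the file name pattern is an assumption):

    if hvd.rank() == 0:
        # only rank 0 writes checkpoints so several jobs do not overwrite the same file
        callbacks.append(tf.keras.callbacks.ModelCheckpoint('./checkpoint-{epoch}.h5'))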
Set the training output (verbose) according to the Horovod process rank
※ So that training progress messages are printed only once, the verbose value is set to 1 for the rank 0 job alone.
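A sketch of rank-dependent verbosity together with the final fit call; it reuses dataset, callbacks, steps_per_epoch, and epochs from the snippets above:

    # print training progress only from rank 0 to avoid duplicated log lines
    verbose = 1 if hvd.rank() == 0 else 0

    model.fit(dataset,
              steps_per_epoch=steps_per_epoch,
              epochs=epochs,
              callbacks=callbacks,
              verbose=verbose)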
C. How to Use Horovod in PyTorch
When using multiple GPUs across multiple nodes, training can be parallelized by linking Horovod with PyTorch. As the following example shows, this only requires adding a small amount of Horovod-specific code. (Example: MNIST dataset and LeNet-5 CNN structure)
※ Refer to the official Horovod guide for detailed information on how to use Horovod in PyTorch. (https://github.com/horovod/horovod/blob/master/docs/pytorch.rst)
The import statement for linking Horovod with PyTorch and the Horovod initialization in the main function
※ torch.utils.data.distributed: module for performing distributed training in PyTorch
※ horovod.torch: module for adopting Horovod with PyTorch
※ Horovod is initialized, and the GPU that runs each job is selected according to the rank assigned during initialization.
※ To use one CPU thread for each job, torch.set_num_threads(1) is employed.
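A minimal sketch of the PyTorch imports, initialization, device selection, and thread setting:

    import torch
    import torch.utils.data.distributed   # distributed sampler used for distributed training
    import horovod.torch as hvd           # module for using Horovod with PyTorch

    def main():
        hvd.init()                                    # initialize Horovod
        if torch.cuda.is_available():
            # bind this job to the GPU given by its Horovod local rank
            torch.cuda.set_device(hvd.local_rank())
        torch.set_num_threads(1)                      # one CPU thread per job

    if __name__ == "__main__":
        main()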
Add Horovod-related information in the training process
※ train_sampler.set_epoch(epoch): sets the train sampler’s epoch
※ Because the training dataset is split across and processed by several jobs, len(train_sampler) is used to obtain the number of samples handled by each job.
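A sketch of the training loop with the Horovod-related additions, close to the official Horovod PyTorch MNIST example (the model, optimizer, loader, and sampler are assumed to come from the main function shown later; the NLL loss and logging interval are illustrative):

    import torch.nn.functional as F

    def train(epoch, model, optimizer, train_loader, train_sampler):
        model.train()
        # reshuffle each job's shard differently every epoch
        train_sampler.set_epoch(epoch)
        for batch_idx, (data, target) in enumerate(train_loader):
            data, target = data.cuda(), target.cuda()
            optimizer.zero_grad()
            loss = F.nll_loss(model(data), target)
            loss.backward()
            optimizer.step()
            if batch_idx % 10 == 0:
                # len(train_sampler) is the number of samples handled by this job
                print('Train Epoch: {} [{}/{}]\tLoss: {:.6f}'.format(
                    epoch, batch_idx * len(data), len(train_sampler), loss.item()))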
Calculate the average value using Horovod
※ Horovod's allreduce communication is used to compute the average value across several nodes.
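A sketch of the metric_average helper, matching the allreduce usage in the official Horovod PyTorch example:

    def metric_average(val, name):
        # average a scalar metric across all jobs with Horovod's allreduce
        tensor = torch.tensor(val)
        avg_tensor = hvd.allreduce(tensor, name=name)
        return avg_tensor.item()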
Add Horovod-related information in the test process
※ The metric_average function defined above is used because the loss and accuracy must be averaged across several nodes.
※ Because the allreduce communication leaves every node with the same loss and accuracy values, only the rank 0 job prints them.
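A sketch of the test loop with the Horovod-related additions, reusing F and metric_average from the snippets above (test_loader and test_sampler are assumed to be built in the main function in the same way as the training loader and sampler):

    def test(model, test_loader, test_sampler):
        model.eval()
        test_loss, test_accuracy = 0.0, 0.0
        with torch.no_grad():
            for data, target in test_loader:
                data, target = data.cuda(), target.cuda()
                output = model(data)
                test_loss += F.nll_loss(output, target, reduction='sum').item()
                pred = output.argmax(dim=1, keepdim=True)
                test_accuracy += pred.eq(target.view_as(pred)).sum().item()

        # average over this job's shard, then average across all jobs
        test_loss /= len(test_sampler)
        test_accuracy /= len(test_sampler)
        test_loss = metric_average(test_loss, 'avg_loss')
        test_accuracy = metric_average(test_accuracy, 'avg_accuracy')

        # after allreduce every job holds the same values, so only rank 0 prints them
        if hvd.rank() == 0:
            print('Test set: Average loss: {:.4f}, Accuracy: {:.2f}%'.format(
                test_loss, 100.0 * test_accuracy))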
Set dataset to use Horovod in the main function
※ The dataset to be accessed for each job is set and created according to the Horovod rank.
※ Set the distributed sampler of PyTorch and assign it to the data loader.
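A sketch of the dataset and distributed sampler setup (the per-rank data directory and batch size follow the official Horovod PyTorch MNIST example):

    from torchvision import datasets, transforms

    # each rank downloads MNIST into its own directory to avoid conflicts
    train_dataset = datasets.MNIST('data-%d' % hvd.rank(), train=True, download=True,
                                   transform=transforms.Compose([
                                       transforms.ToTensor(),
                                       transforms.Normalize((0.1307,), (0.3081,))]))

    # the PyTorch distributed sampler splits the dataset across jobs by Horovod rank
    train_sampler = torch.utils.data.distributed.DistributedSampler(
        train_dataset, num_replicas=hvd.size(), rank=hvd.rank())
    train_loader = torch.utils.data.DataLoader(
        train_dataset, batch_size=64, sampler=train_sampler)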
Add Horovod-related settings to the optimizer and the sampler to the training and test process in the main function
※ Apply the Horovod-related settings to the optimizer, and broadcast the initial model and optimizer state from rank 0 to every job.
※ Add the sampler to the training and test processes, and pass it to each function.
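A sketch of the optimizer setup, broadcast, and the loop that passes the samplers to the training and test functions (Net, the test loader and sampler, and the hyperparameters are assumptions carried over from the earlier snippets):

    model = Net().cuda()   # Net: the LeNet-5 style CNN assumed to be defined elsewhere
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size(), momentum=0.5)

    # broadcast the initial model and optimizer state from rank 0 to every job,
    # then add the Horovod settings to the optimizer
    hvd.broadcast_parameters(model.state_dict(), root_rank=0)
    hvd.broadcast_optimizer_state(optimizer, root_rank=0)
    optimizer = hvd.DistributedOptimizer(optimizer,
                                         named_parameters=model.named_parameters())

    for epoch in range(1, 11):
        train(epoch, model, optimizer, train_loader, train_sampler)
        test(model, test_loader, test_sampler)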
Last updated: December 2, 2021.