2BP: 2-Stage Backpropagation

paper review

Published

Tuesday, September 17, 2024

In (Shyam et al. 2024) the authors …

As Deep Neural Networks (DNNs) grow in size and complexity, they often exceed the memory capacity of a single accelerator, which forces model parameters to be sharded across multiple accelerators. Pipeline parallelism is a commonly used sharding strategy for training large DNNs, but current implementations are unintentionally bottlenecked by the automatic differentiation tools provided by ML frameworks. The paper introduces 2-stage backpropagation (2BP): by splitting the backward propagation step into two separate stages, idle compute time can be reduced. The authors test 2BP on various model architectures and pipelining schedules and report throughput increases in all cases, including a 1.70x increase over traditional methods when training a LLaMa-like transformer with 7 billion parameters across 4 GPUs.
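To make the idea concrete, here is a minimal PyTorch sketch of how the backward pass of a single pipeline stage could be split in two: a first call that only produces the gradient with respect to the stage's input activations (which is what the previous pipeline stage is waiting for), and a deferred second call that computes the weight gradients later, when the device would otherwise sit idle. The names `backward_p1` and `backward_p2` and the toy `nn.Linear` stage are my own illustration of the technique, not the paper's implementation.

```python
import torch
import torch.nn as nn

def backward_p1(stage_input, stage_output, grad_output):
    """Stage 1: compute only dL/d(input) so it can be sent upstream immediately."""
    (grad_input,) = torch.autograd.grad(
        outputs=stage_output,
        inputs=stage_input,
        grad_outputs=grad_output,
        retain_graph=True,  # keep the graph alive for the deferred weight gradients
    )
    return grad_input

def backward_p2(stage_params, stage_output, grad_output):
    """Stage 2 (deferred): compute dL/d(weights) for this pipeline stage."""
    params = list(stage_params)
    grads = torch.autograd.grad(
        outputs=stage_output,
        inputs=params,
        grad_outputs=grad_output,
    )
    for p, g in zip(params, grads):
        p.grad = g if p.grad is None else p.grad + g

# Toy usage on a single device, standing in for one pipeline stage.
stage = nn.Linear(16, 16)
x = torch.randn(4, 16, requires_grad=True)
y = stage(x)
grad_y = torch.ones_like(y)                  # pretend this arrived from the next stage

grad_x = backward_p1(x, y, grad_y)           # could be sent to the previous stage now
backward_p2(stage.parameters(), y, grad_y)   # run later, filling the pipeline bubble
```

The trade-off is memory: deferring the weight-gradient computation means the stage's computation graph (and its activations) must be retained until stage 2 runs, in exchange for a smaller pipeline bubble.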

References

Shyam, Vasudev, Jonathan Pilault, Emily Shepperd, Quentin Anthony, and Beren Millidge. 2024. “Tree Attention: Topology-Aware Decoding for Long-Context Attention on GPU Clusters.” https://arxiv.org/abs/2408.04093.