2BP: 2-Stage Backpropagation

paper review

Author

Oren Bochman

Published

Friday, December 20, 2024

In the paper under review, the authors observe that pipeline-parallel training of large DNNs is bottlenecked by the monolithic backward pass produced by ML frameworks' automatic-differentiation tools, and they propose 2-stage backpropagation (2BP), which splits the backward step into two stages so that otherwise idle compute time can be put to use. The paper's abstract:

As Deep Neural Networks (DNNs) grow in size and complexity, they often exceed the memory capacity of a single accelerator, necessitating the sharding of model parameters across multiple accelerators. Pipeline parallelism is a commonly used sharding strategy for training large DNNs. However, current implementations of pipeline parallelism are being unintentionally bottlenecked by the automatic differentiation tools provided by ML frameworks. This paper introduces 2-stage backpropagation (2BP). By splitting the backward propagation step into two separate stages, we can reduce idle compute time. We tested 2BP on various model architectures and pipelining schedules, achieving increases in throughput in all cases. Using 2BP, we were able to achieve a 1.70x increase in throughput compared to traditional methods when training a LLaMa-like transformer with 7 billion parameters across 4 GPUs.
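To make the mechanism concrete, below is a minimal sketch (mine, not the authors' code) of what splitting the backward step into two stages can look like for a single linear layer: the input gradient, which the preceding pipeline stage is waiting for, is computed and sent on immediately, while the weight gradient, which is only needed at the optimizer step, is deferred to fill what would otherwise be pipeline bubbles. The layer choice, shapes, and variable names here are illustrative assumptions, not the paper's implementation.

```python
import torch

# For a linear layer y = x @ W.T the backward pass factors into two stages:
#   stage 1 (input gradient):  dL/dx = dL/dy @ W    -> needed immediately by the
#                                                      previous pipeline stage
#   stage 2 (weight gradient): dL/dW = dL/dy.T @ x  -> only needed at the
#                                                      optimizer step, so it can
#                                                      be deferred
torch.manual_seed(0)
x = torch.randn(8, 16)          # activations received from the previous stage
W = torch.randn(32, 16)         # this stage's weight matrix
grad_out = torch.randn(8, 32)   # dL/dy arriving from the next stage

# Stage 1: input gradient, sent upstream right away.
grad_x = grad_out @ W

# Stage 2: stash what is needed and compute the weight gradient later,
# e.g. while the accelerator would otherwise sit idle in a pipeline bubble.
deferred = [(grad_out, x)]
grad_W = sum(g.T @ a for g, a in deferred)

# Sanity check against autograd's fused backward pass.
x_ref = x.clone().requires_grad_(True)
W_ref = W.clone().requires_grad_(True)
(x_ref @ W_ref.T).backward(grad_out)
assert torch.allclose(grad_x, x_ref.grad)
assert torch.allclose(grad_W, W_ref.grad)
```

In the paper this split is applied model-wide and combined with various pipelining schedules; the 1.70x throughput figure quoted above comes from that setting, not from a toy layer like this one.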

Citation

BibTeX citation:
@online{bochman2024,
  author = {Bochman, Oren},
  title = {2BP: {2-Stage} {Backpropagation}},
  date = {2024-12-20},
  url = {https://orenbochman.github.io/reviews/2024/two-stage-backpropagation/},
  langid = {en}
}
For attribution, please cite this work as:
Bochman, Oren. 2024. “2BP: 2-Stage Backpropagation.” December 20, 2024. https://orenbochman.github.io/reviews/2024/two-stage-backpropagation/.