TorchDynamo Update 9: Making DDP Work with TorchDynamo

TL;DR: Previously, TorchDynamo interrupted compute-communication overlap in DDP badly enough that DDP training with dynamo was up to 25% slower than DDP training with eager. We modified dynamo to insert additional graph breaks when DDP is detected, in order to restore opportunities for compute-communication overlap. With these changes, DDP with dynamo (compiled with TorchInductor) is never more than 1% slower than eager DDP, and is up to 15% faster than eager on 64 GPUs. These results are based on benchmarks of 6 OSS models.

TorchDynamo

If you are new to TorchDynamo, the links below will help you catch up on the project. TorchDynamo generates FX graphs from Python bytecode, and various backends are integrated with TorchDynamo to run inference/training of the model. In the future, with the help of a cost model, TorchDynamo could automate the selection of the best backend for each subgraph to achieve optimal performance.
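
To make this concrete, here is a minimal sketch (not from the original post) of how a backend plugs into TorchDynamo: dynamo captures an FX graph from a function’s Python bytecode and hands it to a backend callable, which returns something callable to run in its place. The inspect_backend and toy_fn names are illustrations only; depending on your build, the module may be the standalone torchdynamo package rather than torch._dynamo.

    import torch
    import torch._dynamo as dynamo

    def inspect_backend(gm: torch.fx.GraphModule, example_inputs):
        # The backend receives the captured FX graph plus example inputs.
        gm.graph.print_tabular()
        return gm.forward  # just run the graph eagerly; a real backend would compile it

    @dynamo.optimize(inspect_backend)
    def toy_fn(x):
        return torch.relu(x) * 2

    toy_fn(torch.randn(8))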

Background - why Dynamo doesn’t work well with DDP

DDP (DistributedDataParallel) is a tool for distributed training: it replicates a single-GPU model across multiple processes and trains the replicas synchronously, in parallel.

DDP training generally goes as follows:

  1. Each rank will start with an identical copy of a model. A rank is a process; different ranks can be on the same machine (perhaps on different gpus) or on different machines.
  2. Pass a different batch of data to each rank.
  3. Run the forward pass (on each rank).
  4. Run the backward pass (on each rank). Now the gradients on each rank differ, because each rank used different data.
  5. Synchronize gradients with an allreduce call. The allreduce communicates across ranks so that afterwards the gradients on all ranks are identical (i.e. each rank holds the average of the gradients from all the ranks).
  6. Run the optimizer.
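
As a concrete illustration of these steps, a minimal per-rank training loop might look like the sketch below. This is a hypothetical example (toy linear model, random data), assuming a torchrun launch so that LOCAL_RANK is set in the environment.

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    dist.init_process_group("nccl")                  # one process per rank
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(1024, 1024).cuda(),  # 1. identical copy on each rank
                device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    for _ in range(10):
        x = torch.randn(32, 1024, device="cuda")     # 2. each rank uses its own batch
        loss = model(x).sum()                        # 3. forward pass on each rank
        loss.backward()                              # 4./5. backward pass; DDP hooks launch
                                                     #       bucketed allreduces as gradients
                                                     #       become ready
        opt.step()                                   # 6. optimizer step
        opt.zero_grad()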

In the figure above, we see that steps 4 and 5 are combined by allowing allreduces to start before the backward pass has completed. This overlaps some of the communication with the rest of the backward-pass computation, speeding up training. This is how eager DDP training works today. By default, the allreduces are gathered into “buckets”, chosen via a heuristic that targets roughly 25MB per bucket (the size is configurable).
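
The bucket size is exposed on the eager DDP constructor. Continuing the sketch above (25 is the default, shown here only for illustration):

    model = DDP(torch.nn.Linear(1024, 1024).cuda(),
                device_ids=[local_rank],
                bucket_cap_mb=25)  # larger buckets mean fewer allreduces but less overlap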

However, once we enable dynamo, it compiles the individual operations into a single graph (or a small number of graphs, if graph breaks occur). Gradient synchronization then can’t start until the entire backward pass has completed.

In the diagram above, we see that with naive usage of dynamo, DDP allreduces don’t start until the entire backward pass computation is finished. In many cases, the lost opportunity for communication/compute overlap can be more significant than the speedup provided by inductor.

Solution

@wconstab implemented the DDPOptimizer, which does the following:

  • Detects if DDP is active
  • If so, it identifies the DDP bucket size and splits the dynamo graph into subgraphs, so that the sum of the parameter sizes in each subgraph is roughly equal to the bucket size.

The heuristic used by DDPOptimizer won’t always produce buckets identical to those produced by eager DDP; we assume that eager DDP’s bucketing heuristic isn’t perfect either, especially in cases where additional graph breaks occur.
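
From the user’s side, this might look like the sketch below. The torch._dynamo.config.optimize_ddp flag shown here is the switch that gates DDPOptimizer (the exact name and default may differ across versions), and local_rank is assumed to be defined as in the earlier sketch.

    import torch
    import torch._dynamo
    from torch.nn.parallel import DistributedDataParallel as DDP

    torch._dynamo.config.optimize_ddp = True            # let dynamo split graphs to match DDP buckets
    ddp_model = DDP(torch.nn.Linear(1024, 1024).cuda(),
                    device_ids=[local_rank])
    compiled_model = torch._dynamo.optimize("inductor")(ddp_model)  # compile after wrapping in DDP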

Results

Without the DDPOptimizer, we can compare DDP+dynamo latency to DDP+eager latency, and we find that for >1 node, dynamo can sometimes perform as much as 25% worse than eager.

[Figure: nov14_summary_nooptimizer]

The figure above shows latency comparisons between eager and inductor on DDP training, without DDPOptimizer. For example, the ~25% slowdown in timm_VIT is based on ~1720ms eager latency on 64 gpus, compared to ~2300ms inductor latency on 64 gpus.

With DDPOptimizer enabled, we do the same comparison and see that DDP with dynamo is never more than 1% slower than eager, and is up to 15% faster in some of the 64-GPU configurations.

[Figure: nov14_summary_withoptimizer]

We can also compare the speedup that DDPOptimizer provides for each of the models. This chart compares DDP + dynamo latency to DDP + dynamo + DDPOptimizer latency:

[Figure: nov14_optimizer_vs_none]

We see that in most cases DDPOptimizer provides very little benefit (or even a slowdown) in the 1-node (i.e. 8-GPU) configuration, where communication takes less time. But in the multi-node configurations, where communication goes over a network, we see larger speedups, especially for larger models like hf_T5_large or timm_VIT.

Correctness

@wconstab ran the entire dynamo benchmark suite with DDPOptimizer on a single GPU; only 1 new issue was encountered, an AOTAutograd issue that was filed separately.

We also did correctness checks on the 6 OSS models, using the same multi-node setup used for the performance measurements. Because those measurements were run with a separate torchbench benchmark setup, we wanted a minimal set of checks to verify correctness on the same suite used for measuring performance. In these tests, 5/6 models passed; the remaining model (resnet50) also fails on non-distributed inductor tests.

Benchmark configuration

  • Hardware - AWS cluster, A100 40GB
  • 8 nodes, 8 gpus per node = 64 GPUs
  • EFA network configuration
  • DDP run with static_graph=False
  • Models (batch size):
    • resnet50 (128) [Note: we also collected results with batch_size=32, where communication latency dominates; in that case we see no speedup]
    • hf_T5 (12)
    • hf_T5_large (4)
    • timm_vision_transformer_large (16)
    • hf_GPT2_large (4)
    • hf_Bert (32)
  • Results are based on 19 samples taken over a 30-hour period, on a static set of nodes on the AWS cluster.

Caveats

1. DDP needs to be run with static_graph=False.

Static graph is an optimization for eager DDP. It relies on assumptions about the behavior of the program remaining the same - e.g. gradients for the same set of parameters must always be made available in the same order on each invocation. It allows a few optimizations:

  • Re-ordering buckets to more accurately match the actual execution order
  • Skipping the find_unused_parameters step, which otherwise needs to run every iteration to identify which parameters participate in the backward pass.
  • Activation checkpointing

In the 6 OSS models we tested, we don’t see any measurable impact on performance from static_graph=True (at least in eager mode). However, some other models are known to benefit from these optimizations.

Unfortunately, dynamo + DDP currently doesn’t work with static_graph=True. (This is because DDP treats the tracing run as a first iteration, during which it intends to collect data about the program; subsequent iterations then fail some assertions.)

We expect that it should be possible to add some workarounds to support this - but currently, static_graph needs to be turned off to work with dynamo.
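
For reference, static graph is requested through the eager DDP constructor, as in the hypothetical snippet below (my_module stands for any nn.Module; as noted above, this flag must remain off when compiling with dynamo today):

    ddp_model = DDP(my_module, device_ids=[local_rank], static_graph=True)  # eager-only for now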

2. Cudagraphs cause OOM.

Cudagraphs show performance improvements in many scenarios, but also increase memory usage in other scenarios. Because DDPOptimizer creates additional graphs, it exacerbates these memory issues. Therefore we expect many users to need to turn off cudagraphs to be able to run with DDPOptimizer.
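
A sketch of how one might turn cudagraphs off under inductor; we assume the toggle lives at torch._inductor.config.triton.cudagraphs, which may vary across versions:

    import torch._inductor.config as inductor_config

    inductor_config.triton.cudagraphs = False  # trade some speed for lower memory alongside DDPOptimizer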

Next Steps

  • FSDP - @wconstab and @aazzolini have already started investigating issues that arise when running dynamo with FSDP models.
  • Better integration with DDP could possibly enable support for static_graph=True, or further performance improvements. Currently, DDPOptimizer makes a best-effort attempt to match DDP’s buckets, and DDP then re-buckets based on its own heuristics, which may not always match DDPOptimizer; this can result in delayed allreduce calls. If DDPOptimizer could instead provide its bucket choices to DDP, this wouldn’t be a problem.

Appendix

Reproducing benchmark results

Huge thanks to @sanketpurandare; the benchmark scripts were adapted from the script and results of his initial work on DDP benchmarking. He also provided a lot of help debugging bad performance.

OSS DDP benchmarks can be run using the same script at:

python userbenchmark/ddp_experiments/__init__.py --job_dir /full/path/to/shared/directory --repeat 20

Note that the script is mostly targeted for use with a slurm-based setup.

Feel free to adapt these scripts for your own distributed benchmarking.

We’re working on merging this into the typical userbenchmark workflow for torchbench, so that testing can be automated. A big thanks goes to Xu Zhao for his help.

A few lessons we learned when running these experiments:

  • Run experiments on the same slurm allocation to ensure that the same hardware is used for all measurements. This is especially important for distributed benchmarking, where network topology can significantly affect the results.
  • Don’t build pytorch with DEBUG=1 when you want to run benchmarks! While this is generally true, it’s particularly true with distributed benchmarks; we found that NCCL on EFA performs very badly with DEBUG=1 (50x slower in some configurations).
  • Watch out for issues with CPU/GPU affinity; we suspect that this caused issues with stragglers that may have increased noise in some measurements.

Is there a comparison of memory usage before and after using DDPOptimizer? Especially for the models that could previously capture the full graph. Thanks!

Hello!
If I use AOTAutograd to get the forward and backward graphs, can I just split the backward graph into subgraphs?
With DDPOptimizer, is it necessary to split the dynamo forward graph into subgraphs?
Thanks

The problem with splitting after AOTAutograd is that it misses the point: we need a separate AOTAutograd function per pipeline stage, because otherwise the hooks won’t be fired in a timely manner. AOTAutograd is what actually makes the functions, so if you split after AOTAutograd has made a single function, you have failed to make the hooks trigger soon enough.

Unless everything is made lazy, so that hooks fire well before execution even without splitting the graph. Of course, that would only work in no-python mode, i.e. with the whole graph captured in dynamo.

Imo, the ideal state is simply to include the collectives into the graph, which is the eventual plan.

What’s the plan for firing the other hooks in a timely manner without splitting the graph?

There’s a lot undetermined here, but in principle, there’s no reason you can’t trace through the hooks, and have the collectives inserted at the right locations in your graph.

The static_graph=False restriction no longer applies after Rohan Varma’s PR [DDP] multiple forward support for static graph by rohan-varma · Pull Request #103487 · pytorch/pytorch · GitHub.