TorchDynamo Update 10: Integrating with PyTorch/XLA for Inference and Training

Improper handling of the torch_xla fallback can cause both correctness and performance issues:

  • for correctness, a torch_xla fallback may cause the graph we save for future execution to no longer represent the behavior of the original graph we compiled/optimized
  • for performance, we do see that extra graph breaks slow things down. For the torch_xla fallback, however, this is less of an issue, since the baseline is also slowed down by the same fallback.
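The correctness point can be illustrated with a toy sketch (not real torch_xla or TorchDynamo code; `lower_without_fallback` is a hypothetical, deliberately buggy lowering): if an op the backend does not support is silently dropped instead of being routed back to eager, the cached graph diverges from the eager semantics on every later replay.

```python
def eager_fn(x):
    # Reference eager semantics.
    y = x * 2
    y = y + 1  # pretend this op is unsupported by the backend
    return y

def lower_without_fallback(fn):
    # Hypothetical broken lowering: it drops the unsupported op
    # rather than falling back to eager for it.
    def lowered(x):
        return x * 2  # the `+ 1` was lost during lowering
    return lowered

cached = lower_without_fallback(eager_fn)
print(eager_fn(3))  # 7
print(cached(3))    # 6 -- the saved graph no longer matches the original
```

A correct implementation must either replay the fallback op eagerly at the right point in the cached graph, or refuse to cache across the fallback (i.e., take a graph break) so the eager path handles it.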

As Jack also mentioned, we have some ideas here and can follow up on them.
