Lazy Tensor Core

Lazy tensor support in PyTorch is an active area of exploration, and this is a call for community involvement to discuss the requirements, implementation, goals, etc.

We are looking for ways to bring compiler optimizations to a wider range of PyTorch programs than can easily be compiled via TorchScript, and to provide a better self-service path for accelerator vendors (especially for training) to integrate their own graph-based stacks. We take inspiration from pytorch/xla (on GitHub), which uses lazy tensors to target XLA, and through the work of Alex Suhan and team we are prototyping ways to generalize this approach and see if it can be a useful part of core PyTorch.
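To illustrate the general idea (not the actual Lazy Tensor Core API): a lazy tensor records operations into a graph instead of executing them eagerly, and only runs the computation when a concrete value is needed, at which point a backend compiler such as XLA can optimize the whole graph at once. A minimal, self-contained sketch of that execution model:

```python
import numpy as np

class LazyTensor:
    """Conceptual sketch: record ops into a graph; execute only on materialize."""

    def __init__(self, op, inputs=(), value=None):
        self.op = op          # operation name, or "leaf" for concrete data
        self.inputs = inputs  # upstream LazyTensor nodes
        self.value = value    # concrete array, for leaves only

    @staticmethod
    def leaf(array):
        return LazyTensor("leaf", value=np.asarray(array, dtype=float))

    def __add__(self, other):
        return LazyTensor("add", (self, other))  # no computation happens here

    def __mul__(self, other):
        return LazyTensor("mul", (self, other))  # graph building only

    def materialize(self):
        # A real backend would hand the accumulated graph to a compiler
        # (e.g. XLA) here; this sketch just interprets it recursively.
        if self.op == "leaf":
            return self.value
        a, b = (t.materialize() for t in self.inputs)
        return a + b if self.op == "add" else a * b

x = LazyTensor.leaf([1.0, 2.0])
y = LazyTensor.leaf([3.0, 4.0])
z = x * y + x           # still just a graph; nothing has run yet
print(z.materialize())  # [ 4. 10.]
```

The key property is that by the time `materialize()` is called, the full program is visible as a graph, which is what enables whole-program optimization without requiring the user to annotate or rewrite their model.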

Some open areas of discussion right now that we’d be interested to hear your perspective on:

  • What is the right user facing API for a core lazy tensor?
Relevant links:

  • RFC #18, for more details and discussion.
  • Alex Suhan’s Lazy Tensor prototype, which also links to the XLA LTC backend.