(Editor’s note: I meant to send this in December, but forgot. Here you go, later than it should have been!)
The merged PR #90039 ("Use dynamo fake tensor mode in aot_autograd, move aot_autograd compilation to lowering time [Merger of 89672 and 89773]" by voznesenskym, https://github.com/pytorch/pytorch/pull/90039) changes how Dynamo invokes backends: instead of passing real tensors as example inputs, we now pass fake tensors which don’t contain any actual data.
The motivation for this PR comes from the dynamic shapes workstream. The essential problem is this: when compiling for dynamic shapes, you don’t want a backend compiler to specialize on the exact sizes the input tensors had. Instead, you want the compiler to be parametric over the sizes, and perhaps only peek at the real size occasionally (introducing a guard), when it would really benefit from specializing to a specific size.
There is no way to enforce this if we pass real tensors to backends: the real tensors have all of the real shapes, and say nothing about what relationships the sizes have symbolically. However, if we pass fake tensors, we can also replace the concrete sizes with symbolic sizes. If you use those sizes without inspecting their concrete values, you end up with a compiled result that works for arbitrary choices of sizes; if you perform boolean tests on the sizes, we automatically introduce guards that limit the validity of the graph to inputs for which those guards hold.
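To make the guard mechanism concrete, here is a minimal, hand-rolled sketch (not PyTorch’s actual implementation; `SymSize` and the guard format are invented for illustration) of a symbolic size that stays symbolic under normal use, but records a guard whenever a boolean test forces it to reveal its concrete value:

```python
class SymSize:
    """Illustrative stand-in for a symbolic size: arithmetic can stay
    symbolic, but a boolean test peeks at the concrete hint and records
    a guard limiting the compiled graph's validity."""

    def __init__(self, name, hint, guards):
        self.name = name      # symbolic name, e.g. "s0"
        self.hint = hint      # the concrete size observed at compile time
        self.guards = guards  # shared list accumulating guards

    def __eq__(self, other):
        # A boolean test specializes: record a guard either way.
        result = self.hint == other
        op = "==" if result else "!="
        self.guards.append(f"{self.name} {op} {other}")
        return result


guards = []
n = SymSize("s0", hint=64, guards=guards)

# Backend code that tests the size introduces a guard: the resulting
# compiled artifact is only reusable while s0 == 64.
if n == 64:
    pass  # specialized fast path

print(guards)  # ['s0 == 64']
```

If the backend never performs such a test, `guards` stays empty and the compiled result is valid for any size.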
There is also a minor side benefit to passing fake tensors: with real tensors, it’s easy to believe that to perform static analysis on the graph (e.g., shape propagation), you have to actually run the graph with real tensor inputs. This is very slow (since you’re running the real tensor compute at compile time) and uses up a lot of memory. Fake tensors encourage you to make use of FakeTensorMode, which lets you run tensor computations without actually doing any of the real work.
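As a toy illustration of the idea (this is not PyTorch’s FakeTensorMode; the `FakeTensor` class and `fake_matmul` rule here are invented for the sketch), a fake tensor only needs to carry metadata such as its shape, so an operation can propagate shapes without allocating storage or doing any compute:

```python
from dataclasses import dataclass


@dataclass
class FakeTensor:
    """Carries only metadata; no storage is ever allocated."""
    shape: tuple


def fake_matmul(a: FakeTensor, b: FakeTensor) -> FakeTensor:
    # Shape propagation rule for a 2-D matmul: (m, k) @ (k, n) -> (m, n).
    # No arithmetic on tensor data happens at all.
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    return FakeTensor(shape=(m, n))


x = FakeTensor(shape=(1024, 512))
w = FakeTensor(shape=(512, 256))
y = fake_matmul(x, w)
print(y.shape)  # (1024, 256)
```

The point is that shape propagation over an entire graph costs only metadata bookkeeping, regardless of how large the tensors nominally are.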
Hopefully, the changes needed to make your backend work with fake tensors are straightforward. If you have any questions, please ask them here!