Custom TensorImpl and TorchDynamo

Suppose a backend uses a custom TensorImpl that stores some extra metadata (e.g. a hardware backend that supports multiple memory layouts).

In eager mode, that metadata can be read by the op implementation to perform the computation appropriately (e.g. if layout == A, use kernel A; otherwise use kernel B)… perfect!
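Concretely, here's a rough Python stand-in for the eager-mode picture (all names here, i.e. `layout_tag`, `kernel_a`, `kernel_b`, and `my_op`, are made up, and a plain Python attribute stands in for the field on the custom TensorImpl):

```python
import torch

# Hypothetical stand-ins for the backend's two kernels.
def kernel_a(x: torch.Tensor) -> torch.Tensor:
    return x * 2

def kernel_b(x: torch.Tensor) -> torch.Tensor:
    return x + 1

def my_op(x: torch.Tensor) -> torch.Tensor:
    # Eager mode: the op implementation reads the metadata off the
    # input tensor and dispatches to the matching kernel.
    if getattr(x, "layout_tag", "A") == "A":
        return kernel_a(x)
    return kernel_b(x)

# A Python attribute stands in for the field on the custom TensorImpl.
x = torch.randn(4)
x.layout_tag = "B"
y = my_op(x)  # dispatches to kernel_b
```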

However, in graph mode, when Dynamo passes an fx.Graph to the custom compiler, it provides FakeTensor objects as example inputs. Since FakeTensors are constructed from the base TensorImpl's fields only, the custom metadata isn't copied over, so the compiler doesn't know which format to expect.
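A quick repro of the metadata getting dropped (treat this as a sketch since `torch._subclasses` is a private namespace; `layout_tag` is again a made-up name):

```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

# Attach backend metadata to a plain tensor, standing in for the
# field on the custom TensorImpl.
x = torch.randn(4)
x.layout_tag = "B"

mode = FakeTensorMode()
fake = mode.from_tensor(x)

# Shape/dtype/device survive the conversion, but the custom metadata
# does not:
print(fake.shape, fake.dtype)       # torch.Size([4]) torch.float32
print(hasattr(fake, "layout_tag"))  # False
```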

How would you recommend I approach this problem for a backend compiler that requires the metadata?

I tried adding some hacks inside the FakeTensor conversion logic, but that doesn't seem reliable or clean.

You need to support fakeifying the custom TensorImpl. But it gets worse: every operator needs an updated meta function that knows how to propagate this metadata. We have meta functions for most operators… but those assume a vanilla TensorImpl. You'll have to do it all again for your special TensorImpl.
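For reference, this is roughly what registering a meta function for a single custom op looks like (the `mylib::my_op` op and both implementations are hypothetical); the painful part is that every such meta function would also have to compute and attach your custom metadata:

```python
import torch

lib = torch.library.Library("mylib", "DEF")  # hypothetical op namespace
lib.define("my_op(Tensor x) -> Tensor")

def my_op_cpu(x):
    # Stand-in for the real kernel.
    return x * 2

def my_op_meta(x):
    # A meta function computes output *metadata* only. Stock meta
    # functions propagate shape/dtype/device; a custom TensorImpl's
    # extra fields would need to be propagated here as well.
    return torch.empty_like(x)

lib.impl("my_op", my_op_cpu, "CPU")
lib.impl("my_op", my_op_meta, "Meta")

out = torch.ops.mylib.my_op(torch.randn(3))
```

If I recall correctly, newer releases also offer `torch.library.register_fake` for the same job, but the propagation problem is the same either way.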