Is there a plan for FX in a C++ IR?

Hi,
Is there a way to consume FX graphs directly from C++? Per How to get the computation graph and pass it to C++? - #2 by James_Reed - FX (Functional Transformations) - PyTorch Forums, there was no plan for this from PyTorch at the time, but that thread is about three years old, hence raising the question again here.

The motivation is a midlayer torch.compile backend with its own graph format that isn't exposed to the Python side. If the backend wants to convert an FX graph to its internal graph format, today it has to do so from the Python FX graph. However, that conversion would require the backend's graph-construction APIs to be exposed to Python. Alternatively, the backend could lower FX to an intermediate C++ IR such as TorchScript and continue from the C++ side, but TorchScript is in maintenance mode, and it's unclear whether its IR would support all current and future torch.compile artifacts (symbolic values, for example) or continue to be improved.
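For context, here is a minimal sketch of the Python-side handoff being described: a torch.compile backend is just a callable that receives the FX graph as a torch.fx.GraphModule, and that Python object is the only place a custom backend can start its conversion today (the backend name and function below are illustrative, not a real backend):

```python
import torch

def my_backend(gm: torch.fx.GraphModule, example_inputs):
    # A real midlayer backend would walk gm.graph here and build its
    # own internal IR; this sketch just inspects the nodes from Python
    # and returns the unmodified forward as the compiled callable.
    for node in gm.graph.nodes:
        print(node.op, node.target)
    return gm.forward

@torch.compile(backend=my_backend)
def f(x):
    return torch.relu(x) + 1

f(torch.randn(4))
```

The point of friction is visible here: `gm` is a Python object, so any translation to a backend-private C++ graph has to either cross the Python boundary node by node or go through some serialized intermediate.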

Is there any other way a backend can convert the FX graph to its own graph at the C++ level? Ideally, if there were an FX-equivalent graph representation in C++, this would become possible. Alternatively, is there any serialized form of the FX graph (suitable for training as well as inference), so that the Python layer can serialize FX to that form and C++ can consume it from the serialized file?


Not for FX itself, but there is for the ExportedProgram produced by torch.export.export: you can save it with torch.export.save (torch.export — PyTorch 2.2 documentation). The saved artifact contains the FX graph inside the ExportedProgram in JSON format, following this schema: pytorch/torch/_export/serde/schema.yaml at main · pytorch/pytorch · GitHub

Will this be a replacement for TorchScript?