Partial graph allocation for accelerators

Hello, I have a question about integrating custom backend compilers into PyTorch.

Suppose I have an accelerator whose compiler supports only a limited set of operations, such as convolutions and ReLUs.

Is it possible to delegate the partial FX graphs that contain only convolutions and ReLUs to my backend compiler, while sending the remaining subgraphs to other compilers, such as TorchInductor, for CPU execution?

Here’s an example of partial graph delegation.
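The sketch below shows one way to set this up with `torch.compile`; it is not an official recipe, and `my_accelerator_compile` is a hypothetical stand-in for your compiler's entry point. The idea is to normalize the Dynamo graph to aten ops with `aot_autograd`, then use `CapabilityBasedPartitioner` with an `OperatorSupport` subclass to carve out the conv/relu subgraphs. Exact names and behavior may vary between PyTorch versions.

```python
import torch
from torch._dynamo.backends.common import aot_autograd
from torch.fx.passes.infra.partitioner import CapabilityBasedPartitioner
from torch.fx.passes.operator_support import OperatorSupport


def my_accelerator_compile(gm: torch.fx.GraphModule) -> torch.nn.Module:
    # Hypothetical placeholder: a real backend would lower `gm` to the
    # accelerator and return a callable wrapper. Here we return it unchanged.
    return gm


class ConvReluSupport(OperatorSupport):
    """Marks only the ops the accelerator compiler can handle."""

    def is_node_supported(self, submodules, node):
        return node.op == "call_function" and node.target in (
            torch.ops.aten.convolution.default,
            torch.ops.aten.relu.default,
        )


def partitioning_compiler(gm: torch.fx.GraphModule, example_inputs):
    # Group maximal connected regions of supported ops into fused
    # submodules; unsupported nodes stay in the top-level graph.
    partitioner = CapabilityBasedPartitioner(
        gm, ConvReluSupport(), allows_single_node_partition=True
    )
    partitioned = partitioner.partition_and_fuse()

    # Swap each fused submodule for its accelerator-compiled version.
    for name, submodule in partitioned.named_children():
        if "fused" in name:
            setattr(partitioned, name, my_accelerator_compile(submodule))
    return partitioned


# aot_autograd lowers the captured graph to aten IR before our compiler
# sees it, so matching on torch.ops.aten.* targets above works.
my_backend = aot_autograd(fw_compiler=partitioning_compiler)

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU(), torch.nn.Flatten()
)
compiled = torch.compile(model, backend=my_backend)
compiled(torch.randn(1, 3, 32, 32))
```

The nodes that fall outside the fused partitions simply keep running through the default eager path; if you want them compiled too, you could hand the remaining graph to another compiler such as TorchInductor instead of returning it as-is.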

This is what I wanted to know. Thank you so much!

You can also find the example usage in