[GUIDE] Getting C++ custom ops to work with torch.compile

If you have a library of existing C++ custom ops that work with PyTorch, they may not work with torch.compile out of the box. Here’s our guide on getting them to work: The C++ Custom Operators Manual - Google Docs.
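At a high level, the recurring requirement is that each custom op needs a "Meta" (shape/dtype propagation) kernel so torch.compile can trace it with fake tensors. Here is a minimal, hedged sketch of that pattern using the Python-side `torch.library` API (the library name `mylib` and op `myadd` are made up for illustration; the guide covers the equivalent C++ registration):

```python
import torch

# Hypothetical library and op names, for illustration only.
lib = torch.library.Library("mylib", "DEF")
lib.define("myadd(Tensor a, Tensor b) -> Tensor")

def myadd_cpu(a, b):
    # Real kernel: runs on actual data.
    return a + b

def myadd_meta(a, b):
    # Meta kernel: only propagates shape/dtype; this is what
    # torch.compile runs when tracing with fake tensors.
    return torch.empty_like(a)

lib.impl("myadd", myadd_cpu, "CPU")
lib.impl("myadd", myadd_meta, "Meta")

# backend="eager" keeps the example self-contained (no C++ toolchain needed).
@torch.compile(backend="eager", fullgraph=True)
def f(x, y):
    return torch.ops.mylib.myadd(x, y)
```

Without the "Meta" registration, tracing the op under torch.compile would fail or fall back, which is the situation the guide addresses.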

We’re still looking into ways to make the experience better but would appreciate feedback from users.


Hi Richard, many thanks for the guidance! I’m wondering if there is an easier way to handle a broader case, say, when some function/module/custom op could cause issues with torch.compile.
Currently Dynamo offers the API torch._dynamo.allow_in_graph for cases where the user does not want Dynamo to trace through something. However, with this API Dynamo still needs to run the leaf node with fake tensors; there is an open issue for it: Torch Dynamo allow_in_graph doesn't capture the custom function in graph · Issue #97295 · pytorch/pytorch · GitHub.
Would it be possible to change the torch._dynamo.allow_in_graph API to record the leaf node output signature as well? Something like

torch._dynamo.allow_in_graph(ModuleSkipTrace, example_output=torch.rand(4,4))

That way, while wrapping the FX proxy, could we directly provide the example value and skip running the node with fake tensors?

I believe you’re looking for Black box a Python function for use with torch.compile - Google Docs

Do we have a timeline for this feature? And can we also black-box a module in the same way?