viewOp not getting functionalized for the LTC backend

If we invoke nn.GroupNorm on the LTC backend, LTC receives a non-functionalized version of viewOp.
It looks like group_norm invokes viewOp here: pytorch/group_norm.cpp at 4b2f496eab073fe5d7d4979943c4df584e138d65 · pytorch/pytorch · GitHub
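Roughly the kind of invocation I mean (a minimal sketch, assuming the TorchScript-based LTC backend; my actual setup is larger):

```python
import torch
import torch._lazy
import torch._lazy.ts_backend

# initialize the TorchScript-based lazy tensor backend
torch._lazy.ts_backend.init()

# run nn.GroupNorm on a lazy device; tracing this is where the view op shows up
gn = torch.nn.GroupNorm(num_groups=2, num_channels=4).to("lazy")
x = torch.randn(2, 4, 8, 8, device="lazy")
out = gn(x)

# flush the accumulated trace to the backend
torch._lazy.mark_step()
```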

Does this look like a bug?

Also, I have two questions:
Q1) Where can I find the functionalization source code in PyTorch?
Q2) Does functionalization happen operator-by-operator, or does it get applied to the complete graph?

Hi Rahul,

I think that, in terms of tracing, math_groupnorm will give you a decomposition using batch_norm, which has consequences for the weight inputs, among other things. It might be cleaner to add groupnorm to the natively supported ops in LTC by adding it to ts_native_functions.yaml; my understanding is that this approach should also ensure it is functionalized correctly.

Regarding your other questions, I recommend looking at Functionalization in PyTorch: Everything You Wanted To Know, which provides a pretty good overview IMO.

Hope that helps.

David

PS: Something you could try if you just want to get past the problem with the view op is torch._enable_functionalization(reapply_views=True), which should turn it into an as_strided call; however, the other problems will still persist.
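For reference, here is roughly how the functionalization tests drive it in eager mode (just a sketch; the details inside the LTC tracing path may differ):

```python
import torch

def f(x):
    y = x.view(2, 2)   # view op
    y.add_(1)          # in-place op through the view
    return y

inp = torch.zeros(4)

# wrap the input and run the function under functionalization;
# with reapply_views=True the view should stay a view (or become as_strided)
# instead of being rewritten to view_copy
x_func = torch._to_functional_tensor(inp)
torch._enable_functionalization(reapply_views=True)
try:
    out = f(x_func)
finally:
    torch._disable_functionalization()

# propagate any pending updates, then unwrap back to a plain tensor
torch._sync(out)
result = torch._from_functional_tensor(out)
```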

Hey!

By “non-functionalized version of viewOp”, do you mean that the view() call that you linked is not turned into a view_copy()? Functionalization can potentially do two things (see the sketch after this list):

(1) (always) converts mutable ops into their out-of-place variants, e.g. foo_() → foo()

(2) (optional) it can replace view ops with their cloning variants, e.g. view() → view_copy(), which is what is enabled in LTC and XLA
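To make that concrete, here is a small sketch using torch.func.functionalize (functorch.functionalize on older releases) together with make_fx; the printed graphs should show the in-place add replaced by its out-of-place variant, and, with remove='mutations_and_views', the view replaced by view_copy:

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx
from torch.func import functionalize  # functorch.functionalize on older versions

def f(x):
    y = x.view(4, 2)  # view op
    y.add_(1)         # in-place op through the view
    return y

# (1) only: remove mutations, keep view ops as views
gm = make_fx(functionalize(f, remove='mutations'))(torch.zeros(8))
print(gm.code)

# (1) + (2): also replace view ops with their *_copy variants,
# which is the flavor enabled for LTC/XLA
gm = make_fx(functionalize(f, remove='mutations_and_views'))(torch.zeros(8))
print(gm.code)
```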

If you have a code snippet that repro's the issue you're seeing, that would be helpful for diagnosing it.

On your two questions:

Q1: Functionalization kernels are codegen'd, so they aren't checked into source control. If you have a local build of PyTorch, you can find the generated functionalization kernels at build/aten/src/ATen/RegisterFunctionalizationEverything.cpp.

Q2: You can logically think of it as a graph transformation, but it does indeed happen operator-by-operator. Every time you execute an op that goes through the dispatcher, functionalization will redirect any in-place op to its out-of-place variant on the fly.
