Difference between __torch_function__ and __torch_dispatch__

How do the design goals of the two extension points differ?

__torch_function__ solves the problem of “I can overload the meaning of tensor.add() with Python duck typing, but I can’t overload the meaning of torch.add(tensor), because that isn’t a method call.” It lets you define a method that overloads the meaning of torch.* API calls, and it works even if you don’t subclass Tensor.
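As a minimal sketch of the second point, here is a duck-typed class that is not a Tensor subclass yet still intercepts a torch.* call (the ScalarTensor name and its fields are illustrative, not from the source):

```python
import torch

class ScalarTensor:
    """A duck-typed wrapper that is NOT a torch.Tensor subclass."""
    def __init__(self, value):
        self.value = value

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        if kwargs is None:
            kwargs = {}
        if func is torch.add:
            # Overload the meaning of torch.add for our type.
            a, b = args
            return ScalarTensor(a.value + b.value)
        # Let torch raise a TypeError for anything we don't handle.
        return NotImplemented

x = ScalarTensor(1.0)
y = ScalarTensor(2.0)
z = torch.add(x, y)  # routed to ScalarTensor.__torch_function__
```

Even though neither argument is a Tensor, torch.add notices that they define __torch_function__ and hands control back to our class.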

__torch_dispatch__ solves the problem of “PyTorch has a big pile of C++ code implementing important subsystems like autograd, and I can’t interpose on it from Python.” It offers a callback into Python after those subsystems have run. A __torch_dispatch__ Tensor subclass (these must be Tensor subclasses) can, for example, change the behavior of operations invoked by the autograd engine.

You can also find more detailed (though not yet complete) documentation in the “Extending PyTorch” section of the main PyTorch documentation.