Implementing an OpenCL backend for PyTorch

In my view, the difficulty in bootstrapping a new architecture is twofold:

  • PyTorch has quite a lot of infrastructure (the dispatcher; see e.g. ezyang’s blog post and podcast),
  • operator coverage (where you apparently have a head start).

You likely want to tackle the first before the second becomes your main problem.
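
For a taste of what hooking into the dispatcher looks like, here is a minimal sketch of my own (not taken from any existing backend) that registers a Python kernel for one aten op under PrivateUse1, the dispatch key PyTorch reserves for out-of-tree backends; the kernel body is just a placeholder:

```python
import torch

# Grab the aten namespace for registering implementations.
lib = torch.library.Library("aten", "IMPL")

# Placeholder kernel; a real backend would launch an OpenCL kernel on
# buffers owned by its own allocator.
def opencl_add(x1, x2, alpha=1):
    raise NotImplementedError("OpenCL add kernel goes here")

# Route aten::add.Tensor to opencl_add whenever a tensor carries the
# PrivateUse1 dispatch key.
lib.impl("add.Tensor", opencl_add, "PrivateUse1")
```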

If you wanted to pull this off (it would be quite an undertaking), you could start with your own build and do

torch.ones(5, 5, device="opencl")

This gives you

RuntimeError: 0INTERNAL ASSERT FAILED at "../c10/core/TensorOptions.h":655, please report a bug to PyTorch. This is a grandfathered Caffe2 device type opencl, it shouldn't ever convert to a DispatchKey.  File a bug describing what you were doing if you think this is in error.

After you fix this, you’ll likely bump into the next. :slight_smile: You could also take inspiration from the more recent Vulkan backend (which, as far as I understand, started out special-purpose for mobile but recently also eyes APUs etc.).
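
More recent PyTorch versions also offer a way to sidestep the grandfathered device type altogether: rename the reserved PrivateUse1 backend and use that as your device string. A sketch, with "ocl" as an arbitrary name of my choosing (reusing the literal "opencl" string might clash with the legacy Caffe2 device type):

```python
import torch

# Expose the reserved out-of-tree backend under a custom name.
torch.utils.rename_privateuse1_backend("ocl")

# This should no longer hit the TensorOptions.h assert; instead it should
# complain about missing kernels until you register allocator and factory
# functions for the backend.
torch.ones(5, 5, device="ocl")
```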

That assert would be the first thing to resolve. Once you have that, you could tackle simple ops (y = x1 + x2).
Quite likely, autograd support is not as device-dependent (but I didn’t try, obviously).
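
To illustrate the latter point: the backward formula for addition is plain tensor math, so the same autograd code would serve CPU, CUDA, or a hypothetical OpenCL backend. A sketch (MyAdd is a made-up name for illustration):

```python
import torch

class MyAdd(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x1, x2):
        # The forward dispatches to whatever add kernel the device has.
        return x1 + x2

    @staticmethod
    def backward(ctx, grad_out):
        # d(x1 + x2)/dx1 = d(x1 + x2)/dx2 = 1: nothing device-specific.
        return grad_out, grad_out

x1 = torch.randn(5, 5, requires_grad=True)
x2 = torch.randn(5, 5, requires_grad=True)
MyAdd.apply(x1, x2).sum().backward()
```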

Best regards

Thomas

P.S.: Disclaimer: I am just a random person on the internet and not anyone who has any say whether PyTorch would accept an OpenCL backend if it were there.
