Slides from Structured Kernel presentation

I gave an internal talk on Structured Kernels, a new way of writing kernels in PyTorch. Posting the slides here: Structured Kernels - Google Slides.

Also, check out the actual RFC, which contains a more detailed version of everything in the slides! rfcs/RFC-0005-structured-kernel-definitions.md at rfc-0005 · pytorch/rfcs · GitHub
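
For anyone skimming this thread without opening the slides or the RFC: the core idea is to split each kernel into a device-independent "meta" function that does shape checking and output allocation, and per-device "impl" functions that only do the actual computation. Below is a rough sketch of the pattern using a hypothetical op `my_relu`; the macro names follow the RFC, but see the RFC and the generated headers for the exact signatures.

```cpp
#include <ATen/ATen.h>
// TORCH_META_FUNC / TORCH_IMPL_FUNC come from codegen'd headers once the
// op is marked `structured: True` in native_functions.yaml.

// Meta function: shared across CPU/CUDA (and usable with "meta" tensors).
// It only validates inputs and declares the output; it never reads data.
TORCH_META_FUNC(my_relu) (const at::Tensor& self) {
  TORCH_CHECK(self.scalar_type() != at::kBool,
              "my_relu: bool tensors are not supported");
  set_output(self.sizes(), self.options());
}

// Impl function: by the time this runs, the output has already been
// allocated (or checked, for the out= variant), so this is pure compute.
TORCH_IMPL_FUNC(my_relu_out_cpu) (const at::Tensor& self, const at::Tensor& result) {
  // ... actual CPU kernel, e.g. a TensorIterator-based loop ...
}
```

The functional and out= variants (and in-place, where applicable) are then generated from this single pair, which is what makes the shape/dtype information reusable (e.g. for meta tensors) without running the kernel.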


I love the structured kernel approach and the separation (and wider availability) of the meta information!

One question that is only half-related here is the vision for backwards.

  • Would we also use structured kernels for backwards? This is a question for the future, because I imagine a world where autodiff + JIT would be able to lower a generated backward to use in-place ops where it doesn’t need the intermediates.

  • One step further might be to also have meta-information relating to the backward (will it be native or derived from derivatives, which outputs will have gradients, etc.), but this is likely out of scope here.

Both of these may be far in the future, but as an implementor, a “yes” or “no” on using structured kernels for backward ops is a question that I have.

Best regards

Thomas

I think having backwards information available in some reusable form makes sense; for example, FX has a problem where it can’t actually differentiate graphs because the information is not available. However, derivatives are sort of in “better” shape, because they are already specified separately (derivatives.yaml), compared to shape checking, which historically was interleaved with the kernels themselves. It’s definitely worth pursuing though.
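
(For readers not familiar with it: derivatives.yaml already specifies backward formulas declaratively, one entry per differentiable op, which is why the derivative information is in comparatively good shape. A simplified, illustrative entry looks roughly like this:)

```yaml
# tools/autograd/derivatives.yaml (simplified illustration)
- name: mul.Tensor(Tensor self, Tensor other) -> Tensor
  self: grad * other
  other: grad * self
```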

Is there an issue tracking the current implementation status?
I notice that this is under active development, but an issue listing the ops still to be ported would be helpful.

Lucky you, I made one recently: Port kernels to be structured [tracker] · Issue #55070 · pytorch/pytorch · GitHub
