Hello, I’m working on an extension module that adds operations for geometric algebra.
These operations usually require some essential metadata.
I have built a Python extension module where all of these operations reside, but they are defined for a default data type.
I want to be able to initialize my module with parameters that define my algebra, select the set of functions to use, and generate the metadata I need.
The extension module I’ve built already does that: it can work with different algebras and a bunch of different data types. The data types I have are sparse multivectors, which are similar to sparse matrices in that each entry has a value and a corresponding bitmap (encoding a basis blade of the algebra) that ranges from 0 to the size of the algebra.
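To make the representation concrete, here is a minimal sketch of such a sparse multivector in plain Python. The class name and layout are my own illustration, not the module's actual types: parallel lists hold the coefficients and the bitmaps that encode which basis blade each coefficient belongs to.

```python
from dataclasses import dataclass

@dataclass
class SparseMultivector:
    # Parallel lists: values[i] is the coefficient of the basis blade
    # encoded by bitmaps[i]; each set bit selects one basis vector,
    # so bitmaps range from 0 to 2**dim - 1 (the size of the algebra).
    values: list
    bitmaps: list

    def __add__(self, other):
        # Merge coefficients of matching blades, keyed by bitmap.
        acc = {}
        for v, b in zip(self.values, self.bitmaps):
            acc[b] = acc.get(b, 0.0) + v
        for v, b in zip(other.values, other.bitmaps):
            acc[b] = acc.get(b, 0.0) + v
        bitmaps = sorted(acc)
        return SparseMultivector([acc[b] for b in bitmaps], bitmaps)

# e.g. 1.0*e1 + 2.0*e12 (bitmaps 0b01 and 0b11) plus 3.0*e1
x = SparseMultivector([1.0, 2.0], [0b01, 0b11])
y = SparseMultivector([3.0], [0b01])
z = x + y
```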
I’m having difficulty figuring out how I would integrate everything into a PyTorch C++ extension module.
Hello,
I’ve followed some of those tutorials, that one included, and I was able to write a CUDA and a C++ kernel for the operations I needed.
I guess I’ll have to write things from scratch to be able to work with the C++ API, or I could try to convert tensors to the types I’m using.
I will also try to use sparse tensors for the computations.
Some things I can port directly, such as the algebra generation module.
But I still have to figure out how to choose the best kernels for a chosen algebra. I want to abstract the kernel implementation away from the user.
My idea is to have a bunch of kernels, CUDA and C++, which are chosen depending on the algebra that is selected. I want a generic kernel that works with both sparse and dense multivectors, and code-generated kernels that are computationally cheaper for smaller algebras.
Essentially, I want to write a dynamic module that makes all the kernel choices for you.
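The selection step above could be as simple as a function that picks a kernel based on the algebra's size. This is only a sketch with placeholder functions and a made-up size threshold; in the real module the entries would be the compiled CUDA/C++ kernels:

```python
def generated_product_small(x, y):
    # Placeholder for a code-generated kernel specialized
    # to a small algebra (cheap, fully unrolled).
    return 'codegen kernel'

def generic_sparse_product(x, y):
    # Placeholder for the generic kernel that works
    # for any algebra size.
    return 'generic kernel'

def select_product_kernel(algebra_size, max_codegen_size=8):
    # Prefer the code-generated kernel when the algebra is small
    # enough; otherwise fall back to the generic implementation.
    # The threshold here is an arbitrary illustration.
    if algebra_size <= max_codegen_size:
        return generated_product_small
    return generic_sparse_product
```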
Maybe I have to ask the question in a different way.
I have a bunch of kernels. I’m going to put these kernels into tables of kernels, one table for each type implementation. I want to be able to select a table based on the user’s input.
For example, if I start my Python module with ga = GA(dtype='sparse'), I want ga to have all the kernels available for that type implementation. This means I could call PyTorch modules from ga, e.g. ga.ReLu(x) or ga.multivector([1,2,3,4]). And operations on objects created with this module should call the selected kernel, that is
x = ga.multivector([1,2,3,4])
y = ga.multivector([5,6,7,8])
y*x # This should call the selected kernel
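One way to get exactly this behavior is to bind the whole kernel table once in GA.__init__ and route the * operator through it via __mul__. Everything below is a hypothetical sketch: the table entries are placeholder Python functions (element-wise, not a real geometric product) standing in for the compiled extension kernels.

```python
def sparse_product(a, b):
    # Placeholder standing in for the compiled sparse kernel;
    # element-wise product, NOT a real geometric product.
    return [u * v for u, v in zip(a, b)]

def dense_product(a, b):
    # Placeholder standing in for the compiled dense kernel.
    return [u * v for u, v in zip(a, b)]

# One table per type implementation, selected by the dtype string.
KERNEL_TABLES = {
    'sparse': {'product': sparse_product},
    'dense':  {'product': dense_product},
}

class GA:
    def __init__(self, dtype='sparse'):
        # Select the whole kernel table once, at construction time.
        self.kernels = KERNEL_TABLES[dtype]

    def multivector(self, coeffs):
        return Multivector(self, list(coeffs))

class Multivector:
    def __init__(self, ga, coeffs):
        self.ga = ga
        self.coeffs = coeffs

    def __mul__(self, other):
        # y*x dispatches to the kernel chosen when GA() was created.
        out = self.ga.kernels['product'](self.coeffs, other.coeffs)
        return Multivector(self.ga, out)

ga = GA(dtype='sparse')
x = ga.multivector([1, 2, 3, 4])
y = ga.multivector([5, 6, 7, 8])
z = y * x
```

With this layout, adding a new type implementation is just registering a new table; the user-facing API never changes.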
I’m not sure what the question is. But if you want Tensor objects that behave differently from the vanilla ones, I would suggest defining a subclass of Tensor that does what you want. Some partial docs on this are at Extending PyTorch — PyTorch 2.0 documentation
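For reference, the subclassing approach looks roughly like this: __torch_function__ is the hook where you can intercept specific ops (e.g. swap in a GA kernel for multiplication) and fall back to the default behavior for everything else. The class name is of course just an example.

```python
import torch

class GATensor(torch.Tensor):
    # Every torch function called on a GATensor is routed through
    # __torch_function__; custom behavior can be inserted here,
    # e.g. replacing torch.mul with a GA kernel. This sketch just
    # falls back to the default behavior for all ops.
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        return super().__torch_function__(func, types, args, kwargs)

x = torch.tensor([1.0, 2.0]).as_subclass(GATensor)
y = x + x  # result is still a GATensor
```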