Is there a place within the PyTorch C++ tensor object (which is based on the TensorImpl class) to store some custom data? It could be a "void*" blob that would only be used if a backend wants to keep some tensor-specific data.
While it is possible to keep a separate map in the backend for this kind of information, having it within the tensor would make storing and retrieving it more efficient.
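For reference, the separate-map alternative would be a backend-owned side table keyed by the raw TensorImpl pointer, along the lines of the sketch below. All names here are hypothetical, not an existing PyTorch API; the extra lookup and the lifetime bookkeeping for entries are exactly the overhead we would like to avoid.

```cpp
// Hypothetical backend-owned side table: custom per-tensor data keyed by the
// raw TensorImpl pointer. Illustrative only; not an existing PyTorch API.
#include <mutex>
#include <unordered_map>

#include <c10/core/TensorImpl.h>
#include <c10/util/Optional.h>

struct MyBackendTensorInfo {
  int layout_tag = 0;  // whatever backend-specific data we need per tensor
};

class MyBackendSideTable {
 public:
  void set(const c10::TensorImpl* impl, MyBackendTensorInfo info) {
    std::lock_guard<std::mutex> guard(mutex_);
    map_[impl] = info;
  }

  c10::optional<MyBackendTensorInfo> get(const c10::TensorImpl* impl) const {
    std::lock_guard<std::mutex> guard(mutex_);
    auto it = map_.find(impl);
    if (it == map_.end()) {
      return c10::nullopt;
    }
    return it->second;
  }

  // Must be called when the tensor goes away, or the table keeps stale entries.
  void erase(const c10::TensorImpl* impl) {
    std::lock_guard<std::mutex> guard(mutex_);
    map_.erase(impl);
  }

 private:
  mutable std::mutex mutex_;
  std::unordered_map<const c10::TensorImpl*, MyBackendTensorInfo> map_;
};
```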
This only loosely answers the question, but we do have a strict 1:1 match between the TensorImpl and the Python Tensor object. So if that works for you, you can store any custom data you want on the Python object (as attributes) and it will just work!
We don’t have any extra blob field that I’m aware of in C++ though.
Using a TensorImpl subclass works, but it runs into problems with shallow copies. When doing weight sharing, shallow-copying from a plain CPU TensorImpl into a backend subclass doesn't work, since the metadata copy isn't aware of the subclass-specific fields.
In some cases we have a backend-specific data layout for the physical device memory of the tensors. PyTorch natively supports NCHW and channels-last NHWC, but we want to store different layout information for these tensors and use it within the backend. This is an example of the kind of information we would like to keep within the C++ Tensor object; a rough sketch is below.
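To make this concrete, the descriptor we would want to attach looks roughly like the following. The names and layouts are only illustrative, not our actual implementation:

```cpp
// Illustrative sketch of a backend-specific layout descriptor that goes beyond
// the NCHW / channels-last (NHWC) layouts that PyTorch models natively.
#include <cstdint>

enum class MyDeviceLayout : uint8_t {
  Contiguous,    // plain NCHW
  ChannelsLast,  // NHWC
  Tiled,         // hypothetical device-tiled layout
  Blocked,       // hypothetical blocked layout
};

struct MyDeviceTensorLayout {
  MyDeviceLayout layout = MyDeviceLayout::Contiguous;
  int64_t block_size = 0;  // extra parameters the device layout might need
};
```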
Sure, my description wasn't clear. I meant that an intrusive pointer to BackendMeta would be a new field in ExtraMeta, and the intention is that backends inherit from BackendMeta, producing a type unknown to the framework that contains whatever additional attributes are needed.
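Here is a minimal sketch of what I mean, assuming a BackendMeta base class that derives from c10::intrusive_ptr_target and a single intrusive-pointer field in ExtraMeta. Everything in the `sketch` namespace (and the MyBackendMeta subclass) is illustrative, not the current headers:

```cpp
// Sketch of the proposal, not actual PyTorch code.
#include <c10/util/intrusive_ptr.h>

namespace sketch {

// Framework side: an opaque base type that the framework stores but never interprets.
struct BackendMeta : c10::intrusive_ptr_target {};

// The proposed new field on ExtraMeta (existing ExtraMeta members omitted).
struct ExtraMeta {
  c10::intrusive_ptr<BackendMeta> backend_meta_;
};

// Backend side: a concrete subclass carrying whatever the backend needs,
// e.g. the device-specific layout descriptor from the previous post.
struct MyBackendMeta : BackendMeta {
  int device_layout_tag = 0;  // placeholder for backend-specific layout info
};

}  // namespace sketch

// Usage (illustrative):
//   sketch::ExtraMeta em;
//   em.backend_meta_ = c10::make_intrusive<sketch::MyBackendMeta>();
//   auto* mine = static_cast<sketch::MyBackendMeta*>(em.backend_meta_.get());
```

The framework only ever sees the intrusive pointer to the base type, so it can hold, copy, and release it without knowing anything about the backend's concrete subclass.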
But you've mentioned some sort of linked list or hash map, which, I'm guessing, means the ability to store multiple additional attributes. Why such a design?