Is there a place for storing custom data within a PyTorch Tensor?

Hi,
Is there a place within the PyTorch C++ tensor object (which is based on the TensorImpl class) to store some custom data? It could be a “void*” blob that would only be used if a backend wants to keep some tensor-specific data.

While it is possible to keep a separate map in the backend for this kind of information, having it within the tensor would make storing and retrieving it more efficient.
Regards,
Sujoy

This only loosely answers the question, but we do have a strict 1:1 correspondence between the TensorImpl and the Python Tensor object. So if that works for you, you can store any custom data you want on the Python object (as attributes) and that will just work!

We don’t have any extra blob field in C++ that I’m aware of, though.

If you have your own “DispatchKey” for your tensor, you can create your own subclass of TensorImpl with additional fields. See Extending dispatcher for a new backend in C++ — PyTorch Tutorials 1.13.1+cu117 documentation for some pointers.
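
For illustration, a minimal sketch of what such a subclass could look like (`MyBackendTensorImpl` and `layout_tag_` are hypothetical names; a real backend would also override the shallow-copy hooks and register its own DispatchKey):

```cpp
#include <c10/core/TensorImpl.h>

// Minimal sketch of a backend TensorImpl carrying one extra field.
// "MyBackendTensorImpl" and "layout_tag_" are made-up names.
struct MyBackendTensorImpl : public c10::TensorImpl {
  MyBackendTensorImpl(
      c10::DispatchKeySet key_set,
      const caffe2::TypeMeta data_type,
      c10::optional<c10::Device> device)
      : TensorImpl(key_set, data_type, device) {}

  // Backend-private data, invisible to the rest of PyTorch.
  int64_t layout_tag_ = 0;
};
```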

An example of doing that is SparseTensorImpl: pytorch/SparseTensorImpl.h at master · pytorch/pytorch · GitHub

We also have the helper class OpaqueTensorImpl in case you want to hide the backend implementation completely, e.g. pytorch/MetalTensorImpl.h at master · pytorch/pytorch · GitHub
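
Roughly, OpaqueTensorImpl is templated over an arbitrary handle type that PyTorch never inspects. A hedged sketch of how a backend might instantiate it (the buffer type here is hypothetical):

```cpp
#include <ATen/OpaqueTensorImpl.h>
#include <memory>

// Hypothetical device-side buffer owned by the backend.
struct MyDeviceBuffer {
  void* raw = nullptr;
};

// OpaqueTensorImpl stores the handle as an opaque payload; PyTorch only
// tracks sizes/dtype/device around it.
using MyOpaqueImpl = at::OpaqueTensorImpl<std::shared_ptr<MyDeviceBuffer>>;
```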

Just curious, what kind of information are you trying to attach?

Using a TensorImpl subclass works, but it has issues with shallow copy: when doing weight sharing, a shallow copy from a CPU TensorImpl into a backend subclass doesn’t work.
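
For reference, these are the virtual copy hooks involved (signatures as in c10/core/TensorImpl.h; the failure-mode description below is my reading, hedged):

```cpp
// A TensorImpl subclass must interoperate with these base-class hooks:
//
//   virtual c10::intrusive_ptr<TensorImpl> shallow_copy_and_detach(
//       const c10::VariableVersion& version_counter,
//       bool allow_tensor_metadata_change) const;
//
//   virtual void shallow_copy_from(const c10::intrusive_ptr<TensorImpl>& impl);
//
// When the source is a plain CPU TensorImpl and the destination is a
// backend subclass, only base-class metadata is transferred, so the
// subclass-specific fields are not carried across.
```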

In some cases we have a backend-specific data layout for the physical device memory of tensors. PyTorch natively supports NCHW and channels-last NHWC, but we want to store different layout information for these tensors and use it within the backend. This is one example of the information we would like to keep within the C++ Tensor object.
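
For context, the layouts PyTorch tracks natively are expressed via at::MemoryFormat; anything beyond these (e.g. a blocked or tiled device layout) has no native representation:

```cpp
#include <ATen/ATen.h>

void native_formats_example() {
  // NCHW is the default contiguous format; NHWC is MemoryFormat::ChannelsLast.
  at::Tensor t = at::empty({2, 3, 4, 5});
  at::Tensor nhwc = t.contiguous(at::MemoryFormat::ChannelsLast);
  // A backend-specific blocked/tiled layout has no MemoryFormat value,
  // which is why an extra per-tensor field is needed.
}
```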

Regards,
Sujoy

We could set up some sort of linked list or hash map in ExtraMeta, which is accessible from TensorImpl. Most of the trouble is that I am not sure what the most appropriate data structure for this is.
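
One possible shape for that idea (purely illustrative, not the actual ExtraMeta definition) would be a string-keyed map of opaque entries:

```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical: each backend registers entries under its own key, so
// several backends could attach metadata to the same tensor.
struct CustomMetaEntry {
  virtual ~CustomMetaEntry() = default;
};

struct ExtraMetaSketch {
  std::unordered_map<std::string, std::unique_ptr<CustomMetaEntry>> custom_;
};
```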

Hi,
Just a super basic proposal that would fit the bill for our backend and hopefully push the discussion forward; a code sketch follows the list below.

  1. A struct BackendMeta with a virtual destructor, intended to be subclassed by a backend.
  2. An extra_meta_.backend_meta_ field. For our needs it would ideally be a shared_ptr (or an intrusive equivalent).
  3. A setter and a getter on TensorImpl that reach into extra_meta_.backend_meta_.
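
A hedged sketch of those three points (the declarations in comments are illustrative, not real framework code):

```cpp
#include <c10/util/intrusive_ptr.h>

// 1. Opaque base type, meant to be subclassed by a backend.
struct BackendMeta : c10::intrusive_ptr_target {
  virtual ~BackendMeta() = default;
};

// 2./3. Sketched additions to TensorImpl (not the actual declarations):
//   struct ExtraMeta {
//     c10::intrusive_ptr<BackendMeta> backend_meta_;
//     // ... existing fields ...
//   };
//   c10::intrusive_ptr<BackendMeta> TensorImpl::get_backend_meta();
//   void TensorImpl::set_backend_meta(c10::intrusive_ptr<BackendMeta> m);
```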

@ezyang, can you elaborate on why a map?

If you’re willing to subclass ExtraMeta, you might as well just subclass TensorImpl. The whole point of the map is to avoid having to subclass.

Sure, my description was not clear. I meant that an intrusive pointer to BackendMeta would be a new field in ExtraMeta, and the intention is that backends inherit from BackendMeta, producing a type unknown to the framework that contains whatever additional attributes are needed.
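
For example (hypothetical names, building on the BackendMeta sketch above), a backend could carry its layout descriptor like this while the framework only ever sees the opaque base type:

```cpp
// Hypothetical backend-side subclass; the framework never names this type.
struct MyBackendLayoutMeta : BackendMeta {
  // Whatever the backend needs, e.g. an ID for its blocked device layout.
  int64_t blocked_layout_id = 0;
};
```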

But you mentioned some sort of linked list or hash map, which, I am guessing, implies the ability to store multiple additional attributes. Why such a design?

Please review PR 97429 with my proposal.
In short, the idea is that since there are downsides to overriding TensorImpl, the PR introduces BackendMeta, which is intended to be overridden by backends. The contract would be that the actual implementation is always opaque to the framework.
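
Assuming the setter from that PR and the hypothetical MyBackendLayoutMeta above, backend usage would look roughly like this (a hedged sketch; check the PR for the exact signatures):

```cpp
#include <ATen/ATen.h>
#include <c10/util/intrusive_ptr.h>
#include <utility>

void tag_with_backend_layout(at::Tensor& t) {
  // Attach backend-private metadata; the framework treats it as opaque.
  auto meta = c10::make_intrusive<MyBackendLayoutMeta>();
  meta->blocked_layout_id = 7;
  t.unsafeGetTensorImpl()->set_backend_meta(std::move(meta));
}
```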