Zero-copy data transfer between PyTorch and OpenGL on the GPU by including CUDA's "OpenGL interoperability" in PyTorch
I am working on a real-time machine learning graphics project that uses OpenGL both as an intermediate processing step in the model and to visualize the output. Right now, transferring data between PyTorch and OpenGL is a bottleneck for both training and inference.
Without any additional packages, I can copy data from PyTorch on the GPU to the CPU and then back up to OpenGL on the GPU. This is very simple, but slow.
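For reference, a minimal sketch of this simple-but-slow path, assuming PyOpenGL is installed and an OpenGL context is current; `buffer_id` is a hypothetical, already-created GL buffer object:

```python
# Slow path: GPU -> CPU -> GPU, no extra CUDA packages needed.
import torch

def to_host(tensor: torch.Tensor):
    """Bring a (possibly CUDA) tensor down to a contiguous NumPy array."""
    return tensor.detach().contiguous().cpu().numpy()

def upload_via_cpu(tensor: torch.Tensor, buffer_id: int) -> None:
    """Copy the tensor to host memory, then upload it into a GL buffer."""
    from OpenGL import GL  # lazy import: only useful with a live GL context
    host = to_host(tensor)                                        # GPU -> CPU copy
    GL.glBindBuffer(GL.GL_ARRAY_BUFFER, buffer_id)
    GL.glBufferSubData(GL.GL_ARRAY_BUFFER, 0, host.nbytes, host)  # CPU -> GPU copy
    GL.glBindBuffer(GL.GL_ARRAY_BUFFER, 0)
```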
Alternatively, I can use one of the CUDA bindings for Python together with a separate CUDA Toolkit installation to avoid the round trip, but this is quite complex, and the many competing tools and approaches make the ecosystem hard to navigate.
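One such route is PyCUDA's GL interop: CUDA registers the GL buffer once, then maps it to obtain a raw device pointer, with no host round trip. A sketch, assuming PyCUDA was built with GL support and a GL context is current (`buffer_id` is again hypothetical):

```python
# Zero-copy path via PyCUDA's OpenGL interop (pycuda.gl).
def with_mapped_gl_buffer(buffer_id, fn):
    """Map a GL buffer into CUDA and call fn(device_ptr, size) on it."""
    import pycuda.gl  # lazy import: needs a CUDA-capable GL environment
    reg = pycuda.gl.RegisteredBuffer(int(buffer_id))  # register once, reuse per frame
    mapping = reg.map()
    try:
        device_ptr, size = mapping.device_ptr_and_size()
        return fn(device_ptr, size)  # the pointer is only valid while mapped
    finally:
        mapping.unmap()
```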
The main challenge for this kind of binding is building a sound ownership system.
But if you have a simple case where you just want to share the data, you should be able to rewrap it in either direction: grab the data_ptr() and the size/stride/dtype metadata from one side and create an object on the other side with them.
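For the OpenGL-to-PyTorch direction, a sketch of such a rewrap, assuming you already hold a raw CUDA device pointer (e.g. from a GL buffer mapped into CUDA): PyTorch understands the `__cuda_array_interface__` protocol, so a tiny wrapper object around the pointer plus shape/dtype metadata is enough. `device_ptr` here is hypothetical.

```python
# Rewrap a raw CUDA device pointer as a PyTorch tensor without copying.
class CudaArrayView:
    """Expose a raw device pointer through __cuda_array_interface__."""
    def __init__(self, device_ptr: int, shape, typestr: str = "<f4"):
        self.__cuda_array_interface__ = {
            "shape": tuple(shape),
            "typestr": typestr,                # e.g. "<f4" = little-endian float32
            "data": (int(device_ptr), False),  # (pointer, read_only flag)
            "version": 2,
        }

# Zero-copy wrap (needs torch, a CUDA device, and a valid pointer):
#   tensor = torch.as_tensor(CudaArrayView(device_ptr, (height, width, 4)),
#                            device="cuda")
```

Note that the resulting tensor does not own the memory: you must keep the mapping alive (and the buffer mapped) for as long as the tensor is in use.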
I would recommend reading this file, which contains the two functions converting to and from NumPy; an OpenGL equivalent would look exactly the same: pytorch/torch/csrc/utils/tensor_numpy.cpp at main · pytorch/pytorch · GitHub