Zero-copy data transfer between PyTorch and OpenGL on the GPU by exposing CUDA's "OpenGL interoperability" in PyTorch
I am working on a real-time machine learning graphics project that uses OpenGL both as an intermediate processing step in the model and to visualize the output. Right now, transferring data between PyTorch and OpenGL is a bottleneck for both training and inference.
Without any additional packages, I can copy data from PyTorch on CUDA to the CPU and then back up to OpenGL on the GPU; this is very simple but slow.
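For reference, the slow path looks roughly like this (a minimal PyOpenGL sketch; the 512×512 RGBA size and the texture setup are made up for illustration, and a current GL context is assumed):

```python
import torch
from OpenGL.GL import (
    GL_TEXTURE_2D, GL_RGBA, GL_RGBA32F, GL_FLOAT,
    glGenTextures, glBindTexture, glTexImage2D, glTexSubImage2D,
)

# One-time texture setup (assumes a GL context is already current).
tex = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, tex)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 512, 512, 0, GL_RGBA, GL_FLOAT, None)

# Every frame: GPU -> CPU -> GPU round trip.
tensor = torch.rand(512, 512, 4, device="cuda")  # RGBA float image on the GPU
pixels = tensor.cpu().numpy()                    # device-to-host copy (the bottleneck)
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512,
                GL_RGBA, GL_FLOAT, pixels)       # host-to-device upload
```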
Alternatively, I can use CUDA bindings for Python together with a separate CUDA Toolkit installation to avoid the round trip, but this is quite complex, and the many competing tools and approaches make it hard to navigate.
The main challenge for this kind of binding is designing a simple ownership system.
But in the simple case where you just want to share the data, you should be able to rewrap it in either direction by grabbing the data_ptr() and the size/stride/dtype metadata from one side and creating an object on the other side with them.
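As a concrete illustration of that rewrapping idea, here is a minimal sketch going from PyTorch to CuPy with no copy (CuPy stands in for the OpenGL side; the names below are ours, not a fixed recipe):

```python
import cupy as cp
import torch

t = torch.arange(12, dtype=torch.float32, device="cuda").reshape(3, 4)

# Wrap the tensor's CUDA allocation without copying. UnownedMemory takes no
# ownership, so passing `t` as owner keeps the tensor alive with the array.
nbytes = t.numel() * t.element_size()
mem = cp.cuda.UnownedMemory(t.data_ptr(), nbytes, owner=t)
a = cp.ndarray(
    shape=tuple(t.shape),
    dtype=cp.float32,
    memptr=cp.cuda.MemoryPointer(mem, 0),
    strides=tuple(s * t.element_size() for s in t.stride()),  # bytes, not elements
)

a += 1       # writes through to the tensor's memory...
print(t[0])  # ...so the torch tensor sees the update: same storage
```

(DLPack, via torch.utils.dlpack and cupy.from_dlpack, does the same handoff with the ownership bookkeeping handled for you.)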
I would recommend reading this file, which contains the two functions converting to and from NumPy; an OpenGL equivalent would look almost exactly the same: pytorch/torch/csrc/utils/tensor_numpy.cpp at main · pytorch/pytorch · GitHub
Hi, I stumbled upon this post while looking for a way to manage visualization of ML weights without copying huge tensors. I worked from this cupy ↔ GL example to get to this file, which displays the content of a PyTorch Tensor as an OpenGL texture without making any copy:
The main limitation is that you have to initialize the OpenGL buffer as you allocate the tensor; you can't directly expose an already allocated tensor. But that's a good start.
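For anyone trying to reproduce this, the overall flow is roughly the following (an untested sketch using pycuda.gl, PyOpenGL, and CuPy; the buffer names and image size are made up, a GL context must already be current, and pycuda is pointed at PyTorch's primary CUDA context so both sides see the same device memory):

```python
import torch
import cupy as cp
from OpenGL.GL import (
    GL_PIXEL_UNPACK_BUFFER, GL_STREAM_DRAW,
    glGenBuffers, glBindBuffer, glBufferData,
)
import pycuda.driver as cuda_drv
import pycuda.gl as cuda_gl

# Point pycuda at PyTorch's primary CUDA context instead of creating a new one.
torch.cuda.init()
cuda_drv.init()
ctx = cuda_drv.Device(0).retain_primary_context()
ctx.push()

H, W, C = 512, 512, 4        # RGBA float image (made-up size)
nbytes = H * W * C * 4

# 1. The GL buffer has to exist before the tensor (the limitation above).
pbo = glGenBuffers(1)
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo)
glBufferData(GL_PIXEL_UNPACK_BUFFER, nbytes, None, GL_STREAM_DRAW)
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0)

# 2. Register the buffer with CUDA once.
registered = cuda_gl.RegisteredBuffer(int(pbo))

# 3. Map it and wrap the mapped device pointer as a torch tensor; writes to
#    the tensor land directly in the GL buffer. Only valid while mapped.
mapping = registered.map()
dev_ptr, size = mapping.device_ptr_and_size()
mem = cp.cuda.UnownedMemory(dev_ptr, size, owner=mapping)
arr = cp.ndarray((H, W, C), dtype=cp.float32,
                 memptr=cp.cuda.MemoryPointer(mem, 0))
tensor = torch.as_tensor(arr, device="cuda")      # zero-copy view of the PBO

tensor.copy_(torch.rand(H, W, C, device="cuda"))  # e.g. write model output here

mapping.unmap()
# The PBO can now source glTexSubImage2D to refresh a display texture,
# without the data ever leaving the GPU.
```

The tensor is only a valid view between map() and unmap(), which is exactly the constraint described above.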