Hi
I am a n00b in CUDA and have a couple of questions.
Is there a way to
- activate a CUcontext from PyTorch? (Perhaps just sending a dummy Tensor to CUDA would do? See the sketch after this list.)
- retrieve the flags that PyTorch used when activating the CUcontext?
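For the first point, this is roughly what I have in mind; a minimal, untested sketch with libtorch, under the assumption that allocating any tensor on a CUDA device triggers PyTorch's lazy context initialization:

```cpp
#include <torch/torch.h>

// Assumption: allocating any tensor on a CUDA device makes PyTorch
// lazily initialize (and thus activate) that device's primary context.
void activate_cuda_context_via_pytorch() {
  torch::Tensor dummy =
      torch::zeros({1}, torch::TensorOptions().device(torch::kCUDA, 0));
  (void)dummy;  // only the side effect (context creation) matters
}
```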
Background:
Following up on Performance of TorchAudio's GPU Video Decoder, I am looking for a way to share a CUcontext (the CUDA primary device context, IIUC) between PyTorch and FFmpeg.
After messing with the FFmpeg API, I found that there is a flag that tells FFmpeg to use the primary device context: FFmpeg: libavutil/hwcontext_cuda.c Source File
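For reference, this is roughly how I am passing that flag; a sketch assuming a recent FFmpeg where `AV_CUDA_USE_PRIMARY_CONTEXT` is declared in `libavutil/hwcontext_cuda.h`:

```cpp
extern "C" {
#include <libavutil/hwcontext.h>
#include <libavutil/hwcontext_cuda.h>
}

// Ask FFmpeg to retain the device's primary context instead of
// creating a fresh CUcontext of its own.
AVBufferRef* create_cuda_device_ctx() {
  AVBufferRef* device_ref = nullptr;
  int err = av_hwdevice_ctx_create(&device_ref, AV_HWDEVICE_TYPE_CUDA,
                                   "0" /* device ordinal */, nullptr,
                                   AV_CUDA_USE_PRIMARY_CONTEXT);
  return err < 0 ? nullptr : device_ref;
}
```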
When I tried it, I observed three modes of operation (a driver-API query for inspecting the context state is sketched after this list):
- Success, with the shared CUcontext confirmed by the reduced memory usage.
- Failure, with a message saying there is no active primary device context.
- Failure, with a message saying that the flags I gave (default) differ from those of the active primary device context.
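To see which of these states the primary context is in, it can be queried directly with the CUDA driver API; a minimal sketch using `cuDevicePrimaryCtxGetState` (error checking omitted):

```cpp
#include <cuda.h>
#include <cstdio>

// Report whether the primary context of device 0 is active,
// and with which creation flags (e.g. CU_CTX_SCHED_* bits).
void inspect_primary_ctx() {
  cuInit(0);
  CUdevice dev;
  cuDeviceGet(&dev, 0);

  unsigned int flags = 0;
  int active = 0;
  cuDevicePrimaryCtxGetState(dev, &flags, &active);
  std::printf("primary ctx: active=%d flags=0x%x\n", active, flags);
}
```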
So I am thinking of doing the following (combined sketch after this list):
- If PyTorch has not activated the device's primary context yet, activate it from within PyTorch.
- Retrieve the context flags and tell FFmpeg to use the same flags when creating its CUcontext.
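Putting the pieces together, the flow I imagine looks like this; an untested sketch that assumes the flag mismatch above is what FFmpeg's primary-context path complains about:

```cpp
#include <torch/torch.h>
#include <cuda.h>
extern "C" {
#include <libavutil/hwcontext.h>
#include <libavutil/hwcontext_cuda.h>
}

AVBufferRef* share_primary_ctx_with_ffmpeg() {
  // 1. Activate: any CUDA tensor allocation initializes the primary context.
  torch::zeros({1}, torch::TensorOptions().device(torch::kCUDA, 0));

  // 2. Retrieve the flags the now-active primary context was created with.
  cuInit(0);
  CUdevice dev;
  cuDeviceGet(&dev, 0);
  unsigned int flags = 0;
  int active = 0;
  cuDevicePrimaryCtxGetState(dev, &flags, &active);
  // (These flags would then need to match whatever FFmpeg expects.)

  // 3. Hand the same primary context over to FFmpeg.
  AVBufferRef* device_ref = nullptr;
  if (av_hwdevice_ctx_create(&device_ref, AV_HWDEVICE_TYPE_CUDA, "0",
                             nullptr, AV_CUDA_USE_PRIMARY_CONTEXT) < 0)
    return nullptr;
  return device_ref;
}
```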
What is the proper way to do this with the PyTorch C++ API?
Thanks,