I’m using PyTorch in a project that relies on pre-trained AI models. The previous version was 2.2.2, and I updated to 2.4.1 (the current release) because a Black Duck scan flagged the library as a high security risk due to a reported vulnerability. Even after the update, the vulnerability is still reported:
" Description
PyTorch is vulnerable to remote code execution (RCE) via command injection within the torch.distributed.rpc framework. An attacker could exploit this in order to remotely attack master nodes that are starting distributed training."
Is there any action that can be taken to mitigate this security vulnerability?
As Nikita mentioned, torch.distributed features should only be used in a trusted environment (as covered in SECURITY.md). So while Black Duck is doing its job, if you are running in a trusted environment you can use PyTorch as-is.
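As a concrete illustration, here is a minimal sketch of keeping the RPC rendezvous confined to the local machine so the agent is not reachable from untrusted networks. The worker name, address, and port (`127.0.0.1:29500`) are assumptions for a single-host setup, not part of the original discussion:

```python
import torch.distributed.rpc as rpc

# Bind the rendezvous to the loopback interface so the RPC agent
# only listens on the local machine (assumed single-host setup).
options = rpc.TensorPipeRpcBackendOptions(
    init_method="tcp://127.0.0.1:29500",
)

rpc.init_rpc(
    "worker0",            # hypothetical worker name
    rank=0,
    world_size=1,
    rpc_backend_options=options,
)

# ... issue RPCs between trusted processes here ...

rpc.shutdown()
```

If your project only loads pre-trained models and never initializes torch.distributed.rpc, the vulnerable code path is not exercised at all, which is another point worth noting in the alert.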
I added a comment referencing this discussion and SECURITY.md to the code alert, and the Black Duck Rollout team removed the security-risk flag from the PyTorch package.