Hi team, I’m tracing a 2D-parallel graph with FakeTensor and FakeTensorMode, but it throws the error below:
Exception has occurred: UnsupportedOperatorException
c10d.allreduce_.default
File "/usr/local/conda/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1404, in dispatch
r = func(*args, **kwargs)
File "/usr/local/conda/lib/python3.9/site-packages/torch/_ops.py", line 437, in __call__
return self._op(*args, **kwargs or {})
NotImplementedError: Could not run 'c10d::allreduce_' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'c10d::allreduce_' is only available for these backends: [CPU, CUDA, PrivateUse1, SparseCPU, SparseCUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
So, is there any way to work around this error? Thanks!
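One workaround I'm considering is registering a Meta kernel for the missing op via `torch.library`, so that FakeTensorMode can propagate shapes without running the real collective. Below is a minimal sketch of that pattern using a toy op (`mylib::double` is hypothetical; I'm not sure of the exact schema for `c10d::allreduce_`, which also takes a ProcessGroup argument, so this only illustrates the registration mechanism):

```python
import torch
from torch.library import Library
from torch._subclasses.fake_tensor import FakeTensorMode

# Toy op standing in for an op that lacks a Meta kernel.
# The same pattern would apply to c10d::allreduce_ (with its real schema).
lib = Library("mylib", "DEF")
lib.define("double(Tensor x) -> Tensor")

def double_cpu(x):
    # Real computation on CPU
    return x * 2

lib.impl("double", double_cpu, "CPU")

def double_meta(x):
    # Meta kernel: only propagate shape/dtype, no actual compute.
    # This is what FakeTensorMode needs to trace through the op.
    return torch.empty_like(x)

lib.impl("double", double_meta, "Meta")

# With the Meta kernel registered, tracing under FakeTensorMode no longer
# raises UnsupportedOperatorException for this op.
with FakeTensorMode():
    t = torch.empty(4)
    out = torch.ops.mylib.double(t)
    print(out.shape)  # torch.Size([4])
```

Would this be the right direction for `c10d::allreduce_`, or is the recommended path to switch to the traceable functional collectives (`torch.distributed._functional_collectives`) instead?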