Is there any plan to support FP8 as a datatype in PyTorch?
There's no immediate plan to support this in the quantization workflow. I also don't think there's any immediate plan to support it as an unquantized tensor dtype.
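For context, even without native support you can emulate FP8 numerics in software. The sketch below (my own illustration, not a PyTorch API) rounds a Python float to the nearest value representable in the E4M3 format commonly discussed for FP8: 1 sign bit, 4 exponent bits (bias 7), 3 mantissa bits, saturating at the maximum finite value 448:

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round a float to the nearest FP8 E4M3-representable value (sketch)."""
    if x == 0.0 or math.isnan(x):
        return x
    s = math.copysign(1.0, x)
    a = abs(x)
    if a > 448.0:              # max finite E4M3 value; saturate
        return s * 448.0
    _, e = math.frexp(a)       # a = m * 2**e with m in [0.5, 1)
    ue = max(e - 1, -6)        # clamp to min normal exponent (subnormal range below)
    step = 2.0 ** (ue - 3)     # 3 mantissa bits -> 8 steps per binade
    return s * round(a / step) * step

print(quantize_e4m3(0.3))      # snaps to the nearest E4M3 grid point
```

For example, 0.3 quantizes to 0.3125 (binary 1.010 × 2⁻²), and anything above 448 saturates to 448.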