Supporting new dtypes in PyTorch

Thanks for the great summary. What are the current thoughts on adding more complex dtypes? cuFFT 12.3, for example, now offers a CUDA_C_16BF dtype, which would be equivalent to our current complex32, but with bfloat16s instead of float16s.
Has there been any discussion about how we could make it easier to add more complex types? Right now, it seems to me that every new dtype (bfloat16, maybe float8 in the future?) requires its own complex equivalent. Maybe we should make the complex dtype more composable, so that a new real dtype gets complex support without needing a bespoke complex dtype each time.
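To illustrate the gap: since there's currently no complex dtype with bfloat16 components (and `torch.view_as_complex` doesn't accept bfloat16 input), the only option today is to emulate one by hand. A rough sketch of that workaround, keeping real and imaginary parts as separate bfloat16 tensors (the `complex_mul` helper here is just for illustration, not an existing API):

```python
import torch

def complex_mul(a_re, a_im, b_re, b_im):
    # (a_re + i*a_im) * (b_re + i*b_im), componentwise in bfloat16
    return a_re * b_re - a_im * b_im, a_re * b_im + a_im * b_re

# a = 1 + 0i, b = 2 + 1i, stored as bfloat16 component pairs
a_re = torch.ones(4, dtype=torch.bfloat16)
a_im = torch.zeros(4, dtype=torch.bfloat16)
b_re = torch.full((4,), 2.0, dtype=torch.bfloat16)
b_im = torch.ones(4, dtype=torch.bfloat16)

re, im = complex_mul(a_re, a_im, b_re, b_im)
print(re.dtype)  # torch.bfloat16
```

A composable complex dtype would make this kind of manual bookkeeping (and the matching interop with something like CUDA_C_16BF) unnecessary.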