Hi,
I’m seeing that torch’s summation of a bfloat16 tensor is more accurate than my naive summation, i.e. x.sum() is closer to the true value than accumulating x[0] + … + x[n-1] in order. Could somebody please point me towards where in the repo the code for torch’s summation lives? Thanks!
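For reference, a minimal sketch of the comparison I mean (the tensor size and seed here are arbitrary, and the float64 sum is just used as a high-precision reference):

```python
import torch

torch.manual_seed(0)
x = torch.rand(10_000).to(torch.bfloat16)  # random bfloat16 data

reference = x.double().sum()  # high-precision reference value

# Naive sequential accumulation entirely in bfloat16: x[0] + ... + x[n-1]
naive = torch.tensor(0.0, dtype=torch.bfloat16)
for v in x:
    naive = naive + v

builtin = x.sum()  # torch's built-in reduction

print("naive   error:", (naive.double() - reference).abs().item())
print("builtin error:", (builtin.double() - reference).abs().item())
```

The built-in sum consistently shows a much smaller error than the sequential loop, which is what prompted the question.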