Estimate theoretical FLOPs of the backward pass of a DNN

Hi, as part of my research, I want to estimate the total FLOPs of a DNN theoretically, counted as the total number of multiplication operations performed in the network during the forward and backward passes.

I'm using LeNet-5 as my model for simplicity. I understand how to count FLOPs for the forward pass. The formulas for the forward pass (with bs denoting the batch size of the input) are as follows (a small code sketch follows the list):

  1. Fully connected layer with x input neurons and y output neurons = x * y * bs
  2. Conv2d layer with input size (a, b, c), filter size (f1, f2, f3), and conv output size (x, y, z) = bs * (f1*f2*f3) * num_filters * (x*y), where z = num_filters
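For concreteness, here's a small Python sketch of these two formulas (the helper names and the LeNet-5 layer shapes are just illustrative choices of mine):

```python
def fc_forward_mults(x_in, y_out, bs):
    # one multiplication per (input neuron, output neuron) pair, per sample
    return x_in * y_out * bs

def conv2d_forward_mults(f1, f2, f3, num_filters, x_out, y_out, bs):
    # each output position is a dot product of length f1*f2*f3, per filter, per sample
    return bs * (f1 * f2 * f3) * num_filters * (x_out * y_out)

bs = 4
# First conv of LeNet-5: 1x32x32 input, six 5x5x1 filters, 28x28 output map
print(conv2d_forward_mults(5, 5, 1, 6, 28, 28, bs))  # 470400 multiplications
# First fully connected layer of LeNet-5: 400 -> 120 neurons
print(fc_forward_mults(400, 120, bs))                # 192000 multiplications
```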

I'm unable to come up with similar formulas for the backward pass.

I also looked at some tools, such as the GitHub gist "Horace's flop counter, but with flops metric fixed correctly", to calculate FLOPs empirically. According to that tool, the backward-pass formulas roughly look like this:

  1. Fully connected layer with x input neurons and y output neurons = 2 * x * y * bs
  2. Conv2d layer with input size (a, b, c), filter size (f1, f2, f3), and conv output size (x, y, z) = unable to figure out! (A sketch of how I measure this empirically follows the list.)
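In case it helps, here's a minimal sketch of how the empirical numbers can be measured, assuming a recent PyTorch build that ships torch.utils.flop_counter.FlopCounterMode (which counts like the gist above); the layer shapes are just examples. Note that FlopCounterMode counts a multiply-add as 2 FLOPs, so I halve its totals to compare against the multiplication counts above:

```python
import torch
import torch.nn as nn
from torch.utils.flop_counter import FlopCounterMode

bs = 4
# Example layers (first conv and first fully connected layer of LeNet-5)
conv = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, bias=False)
fc = nn.Linear(400, 120, bias=False)

# requires_grad on the input so the backward pass also computes grad_input
x = torch.randn(bs, 1, 32, 32, requires_grad=True)

with FlopCounterMode(display=False) as fwd:
    out = conv(x)
with FlopCounterMode(display=False) as bwd:
    out.sum().backward()
print("conv fwd mults:", fwd.get_total_flops() // 2)  # 470400, matching the formula
print("conv bwd mults:", bwd.get_total_flops() // 2)  # roughly 2x forward (grad_input + grad_weight)

h = torch.randn(bs, 400, requires_grad=True)
with FlopCounterMode(display=False) as fwd:
    y = fc(h)
with FlopCounterMode(display=False) as bwd:
    y.sum().backward()
print("fc fwd mults:", fwd.get_total_flops() // 2)  # 400 * 120 * bs = 192000
print("fc bwd mults:", bwd.get_total_flops() // 2)  # 2 * 400 * 120 * bs: one matmul for grad_input, one for grad_weight
```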

But I'm still unable to derive how many FLOPs a Conv2d or a fully connected layer requires in the backward pass, so it would be very helpful if anyone could share formulas like the above for the exact number of backward-pass FLOPs.

Thanks a lot for your time!

This sort of question may be better suited for the PyTorch forums: https://discuss.pytorch.org
