To align more closely with NVIDIA hardware support for each CUDA toolkit version and to keep binary size under control, we are updating our policy on which GPU architectures are supported in each build.
Starting with the PyTorch 2.8 release, you will want to pay closer attention to which GPU you are running on when choosing a CUDA toolkit version. We are removing Maxwell and Pascal GPU support from the CUDA 12.8 binaries and reintroducing the deprecated Volta GPU support.
See the table below for the PyTorch 2.8 support matrix.
You can refer to this to determine which architecture a given card uses.
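For a quick check on your own machine, PyTorch can report both your GPU's compute capability and the architectures your installed binary was compiled for. A minimal sketch (output will vary by build and device):

```python
import torch

if torch.cuda.is_available():
    # (major, minor) compute capability of the current GPU,
    # e.g. (6, 1) for Pascal, (7, 0) for Volta, (8, 0) for Ampere.
    major, minor = torch.cuda.get_device_capability()
    print(f"GPU: {torch.cuda.get_device_name()} (sm_{major}{minor})")

    # Architectures the installed PyTorch binary was built for.
    print("Built-in architectures:", torch.cuda.get_arch_list())
else:
    print("No CUDA-capable GPU detected by this PyTorch build.")
```

If your card's sm value does not appear in that list, the installed wheel does not ship precompiled kernels for it.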
Please see this issue for more context, and raise questions and concerns there.
Cheers,
Team PyTorch