What is the reason for using old ABI on Linux?

I’m developing an OpenCL backend. Until now I used a custom build, and for some reason it had CXX11_ABI set to 1 — probably because of the custom build. Now, trying to work against the latest PyTorch with its better out-of-tree backend support and to use PyTorch from pip directly, I found that it is all built with -D_GLIBCXX_USE_CXX11_ABI=0
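For reference, libstdc++ exposes this choice as a preprocessor macro, so you can check at compile time which layout your own translation units will get. A minimal sketch (GCC/libstdc++ only — on other standard libraries the macro is simply undefined):

```cpp
#include <string>  // any libstdc++ header defines _GLIBCXX_USE_CXX11_ABI

// Returns 1 when libstdc++'s dual ABI selects the new (C++11) std::string
// layout, 0 when the old pre-GCC-5 layout is in effect (or when the macro
// is undefined, i.e. a non-libstdc++ toolchain).
int cxx11_abi_enabled() {
#if defined(_GLIBCXX_USE_CXX11_ABI) && _GLIBCXX_USE_CXX11_ABI
    return 1;
#else
    return 0;
#endif
}
```

On the Python side, `torch.compiled_with_cxx11_abi()` reports which ABI the installed wheel was built with.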

That means each and every C++ library I want to use in the backend must be compiled with the old ABI — including any 3rd-party C++ libraries I may want to use.


  1. What is the reason to use the old ABI?
  2. Are there plans to change it, since it breaks compatibility with virtually any C++ library around?



I don’t remember why that’s still the case. It’s possible that we used to target old compilers/systems, but that should be gone by now. @malfet might be able to comment more

Some related information here: Status of pip wheels with _GLIBCXX_USE_CXX11_ABI=1 · Issue #51039 · pytorch/pytorch · GitHub
I.e. as long as we have to support CentOS 7/PEP 599, we have to release wheels with the old ABI (the new one is just not available on that ancient OS)

Please note that conda does not have to abide by that ancient standard, and conda packages are likely already released with CXX11 ABI support

I don’t know how many users of CentOS 7-based distros there are, but we could try to switch nightlies to manylinux_2_24 and target the next release to use those. Or perhaps release both manylinux_2_24 and manylinux2014 (aka manylinux_2_17) side by side


Thanks a lot for the relevant pointers.

At this point dlprim_core — the core deep learning library I use in the OpenCL backend (like cuDNN for OpenCL) — doesn’t use any external C++ libraries, so I can work around this by building without the C++11 ABI. If the situation changes, I’ll watch what happens in that issue
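Since the ABI flag has to be consistent across every translation unit in the backend, one way to automate the workaround is to ask the installed wheel which ABI it uses and propagate that flag through the build. A hedged CMake sketch — `torch.compiled_with_cxx11_abi()` is a real PyTorch API, the rest is illustrative:

```cmake
# Ask the installed torch wheel which ABI it was built with (prints 0 or 1)
execute_process(
  COMMAND python -c "import torch; print(int(torch.compiled_with_cxx11_abi()))"
  OUTPUT_VARIABLE TORCH_CXX11_ABI
  OUTPUT_STRIP_TRAILING_WHITESPACE)

# Compile the whole backend with the same flag so std::string layouts match
add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=${TORCH_CXX11_ABI})
```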