As part of torch::deploy, we’ve been prototyping a way to load multiple Python interpreters into a single process, using a custom shared-library loader to get around the global interpreter lock. With this approach, many C extension libraries such as NumPy work out of the box.