As part of torch::deploy, we've been prototyping a way to load multiple Python interpreters into a single process, using a custom shared-library loader to get around the global interpreter lock (GIL). With this approach, many C extension libraries, such as NumPy, work out of the box.
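The core trick is that each interpreter gets its own private copy of the Python runtime (and its C extensions), so each copy has its own GIL and its own global state. torch::deploy uses a custom loader for this, but glibc's `dlmopen` linker namespaces illustrate the same idea. The sketch below is illustrative only, and it substitutes zlib for libpython so it stays small; it assumes a Linux system with glibc and zlib installed, and the `LM_ID_NEWLM` / `RTLD_NOW` constants are hard-coded glibc values:

```python
import ctypes
import ctypes.util

# dlmopen/dlsym live in libc (glibc >= 2.34) or libdl; CDLL(None)
# resolves them from the symbols already loaded into this process.
libc = ctypes.CDLL(None)

dlmopen = libc.dlmopen
dlmopen.restype = ctypes.c_void_p
dlmopen.argtypes = [ctypes.c_long, ctypes.c_char_p, ctypes.c_int]

dlsym = libc.dlsym
dlsym.restype = ctypes.c_void_p
dlsym.argtypes = [ctypes.c_void_p, ctypes.c_char_p]

LM_ID_NEWLM = -1  # glibc: create a brand-new link-map namespace
RTLD_NOW = 2      # resolve all symbols at load time

# zlib stands in for libpython here: any shared library with
# internal global state would do.
path = ctypes.util.find_library("z").encode()

# Load two independent copies of the same library, each in its
# own namespace with its own copy of every global variable.
h1 = dlmopen(LM_ID_NEWLM, path, RTLD_NOW)
h2 = dlmopen(LM_ID_NEWLM, path, RTLD_NOW)
assert h1 and h2 and h1 != h2

# The same symbol resolves to a different address in each copy,
# confirming the code (and its globals) are duplicated, not shared.
addr1 = dlsym(h1, b"zlibVersion")
addr2 = dlsym(h2, b"zlibVersion")
assert addr1 != addr2
print("two isolated copies loaded")
```

Loading libpython this way would give each copy an independent GIL, which is the property torch::deploy exploits; its actual loader also handles details (such as thread-local storage and interposed allocators) that `dlmopen` alone does not.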