As part of torch::deploy, we’ve been prototyping a way to load multiple Python interpreters into a single process, using a custom shared-library loader to get around the global interpreter lock. With this approach, many C extension libraries such as NumPy work out of the box.
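To make the idea concrete, here is a minimal sketch of the general technique (not torch::deploy's actual loader) using glibc's `dlmopen`: each call with `LM_ID_NEWLM` loads an independent copy of libpython into its own linker namespace, so each copy carries its own global state, including its own GIL. The libpython path is an assumption for illustration, and in practice `dlmopen` has known limits (e.g. a small cap on namespaces and TLS issues), which is part of why a custom loader is needed.

```cpp
// multi_interp.cpp -- illustrative sketch only, NOT torch::deploy's loader.
// Each dlmopen(LM_ID_NEWLM, ...) loads an independent copy of libpython
// into a fresh linker namespace, giving it its own globals and its own GIL.
#include <dlfcn.h>
#include <cstdio>
#include <cstdlib>

struct Interpreter {
  void* handle;
  void (*initialize)();
  int (*run_simple_string)(const char*);
};

static Interpreter load_interpreter(const char* libpython_path) {
  // LM_ID_NEWLM: new namespace for this copy of libpython.
  // RTLD_LOCAL: keep its symbols from leaking into other namespaces.
  void* handle = dlmopen(LM_ID_NEWLM, libpython_path, RTLD_NOW | RTLD_LOCAL);
  if (!handle) {
    std::fprintf(stderr, "dlmopen failed: %s\n", dlerror());
    std::exit(1);
  }
  Interpreter interp;
  interp.handle = handle;
  interp.initialize =
      reinterpret_cast<void (*)()>(dlsym(handle, "Py_Initialize"));
  interp.run_simple_string =
      reinterpret_cast<int (*)(const char*)>(dlsym(handle, "PyRun_SimpleString"));
  return interp;
}

int main() {
  // Hypothetical path -- adjust for your system and Python version.
  const char* libpython = "/usr/lib/x86_64-linux-gnu/libpython3.10.so.1.0";

  // Two fully independent interpreters in one process, each with its own GIL.
  Interpreter a = load_interpreter(libpython);
  Interpreter b = load_interpreter(libpython);

  a.initialize();
  b.initialize();

  a.run_simple_string("print('hello from interpreter A')");
  b.run_simple_string("print('hello from interpreter B')");
  return 0;
}
```

Because the two copies of libpython share no global state, each interpreter can run Python code on its own thread without contending for the other's GIL, which is the property the custom loader is designed to preserve at scale.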