As part of torch::deploy, we’ve been prototyping a custom shared-library loader that loads multiple Python interpreters into a single process, as a way around the global interpreter lock (GIL). With this approach, many C extension libraries, such as NumPy, work out of the box.