As part of torch::deploy, we've been prototyping a way to load multiple Python interpreters into a single process, using a custom shared-library loader to get around the global interpreter lock (GIL). With this approach, many C extension libraries such as NumPy work out of the box.