I’m watching and learning from the PyTorch Conference 2023 talks, and I keep running into these concepts. They all seem to be related to deploying models.
To the best of my understanding:
- `torch.export` is a functionality in PyTorch for exporting a computation graph to a pre-defined format. This corresponds to the function `torch.export`.
- TorchServe is a framework for serving PyTorch models at scale. It might use models from `torch.export`. This corresponds to the repo https://github.com/pytorch/serve (“Serve, optimize and scale PyTorch models in production”).
- ExecuTorch is another framework, for running PyTorch models on mobile/edge devices. It might use models from `torch.export`. This corresponds to the repo https://github.com/pytorch/executorch (“End-to-end solution for enabling on-device AI across mobile and edge devices for PyTorch models”).
- AOTInductor: from the talk https://www.youtube.com/watch?v=w7d4oWzwZ0c and the slides, it seems to be an ahead-of-time version of TorchInductor.
The most confusing one to me is AOTInductor. It seems to be more of a concept; I don’t know whether it is a new backend, a new repo, or a new piece of functionality.
Can anyone help me out? It would be great if someone could provide some concrete code showing what AOTInductor is.
Any other help with understanding ExecuTorch would also be great!