What's the difference between torch.export / TorchServe / ExecuTorch / AOTInductor?

I've been watching talks from PyTorch Conference 2023 and came across these concepts. They all seem related to deploying models.

To the best of my understanding:

The most confusing one to me is AOTInductor. It seems to be more of a concept; I can't tell whether it is a new backend, a new repo, or a new piece of functionality.

Can anyone help me out? It would be great to see some concrete code showing what AOTInductor is.

Any other help understanding torch.export/TorchServe/ExecuTorch would also be great!

This might be relevant to @desertfire.

Hi Youkai, your understanding is pretty much correct; just a few minor corrections:

  1. torch.export is not a function; it is a Python module containing torch.export.export (I know …) and the other utilities that export needs. See the sketch after this list.
  2. ExecuTorch always needs a model from torch.export, not "might use" one.
  3. I am a bit behind on AOTInductor updates. It is currently a prototype, and it is new functionality that runs TorchInductor in an ahead-of-time fashion. I don't think it will be a new repo; @desertfire will have the definitive answer here.
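
For reference, here is a minimal sketch of what calling the export entry point looks like. The toy module is my own example, and the exact options accepted by torch.export.export have shifted between releases, so treat this as illustrative:

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return torch.sin(x) + 1.0

# torch.export is the module; torch.export.export is the actual entry point.
# It traces the module into an ExportedProgram (a standardized graph IR)
# that downstream consumers like ExecuTorch and AOTInductor can take in.
ep = torch.export.export(M(), (torch.randn(4),))
print(ep)  # shows the captured graph and its input/output signature
```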

Glad to see you are interested in AOTInductor. It is at the prototype stage and is being actively developed. The "AOTInductor: Ahead-Of-Time Compilation for Torch.Export-ed Models" page in the PyTorch main documentation contains an example of how to use it. Please give it a try and let me know if you have any questions.
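
For anyone landing here later, a minimal sketch of the flow that doc page describes. The model and shapes are my own example, and torch._export.aot_compile is a prototype API whose signature may change, so this is a sketch rather than the definitive usage:

```python
import torch

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(10, 16)

    def forward(self, x):
        return torch.nn.functional.relu(self.fc(x))

model = Model().eval()
example_inputs = (torch.randn(8, 10),)

with torch.no_grad():
    # Runs torch.export under the hood, then compiles the captured graph
    # with TorchInductor ahead of time into a standalone shared library.
    so_path = torch._export.aot_compile(model, example_inputs)

print(so_path)  # path to the generated .so, runnable without the Python runtime
```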


Thanks, that's clear! So AOTInductor's entry point is torch._export.aot_compile, and it is part of the export system, exporting the model as a standalone .so file.
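
In case it helps others: the .so is primarily meant to be driven from C++ (via the AOTInductor model container runner), but more recent releases can also load it back into Python. A hedged sketch of the Python side, assuming torch._export.aot_load exists in your build:

```python
import torch

# Placeholder path; in practice this is the string returned by
# torch._export.aot_compile in the example above.
so_path = "model.so"

# aot_load was added after the initial prototype; on older builds the .so
# must be consumed from C++ instead.
compiled = torch._export.aot_load(so_path, device="cpu")
out = compiled(torch.randn(8, 10))
```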

Thanks for the great talk at PyTorch Conf!

Quick question on the interplay between TorchScript and AOTInductor: can you explain what the two tests here are testing? I.e., what is the difference between an AOT-compiled module that is scripted vs. one that is not, and what are the benefits / use cases of each?