What's the difference between torch.export / torchserve / executorch / aotinductor?

Thanks for the clarification! As I understand it now, there will be three ways to bring PyTorch models to C++:

  1. Export a `.so` file with AOTInductor and load it via the runner (if I understood your last post correctly, the same `.so` file can be loaded onto different devices like CPU, CUDA, or MPS, is that right?). See the first sketch after this list.
  2. Save a TorchScript `.pt` file and load it with LibTorch (TorchScript is in maintenance mode). See the second sketch after this list.
  3. Capture the graph with TorchDynamo, export it to TorchScript, and load the TorchScript file from C++ as usual (see this comment: What is the recommend serialization format when considering the upcoming pt2?). The C++ loading side should then match the second sketch below.
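
For option 1, here is a minimal sketch of what I imagine the C++ loading side looks like, based on the AOTInductor runner API (`torch::inductor::AOTIModelContainerRunnerCpu`). The `model.so` path and the input shape are placeholders, and I assume the `.so` was produced beforehand in Python, e.g. with `torch._export.aot_compile`:

```cpp
#include <iostream>
#include <vector>

#include <torch/torch.h>
// Runner for AOTInductor-compiled models (CPU variant; a CUDA runner also exists)
#include <torch/csrc/inductor/aoti_runner/model_container_runner_cpu.h>

int main() {
    c10::InferenceMode mode;

    // Load the shared library produced by AOTInductor (path is a placeholder)
    torch::inductor::AOTIModelContainerRunnerCpu runner("model.so");

    // Input shape {8, 10} is just an example; it must match the exported model
    std::vector<torch::Tensor> inputs = {torch::randn({8, 10}, at::kCPU)};
    std::vector<torch::Tensor> outputs = runner.run(inputs);

    std::cout << outputs[0] << std::endl;
    return 0;
}
```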
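
For options 2 and 3 the C++ side should be identical, since both end in a serialized TorchScript file loaded with `torch::jit::load` from LibTorch. The `.pt` filename and input shape are placeholders; I assume the file was saved in Python, e.g. with `torch.jit.script(model).save("model.pt")` or via the Dynamo route:

```cpp
#include <iostream>
#include <vector>

#include <torch/script.h>

int main() {
    // Load the serialized TorchScript module (filename is a placeholder)
    torch::jit::script::Module module;
    try {
        module = torch::jit::load("model.pt");
    } catch (const c10::Error& e) {
        std::cerr << "error loading the model\n";
        return -1;
    }

    // Example input; shape {1, 3, 224, 224} is just a placeholder
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::ones({1, 3, 224, 224}));

    at::Tensor output = module.forward(inputs).toTensor();
    std::cout << output << std::endl;
    return 0;
}
```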

Will this work on Windows and macOS as well?