torchchat - Running LLMs locally
With torchchat, you can run LLMs using Python, within your own C/C++ application, and on mobile devices.
You will find the project organized into three areas:
- Python: Torchchat provides a REST API that you can call from the Python CLI or access in the browser (see the sketch after this list)
- C++: Torchchat produces a desktop-friendly binary using PyTorch’s AOTInductor backend
- Mobile devices: Torchchat uses ExecuTorch to export a .pte binary file for on-device inference
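As a quick illustration of the Python/REST workflow, here is a minimal sketch that queries a locally running torchchat server using only the Python standard library. It assumes you have already started a server in a separate terminal (for example with `python3 torchchat.py server llama3.1`) and that it exposes an OpenAI-style `/v1/chat/completions` endpoint on port 5000; the model alias, port, and response shape are assumptions, so check the torchchat documentation for your version.

```python
# Minimal sketch: query a locally running torchchat REST server.
# Assumes a server was started separately (e.g. `python3 torchchat.py server llama3.1`)
# and serves an OpenAI-style chat completions endpoint on localhost:5000 --
# verify the port and path against your torchchat version.
import json
import urllib.request

payload = {
    "model": "llama3.1",  # model alias used when starting the server (assumption)
    "messages": [
        {"role": "user", "content": "Write a haiku about local inference."},
    ],
}

request = urllib.request.Request(
    "http://127.0.0.1:5000/v1/chat/completions",  # assumed default host/port
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    reply = json.load(response)

# Assumes the response follows the OpenAI chat completions schema.
print(reply["choices"][0]["message"]["content"])
```

Because the endpoint mimics the OpenAI chat completions API, existing OpenAI client code can typically be pointed at the local server by swapping the base URL.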
Please use the [pytorch/torchchat issue tracker](https://github.com/pytorch/torchchat/issues) to report bugs and other issues with torchchat.