No CPU backend in triton

I’m reproducing a test failure from my pull request, https://hud.pytorch.org/pr/pytorch/pytorch/139141#33075326227, a Triton failure in the CPU backend.
But my Triton doesn’t have a CPU backend, so I can’t test it locally.
Here are the backends my Triton installation supports:
{'nvidia': Backend(compiler=<class 'nvi.CUDABackend'>, driver=<class 'nvi.CudaDriver'>), 'amd': Backend(compiler=<class '.HIPBackend'>, driver=<class '.HIPDriver'>)}
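For reference, here is a quick way to check which backends a Triton installation ships with. This is a sketch assuming Triton 3.x, where discovered backends are collected into the `triton.backends.backends` dict (the same dict shown above):

```python
# List the backends discovered by this Triton installation.
# Assumes Triton 3.x, where in-tree and out-of-tree backends are
# collected into the `triton.backends.backends` dict.
try:
    from triton.backends import backends
    available = sorted(backends.keys())
except ImportError:  # Triton missing, or too old to have triton.backends
    available = []

print("available backends:", available)
print("cpu backend present:", "cpu" in available)
```

If `'cpu'` is absent from the list, Inductor has no Triton CPU backend to target.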

How can I reproduce this issue?

I found an experimental Triton version with a CPU backend (GitHub - triton-lang/triton-cpu: An experimental CPU backend for Triton). I guess this is the right version, so I’ll try it first.

The Triton CPU backend is currently out of tree (not included with the default Triton) and should be installed from GitHub - triton-lang/triton-cpu: An experimental CPU backend for Triton.

Is there any introduction to using triton-cpu as the backend for Inductor?
There are usually docs or other materials for Triton on GPU in Dynamo, but triton-cpu may be a new feature with only PRs in the PyTorch repo.
I’d like to explore how the developers integrated triton-cpu into Inductor.
@jansel @zero000064

cc @int3 who might be able to fill in more details.

Once you have installed the CPU version of Triton (linked above), you can get Inductor to use it with:

torch._inductor.config.cpu_backend = "triton"

or torch.compile(options={"cpu_backend": "triton"}). Make sure you are using the latest nightly build.

You can verify it is working with TORCH_LOGS=output_code.
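For example (the script name here is a placeholder for your own test case):

```shell
# Dump the code Inductor generates; with the Triton CPU backend active you
# should see Triton kernels in the logged output rather than the usual C++.
TORCH_LOGS=output_code python my_model.py
```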

Note the backend is still experimental and may have some bugs.
