Can we add a default backend when OpenMP is not available?

Hi team, when I try to test torch.compile using the following code on my MacBook Pro:

import torch
from torch import nn


class BackboneModel(nn.Module):

    def __init__(self, *args, **kwargs) -> None:
        super().__init__(*args, **kwargs)
        self.conv1 = nn.Conv2d(6, 6, 6)
        self.bn1 = nn.BatchNorm2d(6)
        self.conv2 = nn.Conv2d(6, 6, 6)
        self.bn2 = nn.BatchNorm2d(6)
        self.conv3 = nn.Conv2d(6, 6, 6)
        self.bn3 = nn.BatchNorm2d(6)

    def forward(self, x):
        # this conv-bn pair can use efficient_conv_bn_eval feature
        x = self.bn1(self.conv1(x))
        # this conv-bn pair cannot use efficient_conv_bn_eval feature
        # because `self.conv2` is used twice
        x = self.bn2(self.conv2(self.conv2(x)))
        # this conv-bn pair can use efficient_conv_bn_eval feature
        # but only for the first use of `self.bn3`
        x = self.bn3(self.bn3(self.conv3(x)))
        return x

model = BackboneModel()
model.eval()
input = torch.randn(64, 6, 32, 32)

opt_model = torch.compile(model)
output = opt_model(input)

I got tons of error output, seemingly complaining that omp.h cannot be found.

Can we add a default fallback backend that just returns the forward function of the fx.GraphModule when the chosen backend does not work?
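
For concreteness, a minimal sketch of what I have in mind (the name fallback_backend and the try/except wrapping are just illustration, not an existing API; I'm assuming torch._inductor.compile_fx.compile_fx is the right inductor entry point):

import torch
from torch import fx

def fallback_backend(gm: fx.GraphModule, example_inputs):
    # Try inductor first; if compilation fails (e.g. because omp.h is missing),
    # return the FX graph's own forward so the model just runs eagerly.
    try:
        from torch._inductor.compile_fx import compile_fx
        return compile_fx(gm, example_inputs)
    except Exception:
        return gm.forward

opt_model = torch.compile(model, backend=fallback_backend)

(As a workaround today, torch.compile(model, backend="eager") already skips inductor entirely.)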

It seems my compiler, clang++, does not support OpenMP natively. I had to brew install libomp and then use g++ -Xclang -fopenmp -lomp -L/opt/homebrew/opt/libomp/lib -I/opt/homebrew/opt/libomp/include a.c -o a.out to compile a hello-world example, following c - How to install OpenMP on Mac M1? - Stack Overflow.

How can I pass this information to the inductor backend?

cc @ezyang

This might be fixed by [inductor] Suport OMP on MacOS by nkflash · Pull Request #105136 · pytorch/pytorch · GitHub

Also, if the above PR isn’t what you want: if you set the number of threads to 1, the generated code shouldn’t use OpenMP.
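
Something along these lines (a sketch; I'm assuming the relevant knob is torch._inductor.config.cpp.threads, so double-check the config in your version):

import torch
import torch._inductor.config as inductor_config

# Generate single-threaded C++ kernels so no OpenMP pragmas are emitted.
inductor_config.cpp.threads = 1

opt_model = torch.compile(model)
output = opt_model(input)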

Though in the single-threaded case Inductor might still include the OpenMP headers and link flags; they can safely be removed in that case since the generated code won't need them. A PR that puts an if threads > 1 check around those bits would be welcome if the single-threaded-without-OpenMP use case is interesting to you.
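
Roughly this shape of change (a sketch only; build_cpp_flags and the exact flag list are hypothetical, not the actual inductor code):

def build_cpp_flags(threads: int) -> list[str]:
    # Hypothetical guard: only pull in OpenMP compile/link options when the
    # generated kernels are actually multi-threaded.
    flags = ["-O3"]
    if threads > 1:
        flags += [
            "-fopenmp",
            "-lomp",
            "-I/opt/homebrew/opt/libomp/include",
            "-L/opt/homebrew/opt/libomp/lib",
        ]
    return flags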

@jansel Oh, yes, that PR can fix the issue.

I needed to cherry-pick that PR and export OMP_PREFIX=/opt/homebrew/opt/libomp to make it work.
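
For reference, the same workaround can also be done from Python before compiling (this assumes the cherry-picked PR reads OMP_PREFIX when it builds the generated kernels):

import os

# Point the patched inductor at the Homebrew libomp installation
# (assumption: the cherry-picked PR consults OMP_PREFIX at kernel-build time).
os.environ["OMP_PREFIX"] = "/opt/homebrew/opt/libomp"

opt_model = torch.compile(model)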

Is OMP_PREFIX a standard environment variable? It was not set after brew install libomp. It would be better to document somewhere what macOS users should do: whether to conda install llvm-openmp or to set OMP_PREFIX after brew install libomp.

Good suggestion, I added it to the issue: Dynamo + MacOS: fatal error: 'omp.h' file not found · Issue #95708 · pytorch/pytorch · GitHub