Why does Dynamo fail to capture the computation graph in this function?

I’m trying to understand which kinds of graphs Dynamo can capture.

from typing import List
import torch
from torch import _dynamo as torchdynamo
def my_compiler(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):
    print("compiler called")
    return gm.forward

def toy_example(x):
    try:
        a = 3
    except RuntimeError as e:
        a = 2
        a = 5
        a *= 2
    return (x + a) * a - a

opt_toy_example = torchdynamo.optimize(my_compiler)(toy_example)

from torch._dynamo.eval_frame import _debug_get_cache_entry_list
caches = _debug_get_cache_entry_list(opt_toy_example._torchdynamo_orig_callable.__code__)
cache = caches[0]
guard, code = cache.check_fn, cache.code
print(code.co_code == toy_example.__code__.co_code)  # True

After running this code, I get one cache entry, but the code object in that cache entry is identical to the uncompiled function's. However, there is a subgraph, (x + a) * a - a, that could have been captured. Why does Dynamo fail to capture the graph?

A graph break inside a try/except block triggers a fallback to eager execution, because generating resume functions there is more complex: a resume function would have to re-enter the middle of an active exception-handling block. This would be possible to support, but it is not currently implemented.
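One way to see why resuming inside a try/except is harder is to compare the compiled bytecode of a function with and without a handler. The handler adds exception-handling state (an exception table on CPython 3.11+, SETUP_FINALLY/POP_BLOCK style instructions on earlier versions) that a generated resume function would have to reconstruct. A minimal stdlib-only sketch (function names are illustrative, not Dynamo internals):

```python
def plain(x):
    # Same arithmetic as toy_example, but no exception handler.
    a = 3
    return (x + a) * a - a

def with_handler(x):
    # Same arithmetic wrapped in a try/except block.
    a = 3
    try:
        a *= 2
    except RuntimeError:
        a = 2
    return (x + a) * a - a

# The try/except version carries extra bytecode for exception handling.
# A resume function created at a graph break inside the handler would
# need to recreate that state, so Dynamo falls back to eager instead.
print(len(plain.__code__.co_code), len(with_handler.__code__.co_code))
```

Running `dis.dis` on both functions shows the extra exception-handling instructions explicitly.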