What's the difference between `next_variable()` and `reconstruct()` in `IteratorVariable`

Hi team! I’m new to Dynamo and have a question.

I noticed that `next_variable()` and `reconstruct()` are not both called in every case,
so what's the difference between them, and when is each one called?

Case 1 will call reconstruct():

import torch

def fn1():
    @torch.compile(backend="eager", fullgraph=True)
    def fn(x):
        x = x + 1
        return zip(range(3), [0, 1, 2])

    inputs = torch.ones(1)
    res = fn(inputs)
    print(list(res))  # [(0, 0), (1, 1), (2, 2)]

fn1()

Case 2 will call next_variable():

import torch

def g(x):
    return x.requires_grad

@torch.compile(backend="eager", fullgraph=True)
def fn(inputs):
    out = inputs[0]
    for inp in filter(g, inputs):
        out = out * inp
    return out

input1 = torch.arange(2, dtype=torch.bfloat16)
input2 = torch.arange(2, dtype=torch.bfloat16).requires_grad_(True)
inputs = [input1, input2]

res = fn(inputs)
print(res) # tensor([0., 1.], dtype=torch.bfloat16, grad_fn=<MulBackward0>)

`next_variable()` symbolically evaluates a call to `next()` on the iterator at trace time.

`reconstruct()` generates output bytecode that pushes a copy of the variable (in this case, an iterator) onto the stack at runtime.
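For Case 1, that reconstruction amounts to something like the following. This is a hand-written sketch of the effective behavior, not Dynamo's actual output bytecode, and `compiled_fn` is a hypothetical name:

```python
# Rough sketch of what Case 1 compiles to: reconstruct() emits bytecode
# that rebuilds the zip iterator at runtime, because the iterator
# escapes the compiled function as its return value.
def compiled_fn(x):
    x = x + 1  # the traced tensor op stays in the compiled graph
    return zip(range(3), [0, 1, 2])  # iterator reconstructed at runtime

# The caller receives a genuine iterator it can consume:
print(list(compiled_fn(0.0)))  # [(0, 0), (1, 1), (2, 2)]
```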

The first case calls `reconstruct()` because you return the iterator from the compiled function, so Dynamo is forced to create a real iterator at runtime. In the second case, the iterator is not visible outside the function, so Dynamo can statically remove it from your program: the loop is evaluated at trace time via `next_variable()`.
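Conceptually, Case 2 therefore traces as if the `filter` call had been unrolled at compile time. This is a simplified sketch (not Dynamo's actual output), and `traced_fn` is a hypothetical name:

```python
# Sketch of what Case 2 effectively traces to: the filter(g, inputs)
# iterator is consumed during tracing via next_variable(), so no
# iterator appears in the compiled function. With the inputs in the
# example, g(input1) is False and g(input2) is True, so only the
# multiply by inputs[1] survives.
def traced_fn(inputs):
    out = inputs[0]
    out = out * inputs[1]  # the single element that passed g
    return out
```

Because the filtering happened at trace time, a change in which elements pass `g` (e.g. a different `requires_grad` pattern) would trigger a recompile rather than being handled by the compiled code.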


Wow thanks so much! Very helpful info!