How to check if I'm using expandable_segments?

Right now I’m using this tiny script to check:

import torch

def allocate_and_check(size_mb):
    """Allocate a float32 tensor of the given size in MiB; return it and its data pointer."""
    num_elements = (size_mb * 1024 * 1024) // 4  # 4 bytes per float32
    tensor = torch.empty(num_elements, dtype=torch.float32, device='cuda')
    return tensor, tensor.data_ptr()

# Allocate 40 MiB
tensor_40, ptr_40 = allocate_and_check(40)

# Release the tensor
del tensor_40

# Allocate 60 MiB
tensor_60, ptr_60 = allocate_and_check(60)

# Check if data pointers are the same
print(f"40 MiB data pointer: {ptr_40}")
print(f"60 MiB data pointer: {ptr_60}")
print(f"Same memory region reused? {ptr_40 == ptr_60}")

Running PYTORCH_CUDA_ALLOC_CONF="expandable_segments:True" python test.py gives:

40 MiB data pointer: 42983227392
60 MiB data pointer: 42983227392
Same memory region reused? True
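I haven't found a dedicated query API for this, so as a workaround you can at least confirm what was requested by parsing the environment variable yourself. A minimal sketch (the helper name is mine; note this only tells you what was asked for, not what the allocator actually honored):

```python
import os

def expandable_segments_in_conf(conf):
    """Parse a PYTORCH_CUDA_ALLOC_CONF-style string for the expandable_segments option."""
    for option in conf.split(","):
        key, _, value = option.partition(":")
        if key.strip() == "expandable_segments":
            return value.strip() == "True"
    return False  # option absent: allocator default (off)

# Check the live environment (empty string if the variable is unset).
enabled = expandable_segments_in_conf(os.environ.get("PYTORCH_CUDA_ALLOC_CONF", ""))
print(f"expandable_segments requested via env: {enabled}")
```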

Is there any built-in function in PyTorch to check this?

In addition, how can we get the current configuration of expandable_segments?

Since it uses the cuMem* driver API, I would assume there is a maximum size for expandable_segments, i.e. the virtual address range reserved at the beginning; a segment is only expandable up to that size.
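On the inspection side, one more avenue worth checking: in recent PyTorch builds (2.1+, I believe) the segment dicts returned by torch.cuda.memory_snapshot() carry an is_expandable flag, which would answer "am I actually using it" per segment rather than just reflecting the env var. A hedged sketch (any_expandable is my own helper, and the is_expandable key is an assumption you should verify on your build):

```python
def any_expandable(segments):
    """Given torch.cuda.memory_snapshot() output, report whether any segment is expandable."""
    # 'is_expandable' may be absent on older PyTorch versions; treat missing as False.
    return any(seg.get("is_expandable", False) for seg in segments)

if __name__ == "__main__":
    try:
        import torch
        if torch.cuda.is_available():
            torch.empty(1 << 20, device="cuda")  # force at least one segment to exist
            print("expandable segment present:", any_expandable(torch.cuda.memory_snapshot()))
        else:
            print("CUDA not available; cannot snapshot allocator segments")
    except ImportError:
        print("torch not installed")
```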