Hi everyone,
I’ve been looking into the current state of Named Tensors. Despite the clear UX benefits for model readability and error prevention, the feature has remained in an experimental state with a growing list of open issues.
I am interested in contributing to this module to help move it toward a more stable release. Specifically, I see a gap in how named tensors integrate with high-level ops like einsum or modern functional transformations.
**The Vision:** Named tensors should allow for a more “self-documenting” interface. For example, instead of tracking dimensions by index, we could leverage names directly in contractions:
```python
# Proposed interaction style
As = torch.randn(3, 2, 5, names=('Batch', 'Width', 'Height'))
Bs = torch.randn(3, 5, 4, names=('Batch', 'Height', 'Depth'))

# Instead of: torch.einsum('bwh, bhd -> bwd', As, Bs)
# we leverage the existing names for safety:
output = torch.einsum_named(As, Bs, out_names=('Batch', 'Width', 'Depth'))
```
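To show this is feasible, here is a rough sketch of how the proposed behavior can be approximated today on top of the existing (experimental) named-tensor API. The helper name `einsum_by_names` is mine, not an existing PyTorch function; it simply derives the einsum equation string from each operand's `.names`:

```python
import torch

def einsum_by_names(a, b, out_names):
    # Hypothetical helper (not a real torch API): build the einsum
    # equation from the operands' dimension names instead of
    # hand-written subscript letters.
    letters = {}

    def subs(names):
        # Assign each distinct dimension name a stable subscript letter.
        return ''.join(letters.setdefault(n, chr(ord('a') + len(letters)))
                       for n in names)

    lhs = f"{subs(a.names)},{subs(b.names)}"
    rhs = ''.join(letters[n] for n in out_names)
    # torch.einsum does not accept named tensors yet, so strip the
    # names before the call and re-attach them to the result.
    out = torch.einsum(f"{lhs}->{rhs}", a.rename(None), b.rename(None))
    return out.refine_names(*out_names)

As = torch.randn(3, 2, 5, names=('Batch', 'Width', 'Height'))
Bs = torch.randn(3, 5, 4, names=('Batch', 'Height', 'Depth'))
output = einsum_by_names(As, Bs, ('Batch', 'Width', 'Depth'))
print(output.shape, output.names)
```

A first-class `einsum_named` could of course do this natively in C++ and validate name mismatches up front, but the sketch illustrates that the equation is fully determined by the names.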
My Questions for the Maintainers:

- **Priority:** Is there still an appetite for first-class Named Tensor support, or is the consensus to let community libraries (like Einops) handle dimension naming?
- **Roadblock:** Are there specific architectural blockers (e.g., interaction with `torch.compile` or Autograd) that led to the current stagnation?
- **Onboarding:** I am prepared to start by triaging and fixing the “module: named tensor” labeled issues. Would the team prefer a series of small bug-fix PRs, or a broader RFC on the future of the API?
I’d love to hear from anyone who has worked on the core implementation to understand where I can be most helpful.