Hi,
I'm in the process of writing a backend for PyTorch, and I'm testing it out with the following code:
import torch
tensor = torch.rand(3, 4, device="mydevice", dtype=torch.float)
print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")
print(tensor)
I have done the scaffolding for the device and implemented the copy_from and allocate_empty ops for allocating tensors and copying them to/from the CPU. They all work as expected. But when I register the abs_out op (which print(tensor) calls), I get an undefined tensor in the 'out' argument, e.g.:
#include <iostream>
#include <ATen/ATen.h>

at::Tensor& abs_out(const at::Tensor& self, at::Tensor& out)
{
    std::cout << "abs called" << std::endl
              << "out tensor:: " << out << std::endl;
    return out;
}
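For reference, here is a minimal sketch of how the registration looks, assuming the device is exposed through the PrivateUse1 dispatch key (the usual route for an out-of-tree backend) and that abs_out is the function shown above:

#include <torch/library.h>

TORCH_LIBRARY_IMPL(aten, PrivateUse1, m) {
    // "abs.out" is the schema name of the out-variant that print(tensor) ends up dispatching to
    m.impl("abs.out", abs_out);
    // the allocation/copy ops mentioned above are registered the same way,
    // e.g. m.impl("empty.memory_format", ...) and m.impl("_copy_from", ...)
}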
When I run the Python code above, I see the following output:
abs called
out tensor:: [ Tensor (undefined) ]
Does anyone know what I'm missing here? Is there any documentation of the operator arguments apart from the schema in RegistrationDeclarations.h?
Thanks for your help!