autograd
See
- FAQ on pack and unpack hooks for saved tensors: https://pytorch.org/docs/stable/notes/autograd.html#hooks-for-saved-tensors
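As a quick illustration of those hooks (a minimal sketch; the pack and unpack hooks here just log and return the tensor unchanged):

import torch

def pack_hook(t):
    print("packing", t)    # runs in the forward pass, when autograd saves a tensor
    return t

def unpack_hook(t):
    print("unpacking", t)  # runs in the backward pass, when the saved tensor is retrieved
    return t

x = torch.tensor([1.2, 3], requires_grad=True)
with torch.autograd.graph.saved_tensors_hooks(pack_hook, unpack_hook):
    y = x.sin()            # pack_hook fires here: SinBackward0 saves x
y.sum().backward()         # unpack_hook fires here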
A grad_fn exposes the tensors it saved for backward as attributes such as _saved_self, _saved_other, _saved_result, and _saved_exponent.
x*2, multiply with a scalar
x = torch.tensor([1.2, 3], requires_grad=True)
y = x*2
print(y.grad_fn.__class__) # <class 'MulBackward0'>
print(y.grad_fn._saved_self) # None: self is only needed for the grad of other, and 2 does not require grad
print(y.grad_fn._saved_other) # tensor(2)
x * a, multiply with a tensor that requires grad
x = torch.tensor([1.2, 3], requires_grad=True)
a = torch.tensor([10.2], requires_grad=True)
y = x*a
print(y.grad_fn.__class__) # <class 'MulBackward0'>
print(y.grad_fn._saved_self) # tensor([1.2, 3], requires_grad=True)
print(y.grad_fn._saved_other) # tensor([10.2], requires_grad=True)
print(y.grad_fn._saved_self is x) # True
print(y.grad_fn._saved_other is a) # True
x * a, multiply with a tensor that does not require grad
x = torch.tensor([1.2, 3], requires_grad=True)
a = torch.tensor([10.2])
y = x*a
print(y.grad_fn.__class__) # <class 'MulBackward0'>
print(y.grad_fn._saved_self) # None: a does not require grad, so self is not saved
print(y.grad_fn._saved_other) # tensor([10.2])
print(y.grad_fn._saved_other is a) # True
x.sin()
x = torch.tensor([1.2, 3], requires_grad=True)
y = x.sin()
print(y.grad_fn.__class__) # <class 'SinBackward0'>
print(y.grad_fn._saved_self is x) # True
x.relu()
x = torch.tensor([1.2, 3], requires_grad=True)
y = x.relu()
print(y.grad_fn.__class__) # <class 'ReluBackward0'>
print(y.grad_fn._saved_result.equal(y)) # True
print(y.grad_fn._saved_result is y) # False: y is an output of its own grad_fn, so it is unpacked as a different tensor object to avoid a reference cycle
x.log()
x = torch.tensor([1.2, 3], requires_grad=True)
y = x.log()
print(y.grad_fn.__class__) # <class 'LogBackward0'>
print(y.grad_fn._saved_self is x) # True
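The one attribute from the list above that the examples do not cover is _saved_exponent; it shows up on nodes like PowBackward0. A minimal sketch (the exact formatting of the printed exponent may vary across PyTorch versions):

x = torch.tensor([1.2, 3], requires_grad=True)
y = x.pow(2)
print(y.grad_fn.__class__)          # <class 'PowBackward0'>
print(y.grad_fn._saved_self is x)   # True
print(y.grad_fn._saved_exponent)    # 2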
Example 1: where is MulBackward0 defined?
tools/autograd/derivatives.yaml
See https://github.com/pytorch/pytorch/blob/main/tools/autograd/derivatives.yaml#L1197
Building PyTorch generates several files from this YAML, including
torch/csrc/autograd/generated/Functions.h
torch/csrc/autograd/generated/Functions.cpp
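To connect this back to Python, the object that appears as grad_fn at runtime is an instance of the generated MulBackward0 node (a minimal sketch):

import torch

x = torch.tensor([1.2, 3], requires_grad=True)
y = x * 2
# The node behind grad_fn is the generated C++ MulBackward0 from Functions.h/Functions.cpp
print(type(y.grad_fn).__name__)  # 'MulBackward0'
print(y.grad_fn.name())          # 'MulBackward0'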