autograd.py

#%%
import torch
print(torch.__version__)
#%%
tensor = torch.Tensor([[3, 4], [7, 5]])
tensor
#%%
tensor.requires_grad # False by default, computations on this tensor won't be tracked
#%%
tensor.requires_grad_() # enable tracking in-place, so computations on this tensor are recorded to calculate gradients on the backward pass
tensor.requires_grad
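#%%
# (added note, not part of the original walkthrough) tracking can also be requested
# at creation time instead of calling requires_grad_() afterwards
another = torch.tensor([[3.0, 4.0], [7.0, 5.0]], requires_grad=True)
another.requires_grad # True from the start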
#%%
print(tensor.grad) # accumulates the gradients computed w.r.t. this tensor during the backward pass; still None because no backward pass has run
#%%
print(tensor.grad_fn) # None; grad_fn is only set on tensors produced by a tracked computation (see below)
#%%
out = tensor * tensor
#%%
out.requires_grad # derived from the original tensor
#%%
print(out.grad) # still no gradients
#%%
print(out.grad_fn) # exists because out holds the result of a computation (the multiplication) involving a tensor that requires gradients, so it has a gradient function associated with it
#%%
print(tensor.grad_fn) # no grad_fn because this tensor is NOT the result of any computation
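#%%
# (added illustration, not part of the original file) the grad_fn node is named after the
# operation that produced the tensor, which is how autograd knows how to backpropagate through it
print(type(out.grad_fn).__name__) # on recent PyTorch versions this is a multiplication backward node, e.g. "MulBackward0"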
#%%
out = (tensor * tensor).mean()
print(out.grad_fn)
#%%
print(tensor.grad) # still no gradients associated with the original tensor
#%%
out.backward() # compute the gradients of "out" w.r.t. the tensors that require them (here, tensor)
#%%
print(tensor.grad) # now it exists!
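#%%
# (added check, assuming the values above) for out = (tensor * tensor).mean(), each entry
# contributes tensor_ij**2 / 4, so d(out)/d(tensor_ij) = 2 * tensor_ij / 4 = tensor_ij / 2;
# the gradient should therefore equal tensor / 2, i.e. [[1.5, 2.0], [3.5, 2.5]]
print(tensor / 2)
print(torch.allclose(tensor.grad, tensor / 2))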
#%%
new_tensor = tensor * tensor
print(new_tensor.requires_grad) # if the tensors in the computation have requires_grad=True the computed output will as well
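#%%
# (added sketch, not part of the original file) detach() is another way to get a
# non-tracking tensor: it shares the same data but is cut off from the autograd graph
detached = new_tensor.detach()
print(detached.requires_grad) # False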
#%%
with torch.no_grad():
    # stop autograd from tracking history on newly created tensors
    # we require gradient calculation in the training phase
    # we turn off calculating gradients when predicting
    new_tensor = tensor * tensor
    print('new_tensor = ', new_tensor)
    print('requires_grad for tensor', tensor.requires_grad) # True; gradient calculation was requested for this tensor
    print('requires_grad for new_tensor', new_tensor.requires_grad) # False; does not require gradient calculation because it was created inside the torch.no_grad() block
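#%%
# (added sketch, not part of the original file) gradients accumulate in .grad across
# backward passes: a new backward adds its result on top of whatever is already there,
# and tensor.grad.zero_() resets the buffer in-place (as optimizers do between steps)
out = (tensor * tensor).mean()
out.backward()
print(tensor.grad) # larger than before, because the new gradient was added to the earlier one
tensor.grad.zero_() # clear the accumulated gradients
print(tensor.grad)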