
Fix inconsistencies in polynomial autograd tutorial #3899

Open

gtsitsik wants to merge 1 commit into pytorch:main from gtsitsik:polynomial_autograd_fixes

Conversation


gtsitsik commented May 13, 2026

Changes

  • Use sine instead of exponential to match the tutorial text.
  • Update the learning rate and iteration count to match `beginner_source/examples_tensor/polynomial_tensor.py`, which is the same tutorial but with manual gradient computations (see the sketch after this list).
  • Correct the description of the x.grad attribute.
  • Remove redundant lines of code.
  • Fix typo.
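
For reference, here is a minimal sketch of what the fixed training loop could look like. The hyperparameters (learning rate 1e-6, 2000 iterations) are assumed to match `polynomial_tensor.py`; this is an illustration of the intended behavior, not the exact diff.

```python
import math
import torch

# Fit y = sin(x) with a third-order polynomial using autograd.
dtype = torch.float
x = torch.linspace(-math.pi, math.pi, 2000, dtype=dtype)
y = torch.sin(x)

# requires_grad=True tells autograd to track operations on these tensors.
a = torch.randn((), dtype=dtype, requires_grad=True)
b = torch.randn((), dtype=dtype, requires_grad=True)
c = torch.randn((), dtype=dtype, requires_grad=True)
d = torch.randn((), dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(2000):
    y_pred = a + b * x + c * x ** 2 + d * x ** 3
    loss = (y_pred - y).pow(2).sum()

    # backward() fills a.grad, ..., d.grad with the gradient of the loss
    # with respect to each parameter (the corrected description of .grad).
    loss.backward()

    # Update parameters and reset gradients outside of autograd tracking.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
```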

Note

PR #3663 and PR #3897 recommended updating the tutorial text to show the exponential function instead of updating the code to use a sine. However, I recommend using a sine everywhere, since both the NumPy and the non-autograd PyTorch polynomial tutorials use one.

Fixes #3632

Checklist

  • The issue that is being fixed is referenced in the description (see above "Fixes #ISSUE_NUMBER")
  • Only one issue is addressed in this pull request
  • Labels from the issue that this PR is fixing are added to this pull request
  • No unnecessary issues are included in this pull request.

- Use sine instead of exponential to match the tutorial text.
- Update the learning rate and iteration count to match
  `beginner_source/examples_tensor/polynomial_tensor.py`.
- Correct the description of the `x.grad` attribute.
- Remove redundant lines of code.

pytorch-bot commented May 13, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/3899

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@telamonian

Dueling fixes (full disclosure: I'm the author of #3897)!

@gtsitsik I don't know if you saw #3597, the PR that originally (and fairly recently) changed the tutorial from using sin(x) to e^x, but they gave a pretty good reason for doing so. Basically, the trained Taylor expansion given in the tutorial converges nicely to e^x, but does not converge so nicely to sin(x). You can see the problem in the following plots:

[Plots: trained third-order polynomial fit to e^x vs. sin(x)]

The problem is fundamentally a mathematical one: a third-order polynomial is never a good fit for sin(x), while, at least over the domain [-1, 1], a third-order polynomial is a good fit for e^x.
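
To make the fit-quality argument concrete, here is a quick check (mine, not from either PR) comparing the best least-squares cubic for each target over the relevant domains; I'm assuming [-π, π] for sine, since that's what the sine tutorials use.

```python
import numpy as np

# Best least-squares cubic for e^x on [-1, 1] vs. sin(x) on [-pi, pi].
x_exp = np.linspace(-1, 1, 2000)
x_sin = np.linspace(-np.pi, np.pi, 2000)

coef_exp = np.polyfit(x_exp, np.exp(x_exp), deg=3)
coef_sin = np.polyfit(x_sin, np.sin(x_sin), deg=3)

# Maximum absolute error of each cubic over its domain.
err_exp = np.abs(np.polyval(coef_exp, x_exp) - np.exp(x_exp)).max()
err_sin = np.abs(np.polyval(coef_sin, x_sin) - np.sin(x_sin)).max()

print(f"max |cubic - exp| on [-1, 1]:   {err_exp:.4f}")
print(f"max |cubic - sin| on [-pi, pi]: {err_sin:.4f}")
```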

Therefore I propose that PyTorch keep the tutorial using e^x while also keeping @gtsitsik's fixes of typos and removal of unused code. I've updated #3897 with these fixes.

@gtsitsik
Copy link
Copy Markdown
Author

gtsitsik commented May 16, 2026

@telamonian by the way, I noticed your PR after I created mine.

Regarding using an exponential to get a better approximation, I don't think it really matters for the purposes of this tutorial. But I also don't mind if it remains an exponential.

However, I do think that the 3 tutorials should be consistent with each other, as they seem to be aiming to demonstrate the same thing via different approaches. That is,

  • manual gradient calculation using numpy
  • manual gradient calculation using pytorch
  • automatic gradient calculation using pytorch

All 3 should use either a sine or an exponential, because otherwise they will be inconsistent with each other and their pedagogical value will suffer slightly.
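
For what it's worth, the manual-gradient PyTorch variant reduces to the same loop with the backward pass written out by hand. A hedged sketch in the style of `polynomial_tensor.py` (same caveat as above: an illustration, not the file's exact contents):

```python
import math
import torch

# Same model and data as the autograd sketch above, but the gradients of
# the squared-error loss are computed by hand instead of via backward().
dtype = torch.float
x = torch.linspace(-math.pi, math.pi, 2000, dtype=dtype)
y = torch.sin(x)

a = torch.randn((), dtype=dtype)
b = torch.randn((), dtype=dtype)
c = torch.randn((), dtype=dtype)
d = torch.randn((), dtype=dtype)

learning_rate = 1e-6
for t in range(2000):
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # loss = sum((y_pred - y)^2), so d(loss)/d(y_pred) = 2 * (y_pred - y);
    # the chain rule then gives the gradient for each coefficient.
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d
```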

