doc(pd): update PaddlePaddle version to 3.3.0 and 3.4.0 (develop) #5306
njzjz merged 3 commits into deepmodeling:master from
Conversation
Pull request overview
Updates PaddlePaddle version references in the installation docs to the 3.3.0 stable release, and bumps the PaddlePaddle nightly/dev pins used by CI workflows.
Changes:
- Update PaddlePaddle install commands in docs from 3.1.1 to 3.3.0 (CPU and cu126 GPU).
- Adjust the commented nightly install examples in docs to include `-U`.
- Update GitHub Actions CI to install PaddlePaddle nightly dev builds (now pinned to 3.4.0.dev20260310).
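For orientation, the documented commands after this bump look roughly like the sketch below. The index URLs follow PaddlePaddle's usual package-index layout and are an assumption here, not copied from the diff; the authoritative commands are in the updated install docs.

```shell
# Stable 3.3.0 pins (CPU build, and GPU build for CUDA 12.6):
pip install paddlepaddle==3.3.0
pip install paddlepaddle-gpu==3.3.0 -i https://www.paddlepaddle.org.cn/packages/stable/cu126/

# Nightly example now passes -U so an existing install is upgraded
# to the newest dev build instead of being left untouched:
pip install -U --pre paddlepaddle -i https://www.paddlepaddle.org.cn/packages/nightly/cpu/
```

Without `-U`, pip considers an already-installed `paddlepaddle` satisfied and skips the nightly upgrade, which is why the docs now add the flag.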
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| doc/install/install-from-source.md | Bumps documented PaddlePaddle install version to 3.3.0 and tweaks nightly example commands. |
| doc/install/easy-install.md | Bumps documented PaddlePaddle install version to 3.3.0 and tweaks nightly example commands. |
| .github/workflows/test_python.yml | Updates CI PaddlePaddle CPU nightly/dev pin to 3.4.0.dev20260310. |
| .github/workflows/test_cuda.yml | Updates CI PaddlePaddle GPU nightly/dev pin to 3.4.0.dev20260310. |
📝 Walkthrough

Bumps PaddlePaddle versions in CI workflows and updates installation docs to reference newer stable/nightly package versions and pip flags; also enables AdamW optimizer handling in the training logic. No public API signatures were changed.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ✅ 2 passed, ❌ 1 failed (1 warning)
Codecov Report: ✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master    #5306      +/-   ##
==========================================
- Coverage   82.28%   82.28%   -0.01%
==========================================
  Files         773      773
  Lines       77330    77330
  Branches     3660     3660
==========================================
- Hits        63631    63629       -2
  Misses      12528    12528
- Partials     1171     1173       +2
```
…into update_pd_doc_3.3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
deepmd/pd/train/training.py (2)
Lines 650-669: ⚠️ Potential issue | 🔴 Critical: Wrong optimizer class instantiated for AdamW.
This block handles both "Adam" and "AdamW" in the conditional, but line 658 always instantiates `paddle.optimizer.Adam`. When the user specifies `opt_type="AdamW"`, they expect decoupled weight decay behavior (AdamW), not L2 regularization (Adam with `weight_decay`).

🐛 Proposed fix to instantiate the correct optimizer:
```diff
 if self.opt_type in ["Adam", "AdamW"]:
     self.scheduler = paddle.optimizer.lr.LambdaDecay(
         learning_rate=self.lr_schedule.start_lr,
         lr_lambda=lambda step: (
             self.lr_schedule.value(step + self.start_step)
             / self.lr_schedule.start_lr
         ),
     )
-    self.optimizer = paddle.optimizer.Adam(
+    optimizer_cls = (
+        paddle.optimizer.Adam
+        if self.opt_type == "Adam"
+        else paddle.optimizer.AdamW
+    )
+    self.optimizer = optimizer_cls(
         learning_rate=self.scheduler,
         parameters=self.wrapper.parameters(),
         beta1=float(self.opt_param["adam_beta1"]),
         beta2=float(self.opt_param["adam_beta2"]),
         weight_decay=float(self.opt_param["weight_decay"]),
     )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/pd/train/training.py` around lines 650 - 669, The code treats "Adam" and "AdamW" the same but always instantiates paddle.optimizer.Adam; update the branch so that when self.opt_type == "AdamW" you instantiate paddle.optimizer.AdamW (keeping the same learning_rate=self.scheduler, parameters=self.wrapper.parameters(), beta1=float(self.opt_param["adam_beta1"]), beta2=float(self.opt_param["adam_beta2"]), weight_decay=float(self.opt_param["weight_decay"])), while retaining the existing scheduler assignment, optimizer_state_dict restore (self.optimizer.set_state_dict(...)) and self.scheduler.last_epoch adjustment; ensure the conditional distinguishes between "Adam" and "AdamW" and creates the corresponding optimizer class instead of always calling paddle.optimizer.Adam.
Lines 168-172: ⚠️ Potential issue | 🔴 Critical: AdamW is validated but never actually instantiated.
The `get_opt_param` function now accepts "AdamW" as a valid optimizer type, and the conditional check at line 650 allows both "Adam" and "AdamW". However, the optimizer instantiation always creates `paddle.optimizer.Adam` regardless of whether the user specified "Adam" or "AdamW". This silently ignores the user's optimizer choice.

The PT backend implements this correctly at line 801 with conditional selection:

```python
cls = torch.optim.Adam if self.opt_type == "Adam" else torch.optim.AdamW
```

Update the PD backend optimizer instantiation to match:

- Conditionally select between `paddle.optimizer.Adam` and `paddle.optimizer.AdamW` based on `self.opt_type`.
- Update the documentation in `deepmd/utils/argcheck.py` (around line 3027) to reflect that Paddle now supports both "Adam" and "AdamW".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/pd/train/training.py` around lines 168 - 172, get_opt_param already accepts "AdamW" but the PD optimizer creation still always instantiates paddle.optimizer.Adam; change the instantiation logic (where optimizer is created using self.opt_type, e.g., in the training class that currently calls paddle.optimizer.Adam) to choose paddle.optimizer.Adam if self.opt_type == "Adam" else paddle.optimizer.AdamW so AdamW is actually used when requested, and update the user-facing enum/documentation in deepmd/utils/argcheck.py (the optimizer choices text around the optimizer argument) to list both "Adam" and "AdamW".
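Both findings reduce to the same class-selection pattern: pick the optimizer class from the validated `opt_type` string, then pass the shared keyword arguments once. A minimal framework-agnostic sketch of that pattern follows; the `Adam`/`AdamW` classes here are stand-in stubs for `paddle.optimizer.Adam` and `paddle.optimizer.AdamW`, not the real Paddle API.

```python
# Sketch of the class-selection pattern the review asks for.
# Adam/AdamW are stand-in stubs; the real fix would select between
# paddle.optimizer.Adam and paddle.optimizer.AdamW the same way.

class Adam:
    def __init__(self, learning_rate, weight_decay=0.0):
        self.learning_rate = learning_rate
        self.weight_decay = weight_decay  # folded in as L2 regularization

class AdamW(Adam):
    # Decoupled weight decay: applied directly to the parameters
    # rather than added to the gradient.
    pass

def build_optimizer(opt_type: str, learning_rate: float, weight_decay: float = 0.0):
    """Select the optimizer class from the validated opt_type string."""
    if opt_type not in ("Adam", "AdamW"):
        raise ValueError(f"Unsupported optimizer: {opt_type}")
    optimizer_cls = Adam if opt_type == "Adam" else AdamW
    return optimizer_cls(learning_rate=learning_rate, weight_decay=weight_decay)

opt = build_optimizer("AdamW", learning_rate=1e-3, weight_decay=1e-2)
print(type(opt).__name__)  # AdamW
```

The point of the pattern is that the keyword arguments are written once and only the class varies, so "AdamW" can no longer silently fall through to plain Adam.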
📒 Files selected for processing (1)
deepmd/pd/train/training.py
Fixes an issue introduced by PR #5157.
This pull request updates the PaddlePaddle dependency versions across the codebase to ensure compatibility with the latest releases. It also improves the installation instructions in the documentation to reflect these updates and to recommend best practices for installing nightly builds.
Dependency version updates:
- Bump the `paddlepaddle-gpu` dependency in the CUDA test workflow to version `3.4.0.dev20260310` (`.github/workflows/test_cuda.yml`).
- Bump the `paddlepaddle` dependency in the Python test workflow to version `3.4.0.dev20260310` (`.github/workflows/test_python.yml`).

Documentation improvements:
- Update `doc/install/easy-install.md` and `doc/install/install-from-source.md` to reference `paddlepaddle-gpu==3.3.0` and `paddlepaddle==3.3.0` as the latest stable release versions [1] [2] [3].
- Recommend the `-U` (upgrade) flag in the nightly install examples for both GPU and CPU versions, ensuring users get the latest nightly build [1] [2] [3].

Summary by CodeRabbit
- New Features
- Documentation
- Chores