
doc(pd): update PaddlePaddle version to 3.3.0 and 3.4.0(develop)#5306

Merged
njzjz merged 3 commits into deepmodeling:master from HydrogenSulfate:update_pd_doc_3.3
Mar 11, 2026

Conversation

@HydrogenSulfate
Collaborator

@HydrogenSulfate HydrogenSulfate commented Mar 11, 2026

Fixes an issue introduced by PR #5157.

This pull request updates the PaddlePaddle dependency versions across the codebase to ensure compatibility with the latest releases. It also improves the installation instructions in the documentation to reflect these updates and to recommend best practices for installing nightly builds.

Dependency version updates:

  • Updated the paddlepaddle-gpu dependency in the CUDA test workflow to version 3.4.0.dev20260310 (.github/workflows/test_cuda.yml).
  • Updated the paddlepaddle dependency in the Python test workflow to version 3.4.0.dev20260310 (.github/workflows/test_python.yml).

Documentation improvements:

  • Updated installation instructions in doc/install/easy-install.md and doc/install/install-from-source.md to reference paddlepaddle-gpu==3.3.0 and paddlepaddle==3.3.0 as the latest stable release versions.
  • Modified the nightly-build installation commands in the documentation to recommend the -U (upgrade) flag for both the GPU and CPU packages, so users always get the latest nightly build.
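As a rough illustration of the documented change, the recommended nightly install now includes the -U flag. The index URL below is an assumption based on PaddlePaddle's public nightly channel; verify it against the official PaddlePaddle install docs.

```shell
# Sketch of the updated nightly install command (CPU variant).
# --pre lets pip select pre-release (dev) builds; -U upgrades an
# already-installed nightly pin to the latest available build.
pip install -U --pre paddlepaddle -i https://www.paddlepaddle.org.cn/packages/nightly/cpu/
```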

Summary by CodeRabbit

  • New Features

    • Training now supports the AdamW optimizer alongside Adam.
  • Documentation

    • Updated installation guides with newer PaddlePaddle stable and nightly package versions and improved nightly/pre-release pip syntax.
  • Chores

    • Bumped PaddlePaddle dependency versions used in CI workflows (Python and CUDA test pipelines).

Copilot AI review requested due to automatic review settings March 11, 2026 05:54
@github-actions github-actions bot added the Docs label Mar 11, 2026
Contributor

Copilot AI left a comment


Pull request overview

Updates PaddlePaddle version references in the installation docs to the 3.3.0 stable release, and bumps the PaddlePaddle nightly/dev pins used by CI workflows.

Changes:

  • Update PaddlePaddle install commands in docs from 3.1.1 to 3.3.0 (CPU and cu126 GPU).
  • Adjust the commented nightly install examples in docs to include -U.
  • Update GitHub Actions CI to install PaddlePaddle nightly dev builds (now pinned to 3.4.0.dev20260310).

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.

File Description
doc/install/install-from-source.md Bumps documented PaddlePaddle install version to 3.3.0 and tweaks nightly example commands.
doc/install/easy-install.md Bumps documented PaddlePaddle install version to 3.3.0 and tweaks nightly example commands.
.github/workflows/test_python.yml Updates CI PaddlePaddle CPU nightly/dev pin to 3.4.0.dev20260310.
.github/workflows/test_cuda.yml Updates CI PaddlePaddle GPU nightly/dev pin to 3.4.0.dev20260310.


@coderabbitai
Contributor

coderabbitai bot commented Mar 11, 2026

📝 Walkthrough

Walkthrough

Bumps PaddlePaddle versions in CI workflows and updates installation docs to reference newer stable/nightly package versions and pip flags; also enables AdamW optimizer handling in the training logic. No public API signatures were changed.

Changes

Cohort / File(s) Summary
CI Workflows
.github/workflows/test_cuda.yml, .github/workflows/test_python.yml
Updated CI install steps to use PaddlePaddle/nightly 3.4.0.dev20260310 (was 3.3.0.dev20251204) and updated the wheel/package version strings.
Installation Documentation
doc/install/easy-install.md, doc/install/install-from-source.md
Updated stable Paddle versions from 3.1.1 → 3.3.0; adjusted nightly/pre-release pip commands to include -U and use updated nightly URLs/flags, with minor formatting changes.
Training Logic
deepmd/pd/train/training.py
Extended optimizer checks to treat AdamW the same as Adam (conditions changed to include AdamW) so AdamW is supported in training flow.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested labels

Python

Suggested reviewers

  • njzjz
  • wanghan-iapcm
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 50.00% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Title check ✅ Passed The title accurately reflects the main changes in the PR, which focus on updating PaddlePaddle versions to 3.3.0 (stable) and 3.4.0 (develop) across documentation and CI workflows.
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@codecov

codecov bot commented Mar 11, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 82.28%. Comparing base (f22456f) to head (6d05344).
⚠️ Report is 2 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5306      +/-   ##
==========================================
- Coverage   82.28%   82.28%   -0.01%     
==========================================
  Files         773      773              
  Lines       77330    77330              
  Branches     3660     3660              
==========================================
- Hits        63631    63629       -2     
  Misses      12528    12528              
- Partials     1171     1173       +2     


@HydrogenSulfate HydrogenSulfate changed the title doc(pd): tpdate PaddlePaddle version to 3.3.0 doc(pd): tpdate PaddlePaddle version to 3.3.0 and 3.4.0(develop) Mar 11, 2026
@HydrogenSulfate HydrogenSulfate changed the title doc(pd): tpdate PaddlePaddle version to 3.3.0 and 3.4.0(develop) doc(pd): update PaddlePaddle version to 3.3.0 and 3.4.0(develop) Mar 11, 2026
Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
deepmd/pd/train/training.py (2)

650-669: ⚠️ Potential issue | 🔴 Critical

Wrong optimizer class instantiated for AdamW.

This block handles both "Adam" and "AdamW" in the conditional, but line 658 always instantiates paddle.optimizer.Adam. When the user specifies opt_type="AdamW", they expect decoupled weight decay behavior (AdamW), not L2 regularization (Adam with weight_decay).

🐛 Proposed fix to instantiate the correct optimizer
         if self.opt_type in ["Adam", "AdamW"]:
             self.scheduler = paddle.optimizer.lr.LambdaDecay(
                 learning_rate=self.lr_schedule.start_lr,
                 lr_lambda=lambda step: (
                     self.lr_schedule.value(step + self.start_step)
                     / self.lr_schedule.start_lr
                 ),
             )
-            self.optimizer = paddle.optimizer.Adam(
+            optimizer_cls = (
+                paddle.optimizer.Adam if self.opt_type == "Adam" else paddle.optimizer.AdamW
+            )
+            self.optimizer = optimizer_cls(
                 learning_rate=self.scheduler,
                 parameters=self.wrapper.parameters(),
                 beta1=float(self.opt_param["adam_beta1"]),
                 beta2=float(self.opt_param["adam_beta2"]),
                 weight_decay=float(self.opt_param["weight_decay"]),
             )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pd/train/training.py` around lines 650 - 669, The code treats "Adam"
and "AdamW" the same but always instantiates paddle.optimizer.Adam; update the
branch so that when self.opt_type == "AdamW" you instantiate
paddle.optimizer.AdamW (keeping the same learning_rate=self.scheduler,
parameters=self.wrapper.parameters(), beta1=float(self.opt_param["adam_beta1"]),
beta2=float(self.opt_param["adam_beta2"]),
weight_decay=float(self.opt_param["weight_decay"])), while retaining the
existing scheduler assignment, optimizer_state_dict restore
(self.optimizer.set_state_dict(...)) and self.scheduler.last_epoch adjustment;
ensure the conditional distinguishes between "Adam" and "AdamW" and creates the
corresponding optimizer class instead of always calling paddle.optimizer.Adam.

168-172: ⚠️ Potential issue | 🔴 Critical

AdamW is validated but never actually instantiated.

The get_opt_param function now accepts "AdamW" as a valid optimizer type, and the conditional check at line 650 allows both "Adam" and "AdamW". However, the optimizer instantiation always creates paddle.optimizer.Adam regardless of whether the user specified "Adam" or "AdamW". This silently ignores the user's optimizer choice.

The PT backend implements this correctly at line 801 with conditional selection:

cls = torch.optim.Adam if self.opt_type == "Adam" else torch.optim.AdamW

Update the PD backend optimizer instantiation to match:

  1. Conditionally select between paddle.optimizer.Adam and paddle.optimizer.AdamW based on self.opt_type
  2. Update the documentation in deepmd/utils/argcheck.py (around line 3027) to reflect that Paddle now supports both "Adam" and "AdamW"
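The correction both review comments ask for can be sketched with a small, hypothetical helper (make_optimizer is not a name in the codebase; it is shown backend-agnostically so the same selection logic applies to paddle.optimizer or torch.optim):

```python
def make_optimizer(opt_type, optim_module, **kwargs):
    """Pick the optimizer class named by ``opt_type`` from ``optim_module``.

    Hypothetical helper illustrating the reviewers' fix: "Adam" and
    "AdamW" resolve to distinct classes instead of always Adam.
    """
    if opt_type not in ("Adam", "AdamW"):
        raise ValueError(f"unsupported optimizer: {opt_type}")
    # Both Paddle and PyTorch expose classes literally named Adam/AdamW,
    # so the string can be looked up directly on the optimizer module.
    return getattr(optim_module, opt_type)(**kwargs)

# With paddle installed this would be, e.g.:
#   make_optimizer("AdamW", paddle.optimizer,
#                  learning_rate=scheduler,
#                  parameters=wrapper.parameters(),
#                  weight_decay=wd)
```

Keeping the shared scheduler/state-dict handling outside the helper, as the reviewers note, means only the class selection needs to change.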
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pd/train/training.py` around lines 168 - 172, get_opt_param already
accepts "AdamW" but the PD optimizer creation still always instantiates
paddle.optimizer.Adam; change the instantiation logic (where optimizer is
created using self.opt_type, e.g., in the training class that currently calls
paddle.optimizer.Adam) to choose paddle.optimizer.Adam if self.opt_type ==
"Adam" else paddle.optimizer.AdamW so AdamW is actually used when requested, and
update the user-facing enum/documentation in deepmd/utils/argcheck.py (the
optimizer choices text around the optimizer argument) to list both "Adam" and
"AdamW".

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 8d51e7be-22d8-4f3b-ba07-5372056d5af8

📥 Commits

Reviewing files that changed from the base of the PR and between 18e7ac5 and 6d05344.

📒 Files selected for processing (1)
  • deepmd/pd/train/training.py

@njzjz njzjz enabled auto-merge March 11, 2026 17:10
@njzjz njzjz added this pull request to the merge queue Mar 11, 2026
Merged via the queue into deepmodeling:master with commit a8d2a8d Mar 11, 2026
70 checks passed
