
fix(pt, dpmodel): spin model finetune #5281

Merged
njzjz merged 7 commits into deepmodeling:master from iProzd:0302_spin_ft
Mar 6, 2026

Conversation

@iProzd
Collaborator

@iProzd iProzd commented Mar 2, 2026

Summary by CodeRabbit

  • New Features

    • Spin-aware preprocessing for training data, plus bias-adjustment and type-mapping that handle spin variants during finetuning.
  • Tests

    • Added comprehensive unit and end-to-end tests for spin finetuning, bias updates, transformed spin data, default-parameter propagation, and post-finetune inference validation.
  • Refactor

    • Centralized, cached sampler wrapper to standardize sampling and default-parameter injection.
  • Chores

    • Truncated bias logging for more concise diagnostics.

Copilot AI review requested due to automatic review settings March 2, 2026 09:44
@github-actions github-actions bot added the Python label Mar 2, 2026
@dosubot dosubot bot added the bug label Mar 2, 2026
@coderabbitai
Contributor

coderabbitai bot commented Mar 2, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds cached, spin-aware sampler wrappers and centralizes sampler wrapping in BaseAtomicModel; introduces SpinModel wrappers for change_out_bias and change_type_map that preprocess spin and delegate to backbones; trims a training log; and adds comprehensive unit and E2E tests for spin finetuning.

Changes

Cohort / File(s) — Summary

  • Sampler utility — deepmd/pt/model/atomic_model/base_atomic_model.py
    Adds get_default_fparam and _make_wrapped_sampler to produce an lru_cache-wrapped sampler that injects pair_exclude_types, atom_exclude_types, and default fparam into sampled dicts.
  • Atomic-model callers — deepmd/pt/model/atomic_model/dp_atomic_model.py, deepmd/pt/model/atomic_model/linear_atomic_model.py
    Replace inline cached sampler wrappers with self._make_wrapped_sampler(sampled_func) to centralize caching and sample-field injection.
  • Spin models (pt & native) — deepmd/pt/model/model/spin_model.py, deepmd/dpmodel/model/spin_model.py
    Add spin-aware sampler wrapper methods (_get_spin_sampled_func / get_spin_sampled_func) and change_out_bias / change_type_map wrappers that spin-preprocess or extend type maps and delegate to the backbone.
  • Training logging — deepmd/pt/train/training.py
    Trims logged bias arrays to show only the first len(model_type_map) elements in the model_change_out_bias log message.
  • Tests (pt & dpmodel) — source/tests/pt/test_finetune_spin.py, source/tests/common/dpmodel/test_finetune_spin.py
    Adds extensive unit and end-to-end tests covering spin preprocessing, change_out_bias, change_type_map delegation, default fparam injection, and finetune/load validation.
  • Minor import additions — deepmd/dpmodel/model/spin_model.py, deepmd/pt/model/atomic_model/base_atomic_model.py
    Add a functools import to support the lru_cache used by the new sampler wrappers.
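The cached sampler wrapper summarized above could be sketched roughly as follows. This is an illustrative sketch only — the names `make_wrapped_sampler` and `sampled_func` follow the summary, but the actual signature and tensor handling in `base_atomic_model.py` may differ:

```python
import functools

def make_wrapped_sampler(sampled_func, pair_exclude_types, atom_exclude_types,
                         default_fparam=None):
    """Hypothetical sketch: wrap a sampler so results are cached and each
    sampled dict carries exclusion types and a default fparam."""

    @functools.lru_cache
    def wrapped_sampler():
        sampled = sampled_func()
        for sample in sampled:
            sample["pair_exclude_types"] = list(pair_exclude_types)
            sample["atom_exclude_types"] = list(atom_exclude_types)
            # Only fill fparam when the sample does not already provide one.
            if default_fparam is not None and "fparam" not in sample:
                sample["fparam"] = default_fparam
        return sampled

    return wrapped_sampler
```

The lru_cache ensures the (potentially expensive) underlying sampler runs once, with every caller seeing the same injected fields.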

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant SpinModel
    participant SpinSampler as "Spin-wrapped Sampler"
    participant BaseAtomicModel
    participant Backbone as "Backbone Model"

    User->>SpinModel: change_out_bias(merged | lazy samples)
    SpinModel->>SpinModel: _get_spin_sampled_func(sampled_func)
    SpinModel->>SpinSampler: call wrapped sampler (cached)
    SpinSampler->>SpinSampler: apply spin preprocessing (duplicate coords/atypes, remove spin), adjust natoms, preserve extras
    SpinSampler->>BaseAtomicModel: return per-frame dicts
    BaseAtomicModel->>SpinSampler: inject exclusion masks and default fparam per frame (cached)
    SpinModel->>Backbone: change_out_bias(spin-aware sampler)
    Backbone->>Backbone: compute/apply bias and type-map changes
    Backbone-->>SpinModel: updated bias/state
    SpinModel-->>User: return

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Suggested reviewers

  • njzjz
  • wanghan-iapcm
  • anyangml
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 57.45%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title 'spin model finetune' accurately describes the main objective of the changeset, which adds finetune support to spin models across multiple modules.



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
source/tests/pt/test_finetune_spin.py (1)

117-120: Make temp model cleanup failure-safe.

If an assertion fails before the final os.unlink(...), the temporary file is leaked. Wrap this section in try/finally and close the handle explicitly.

🧹 Proposed fix
-        tmp_model = tempfile.NamedTemporaryFile(delete=False, suffix=".pth")
-        torch.jit.save(dp, tmp_model.name)
-        dp = DeepEval(tmp_model.name)
+        tmp_model = tempfile.NamedTemporaryFile(delete=False, suffix=".pth")
+        tmp_model_path = tmp_model.name
+        tmp_model.close()
+        try:
+            torch.jit.save(dp, tmp_model_path)
+            dp = DeepEval(tmp_model_path)
+            # ... existing assertions / checks ...
+        finally:
+            os.unlink(tmp_model_path)
@@
-        os.unlink(tmp_model.name)

Also applies to: 159-159

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/pt/test_finetune_spin.py` around lines 117 - 120, Wrap the
temporary model creation and DeepEval loading in a try/finally so the
NamedTemporaryFile handle is closed and the file removed even if assertions
fail: after creating tmp_model and saving with torch.jit.save (same block that
constructs dp = torch.jit.script(model), tmp_model = NamedTemporaryFile(...),
torch.jit.save(...), dp = DeepEval(...)), call tmp_model.close() in the try body
as appropriate and ensure os.unlink(tmp_model.name) is called in the finally;
apply the same pattern to the other occurrence around the DeepEval usage at the
later block referenced in the comment.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@deepmd/pt/model/atomic_model/base_atomic_model.py`:
- Around line 187-196: The current code assumes sampled is non-empty and that
every sample lacks "fparam"; update the logic to first return or skip when
sampled is empty, then fetch default_fparam = self.get_default_fparam() (if
self.has_default_fparam() and default_fparam is not None) and iterate over each
sample in sampled, checking per-sample whether "find_fparam" or "fparam" exists
(e.g., if "find_fparam" not in sample and "fparam" not in sample) before
injecting sample["fparam"] = default_fparam.repeat(nframe, 1) using that
sample's nframe (sample["atype"].shape[0]); this ensures no index error on empty
inputs and only fills missing fparam on a per-sample basis.
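The per-sample guard described in this comment could be sketched as below. This is illustrative only — the real method in `base_atomic_model.py` operates on torch tensors and the model's own `has_default_fparam`/`get_default_fparam` accessors, and the helper name here is hypothetical:

```python
def inject_default_fparam(sampled, default_fparam):
    """Hypothetical sketch: fill a default fparam per sample, skipping
    empty inputs and samples that already carry fparam information."""
    if not sampled or default_fparam is None:
        return sampled  # nothing to do on empty input or without a default
    for sample in sampled:
        if "find_fparam" in sample or "fparam" in sample:
            continue  # this sample already provides fparam
        nframes = len(sample["atype"])  # per-sample frame count
        # Repeat the default once per frame (stand-in for tensor repeat).
        sample["fparam"] = [list(default_fparam) for _ in range(nframes)]
    return sampled
```

The key points match the comment: no indexing into an empty list, and the frame count comes from each sample rather than from the first one.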

In `@deepmd/pt/model/model/spin_model.py`:
- Around line 391-456: There are four Ruff issues to fix: replace unused
unpacked variables by prefixing with underscore (change occurrences like
"nframes, nloc, nnei = ..." to "nframes, nloc, _ = ..." or if nframes is unused
change to "_, nloc, nnei = ..." and for the other two unpackings change
"nframes, nloc = ..." to "_, nloc = ..." wherever they occur), and move the
multi-line exception message so it is passed as the exception constructor
argument instead of being a standalone string (i.e., change the TRY003 pattern
to raise <ExceptionType>(multi-line_message) inside the function that currently
builds the message). After applying these renames and the exception fix, run
ruff check deepmd/pt/model/model/spin_model.py to verify no lint errors remain.

---

Nitpick comments:
In `@source/tests/pt/test_finetune_spin.py`:
- Around line 117-120: Wrap the temporary model creation and DeepEval loading in
a try/finally so the NamedTemporaryFile handle is closed and the file removed
even if assertions fail: after creating tmp_model and saving with torch.jit.save
(same block that constructs dp = torch.jit.script(model), tmp_model =
NamedTemporaryFile(...), torch.jit.save(...), dp = DeepEval(...)), call
tmp_model.close() in the try body as appropriate and ensure
os.unlink(tmp_model.name) is called in the finally; apply the same pattern to
the other occurrence around the DeepEval usage at the later block referenced in
the comment.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a3db25a and 8b92798.

📒 Files selected for processing (7)
  • deepmd/dpmodel/model/spin_model.py
  • deepmd/pt/model/atomic_model/base_atomic_model.py
  • deepmd/pt/model/atomic_model/dp_atomic_model.py
  • deepmd/pt/model/atomic_model/linear_atomic_model.py
  • deepmd/pt/model/model/spin_model.py
  • deepmd/pt/train/training.py
  • source/tests/pt/test_finetune_spin.py

Contributor

Copilot AI left a comment


Pull request overview

Adds spin-model finetuning support (PT + dpmodel) by ensuring finetune sampling paths apply spin preprocessing and consistently wrap samplers (exclusion types + default fparam), plus introduces new regression tests for the spin finetune workflow.

Changes:

  • Add spin finetune test suite covering change_out_bias, spin-sampler transformation, e2e finetune load, and spin+default_fparam.
  • Centralize “wrapped sampler” creation in BaseAtomicModel._make_wrapped_sampler() and reuse it from PT atomic models.
  • Add spin-model finetune helpers (get_spin_sampled_func, change_out_bias, change_type_map) in PT and dpmodel spin wrappers; adjust bias-change logging output.

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 4 comments.

Show a summary per file

  • source/tests/pt/test_finetune_spin.py — New tests exercising spin finetune workflows, including default fparam injection.
  • deepmd/pt/train/training.py — Adjusts output-bias change log formatting (now truncates the printed bias to the real type-map length).
  • deepmd/pt/model/model/spin_model.py — Adds a spin-aware sampler wrapper and routes change_out_bias/change_type_map through spin preprocessing and type-map expansion.
  • deepmd/pt/model/atomic_model/base_atomic_model.py — Introduces _make_wrapped_sampler() (cached sampler + exclusion types + default fparam injection).
  • deepmd/pt/model/atomic_model/dp_atomic_model.py — Reuses _make_wrapped_sampler() for stats computation (removes duplicated wrapping logic).
  • deepmd/pt/model/atomic_model/linear_atomic_model.py — Reuses _make_wrapped_sampler() for output-stat computation (removes duplicated wrapping logic).
  • deepmd/dpmodel/model/spin_model.py — Mirrors the PT spin finetune helpers on the dpmodel spin wrapper.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
source/tests/pt/test_finetune_spin.py (1)

118-120: Always clean up the temp model file, including failure paths.

If an assertion fails before Line 159, the temp .pth file is leaked. Register cleanup immediately after creation.

♻️ Suggested refactor
-        tmp_model = tempfile.NamedTemporaryFile(delete=False, suffix=".pth")
-        torch.jit.save(dp, tmp_model.name)
-        dp = DeepEval(tmp_model.name)
+        tmp_model = tempfile.NamedTemporaryFile(delete=False, suffix=".pth")
+        tmp_model_path = tmp_model.name
+        tmp_model.close()
+        self.addCleanup(
+            lambda: os.path.exists(tmp_model_path) and os.unlink(tmp_model_path)
+        )
+        torch.jit.save(dp, tmp_model_path)
+        dp = DeepEval(tmp_model_path)
@@
-        os.unlink(tmp_model.name)

Also applies to: 159-159

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/pt/test_finetune_spin.py` around lines 118 - 120, The temp model
file created by tempfile.NamedTemporaryFile (tmp_model) is not guaranteed to be
removed if an assertion fails; register cleanup immediately after creation by
scheduling removal (e.g., store tmp_model.name and call os.unlink or use pytest
request.addfinalizer) so the file is always deleted even on failures; update the
block around torch.jit.save and DeepEval(tmp_model.name) to ensure
tmp_model.name is unlinked in all paths (referencing tmp_model,
tempfile.NamedTemporaryFile, torch.jit.save, and DeepEval).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@deepmd/dpmodel/model/spin_model.py`:
- Line 373: The call is using a nonexistent .atomic_model hop and will raise
AttributeError; modify the return in the function containing this line to call
_make_wrapped_sampler directly on backbone_model (i.e., replace the
.atomic_model access), so invoke
backbone_model._make_wrapped_sampler(spin_sampled_func) instead; locate the
offending line in spin_model.py where return
self.backbone_model.atomic_model._make_wrapped_sampler(spin_sampled_func)
appears (inside the sampler/wrapper setup) and change it to call
_make_wrapped_sampler on the DPAtomicModel-typed backbone_model.

---

Nitpick comments:
In `@source/tests/pt/test_finetune_spin.py`:
- Around line 118-120: The temp model file created by
tempfile.NamedTemporaryFile (tmp_model) is not guaranteed to be removed if an
assertion fails; register cleanup immediately after creation by scheduling
removal (e.g., store tmp_model.name and call os.unlink or use pytest
request.addfinalizer) so the file is always deleted even on failures; update the
block around torch.jit.save and DeepEval(tmp_model.name) to ensure
tmp_model.name is unlinked in all paths (referencing tmp_model,
tempfile.NamedTemporaryFile, torch.jit.save, and DeepEval).

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8b92798 and a08b887.

📒 Files selected for processing (4)
  • deepmd/dpmodel/model/spin_model.py
  • deepmd/pt/model/atomic_model/dp_atomic_model.py
  • deepmd/pt/model/atomic_model/linear_atomic_model.py
  • source/tests/pt/test_finetune_spin.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • deepmd/pt/model/atomic_model/linear_atomic_model.py

Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
deepmd/pt/model/model/spin_model.py (1)

394-415: Avoid duplicating spin-sample preprocessing logic.

This block duplicates the same sample transformation already present in compute_or_load_stat (Lines 481-503). Consider extracting a shared internal helper to prevent behavioral drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pt/model/model/spin_model.py` around lines 394 - 415, The spin-sample
preprocessing in spin_sampled_func duplicates logic from compute_or_load_stat;
extract a shared helper (e.g., _prepare_spin_sample or prepare_spin_sample) that
takes a sample dict and returns the transformed dict by calling
self.process_spin_input and handling "natoms" and passthrough keys, then replace
the body of spin_sampled_func and the equivalent block in compute_or_load_stat
to call this helper; reference functions: spin_sampled_func,
compute_or_load_stat, process_spin_input, sampled_func when updating callers.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@deepmd/pt/model/model/spin_model.py`:
- Around line 394-415: The spin-sample preprocessing in spin_sampled_func
duplicates logic from compute_or_load_stat; extract a shared helper (e.g.,
_prepare_spin_sample or prepare_spin_sample) that takes a sample dict and
returns the transformed dict by calling self.process_spin_input and handling
"natoms" and passthrough keys, then replace the body of spin_sampled_func and
the equivalent block in compute_or_load_stat to call this helper; reference
functions: spin_sampled_func, compute_or_load_stat, process_spin_input,
sampled_func when updating callers.
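The shared helper suggested in this comment might look like the sketch below. It is purely illustrative: the real code derives virtual-atom coordinates from the spin vectors via process_spin_input, whereas this sketch only shows the shape transformation (duplicate coords/atypes, remove spin, double natoms, preserve passthrough keys) using plain Python containers:

```python
def prepare_spin_sample(sample, ntypes_real):
    """Hypothetical sketch of a shared spin-sample preprocessing helper:
    expand a raw sample into the real-atoms-plus-virtual-spin-atoms layout."""
    out = dict(sample)  # shallow copy; passthrough keys are preserved
    atype = out["atype"]
    out["coord"] = out["coord"] + out["coord"]  # duplicate positions for virtual atoms
    out["atype"] = atype + [t + ntypes_real for t in atype]  # shifted virtual types
    out.pop("spin", None)  # spin is folded into the expanded layout
    if "natoms" in out:
        out["natoms"] = 2 * out["natoms"]  # real + virtual atoms
    return out
```

With a helper like this, both spin_sampled_func and compute_or_load_stat would call one function, so the two code paths cannot drift apart.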

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a08b887 and eaa0dea.

📒 Files selected for processing (2)
  • deepmd/dpmodel/model/spin_model.py
  • deepmd/pt/model/model/spin_model.py

@codecov

codecov bot commented Mar 2, 2026

Codecov Report

❌ Patch coverage is 95.89041% with 3 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.31%. Comparing base (c28419c) to head (c2c9029).
⚠️ Report is 3 commits behind head on master.

Files with missing lines:
  • deepmd/pt/model/model/spin_model.py — patch 91.30%, 2 lines missing ⚠️
  • deepmd/pt/model/atomic_model/base_atomic_model.py — patch 95.65%, 1 line missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##           master    #5281   +/-   ##
=======================================
  Coverage   82.30%   82.31%           
=======================================
  Files         767      767           
  Lines       76984    77026   +42     
  Branches     3659     3659           
=======================================
+ Hits        63360    63402   +42     
  Misses      12453    12453           
  Partials     1171     1171           

☔ View full report in Codecov by Sentry.

@iProzd iProzd requested review from njzjz and wanghan-iapcm March 2, 2026 11:51
@iProzd iProzd requested a review from wanghan-iapcm March 4, 2026 13:35
@wanghan-iapcm wanghan-iapcm enabled auto-merge March 5, 2026 15:04
@wanghan-iapcm wanghan-iapcm added this pull request to the merge queue Mar 5, 2026
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Mar 5, 2026
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
source/tests/pt/test_finetune_spin.py (1)

316-323: test_finetune_change_type_map() never runs in this file.

The test returns immediately when self.mixed_types is false, and the only SpinFinetuneE2ETest subclass here sets self.mixed_types = False. That leaves the PT change_type_map finetune path introduced in this PR without an actual end-to-end check. Please add a mixed-types fixture/class or parametrize one.

Also applies to: 365-406

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/pt/test_finetune_spin.py` around lines 316 - 323, The test
test_finetune_change_type_map never executes because SpinFinetuneE2ETest
subclasses set self.mixed_types = False; update the test harness so at least one
test class/fixture runs with mixed_types = True (or parametrize the test to run
both True/False). Specifically, either add a new subclass of SpinFinetuneE2ETest
that sets mixed_types = True (or modify the existing test collection to
parametrize mixed_types) so test_finetune_change_type_map and the other affected
tests (lines ~365-406) exercise the mixed-types PT finetune path; ensure the new
class/param uses the same setup/teardown and naming conventions so the test
runner picks it up.
source/tests/common/dpmodel/test_finetune_spin.py (1)

218-229: This assertion doesn't verify the _spin map expansion.

DescrptSeA.change_type_map() raises NotImplementedError regardless of how SpinModel.change_type_map() constructs the forwarded map, so this passes even if the wrapper never appends the virtual types or forwards the wrong order. Please assert the expanded backbone map directly, or use a remappable descriptor, so the new delegation logic is actually covered.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/common/dpmodel/test_finetune_spin.py` around lines 218 - 229,
The test test_change_type_map is currently ineffective because
DescrptSeA.change_type_map always raises NotImplementedError, so replace or wrap
the backbone used by self.model (SpinModel) with a spy/mocking stub that records
the argument passed to SpinModel.change_type_map, then call
self.model.change_type_map(new_type_map) and assert the recorded argument equals
the expanded map (each original type with a corresponding "_spin" virtual type
appended in correct order, e.g., ["O","O_spin","Ni","Ni_spin"]), or
alternatively use a remappable descriptor backend that doesn't raise and assert
its received map; update references to SpinModel.change_type_map and the
backbone stub/descriptor accordingly so the delegation and suffixing are
actually verified.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@deepmd/pt/model/model/spin_model.py`:
- Around line 449-458: change_type_map currently updates the backbone_model but
leaves SpinModel's spin-related metadata stale; update ntpes_real,
virtual_scale_mask, and spin_mask inside change_type_map after calling
self.backbone_model.change_type_map so the wrapper stays consistent—preferably
reload these from model_with_new_type_stat when provided, otherwise recompute
them from the updated backbone_model (the same logic used in __init__ that
process_spin_input/process_spin_output expect) to avoid mismatched type ids
after reordering or adding types.

In `@deepmd/pt/train/training.py`:
- Around line 1836-1838: The log currently flattens old_bias/new_bias before
trimming, which drops extra components for multi-output types; instead, convert
biases to numpy with to_numpy_array(old_bias)/to_numpy_array(new_bias), slice
the type axis first using [: len(model_type_map)] (because
SpinModel.get_type_map exposes real types), then reshape/flatten for
logging—apply the same change wherever the f-string referencing model_type_map,
old_bias, new_bias appears so the full per-type multi-output bias components are
preserved in the log.
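The ordering fix described here — slice the type axis before flattening — can be sketched with NumPy. The helper name is hypothetical; the real code in training.py formats the bias inside an f-string rather than through a function:

```python
import numpy as np

def format_bias_for_log(bias, model_type_map):
    """Hypothetical sketch: trim the bias to the real type-map length
    *before* flattening, so multi-output components per type survive."""
    arr = np.asarray(bias)                 # shape: (ntypes_total, nout)
    trimmed = arr[: len(model_type_map)]   # keep only real (non-spin) types
    return trimmed.reshape(-1)             # flatten after trimming

bias = np.arange(8.0).reshape(4, 2)        # 4 types x 2 output components
print(format_bias_for_log(bias, ["O", "Ni"]))  # first 2 types, both components kept
```

Flattening first and then taking `[: len(model_type_map)]` would instead keep only the first two scalar values, dropping the second component of every type.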

---

Nitpick comments:
In `@source/tests/common/dpmodel/test_finetune_spin.py`:
- Around line 218-229: The test test_change_type_map is currently ineffective
because DescrptSeA.change_type_map always raises NotImplementedError, so replace
or wrap the backbone used by self.model (SpinModel) with a spy/mocking stub that
records the argument passed to SpinModel.change_type_map, then call
self.model.change_type_map(new_type_map) and assert the recorded argument equals
the expanded map (each original type with a corresponding "_spin" virtual type
appended in correct order, e.g., ["O","O_spin","Ni","Ni_spin"]), or
alternatively use a remappable descriptor backend that doesn't raise and assert
its received map; update references to SpinModel.change_type_map and the
backbone stub/descriptor accordingly so the delegation and suffixing are
actually verified.

In `@source/tests/pt/test_finetune_spin.py`:
- Around line 316-323: The test test_finetune_change_type_map never executes
because SpinFinetuneE2ETest subclasses set self.mixed_types = False; update the
test harness so at least one test class/fixture runs with mixed_types = True (or
parametrize the test to run both True/False). Specifically, either add a new
subclass of SpinFinetuneE2ETest that sets mixed_types = True (or modify the
existing test collection to parametrize mixed_types) so
test_finetune_change_type_map and the other affected tests (lines ~365-406)
exercise the mixed-types PT finetune path; ensure the new class/param uses the
same setup/teardown and naming conventions so the test runner picks it up.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 1ca00612-129e-4380-a405-e2a648336f16

📥 Commits

Reviewing files that changed from the base of the PR and between eaa0dea and 2c4294f.

📒 Files selected for processing (8)
  • deepmd/dpmodel/model/spin_model.py
  • deepmd/pt/model/atomic_model/base_atomic_model.py
  • deepmd/pt/model/atomic_model/dp_atomic_model.py
  • deepmd/pt/model/atomic_model/linear_atomic_model.py
  • deepmd/pt/model/model/spin_model.py
  • deepmd/pt/train/training.py
  • source/tests/common/dpmodel/test_finetune_spin.py
  • source/tests/pt/test_finetune_spin.py
🚧 Files skipped from review as they are similar to previous changes (3)
  • deepmd/pt/model/atomic_model/base_atomic_model.py
  • deepmd/dpmodel/model/spin_model.py
  • deepmd/pt/model/atomic_model/linear_atomic_model.py

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@source/tests/common/dpmodel/test_finetune_spin.py`:
- Around line 99-107: The fixture currently sets "find_energy" and "find_fparam"
as np.float32 which hides type-sensitive bugs; change those entries in the test
fixture object to boolean sentinels (prefer np.bool_ to match production) so the
wrapped sampler receives booleans — update the dict keys "find_energy" and
"find_fparam" in test_finetune_spin.py to use np.bool_(True/False) (or
True/False) instead of np.float32(1.0)/np.float32(0.0).
- Around line 219-230: The test currently only checks that change_type_map
raises NotImplementedError and misses verifying that SpinModel expands types
with "_spin" before delegating; update test_change_type_map to spy/mock the
backbone change_type_map (e.g., replace self.model.backbone.change_type_map with
a MagicMock or patch on backbone_model.change_type_map), call
self.model.change_type_map(["O","Ni"]), assert the mock was called once with
["O_spin","Ni_spin"], and still assert that a NotImplementedError is raised (or
re-raise the backbone's NotImplementedError from the mock) to preserve the
original exception expectation; reference SpinModel.change_type_map and the
backbone_model.change_type_map when locating code to change.
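The spy pattern suggested here can be shown with a self-contained sketch. FakeSpinModel stands in for the real SpinModel, and the expansion convention used below (each real type followed by its "_spin" variant) is only the example from this comment — the authoritative ordering is whatever SpinModel.change_type_map actually constructs:

```python
from unittest import mock

class FakeSpinModel:
    """Hypothetical stand-in for SpinModel, used to demonstrate the
    spy-based delegation test suggested above."""

    def __init__(self, backbone):
        self.backbone_model = backbone

    def change_type_map(self, type_map):
        # Assumed expansion: each real type followed by its spin variant.
        expanded = [t for name in type_map for t in (name, name + "_spin")]
        self.backbone_model.change_type_map(expanded)

# Spy on the backbone and assert the forwarded, expanded map directly.
backbone = mock.MagicMock()
model = FakeSpinModel(backbone)
model.change_type_map(["O", "Ni"])
backbone.change_type_map.assert_called_once_with(["O", "O_spin", "Ni", "Ni_spin"])
```

Because the assertion inspects the argument the backbone received, the test fails if the wrapper forgets the virtual types or forwards them in the wrong order — unlike a bare assertRaises(NotImplementedError), which passes regardless.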

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: d0e69c2a-9883-4529-a6f5-a026cf11ce6b

📥 Commits

Reviewing files that changed from the base of the PR and between 2c4294f and c2c9029.

📒 Files selected for processing (1)
  • source/tests/common/dpmodel/test_finetune_spin.py

@njzjz njzjz enabled auto-merge March 6, 2026 13:33
@njzjz njzjz closed this Mar 6, 2026
auto-merge was automatically disabled March 6, 2026 13:34

Pull request was closed

@njzjz njzjz reopened this Mar 6, 2026
@njzjz njzjz enabled auto-merge March 6, 2026 13:34
@njzjz njzjz added this pull request to the merge queue Mar 6, 2026
Merged via the queue into deepmodeling:master with commit 8f2b3c9 Mar 6, 2026
126 checks passed
