
SALM with NeMo Automodel integration for Nemotron Nano V3 LLM backbone#15447

Merged
pzelasko merged 104 commits into main from speechlm2-with-nemo-automodel-merge
Apr 17, 2026
Conversation

@pzelasko (Collaborator) commented Feb 26, 2026

Important

The Update branch button must only be pressed on very rare occasions.
An outdated branch is never blocking the merge of a PR.
Please reach out to the automation team before pressing that button.

What does this PR do?

Add SALMAutomodel — a new SALM variant that uses NeMo Automodel instead of HuggingFace Transformers for the LLM backbone, enabling efficient training and inference of Speech LLMs with MoE architectures (e.g., Nemotron Nano V3 30B/3B) via native FSDP2, TP, EP, and Grouped GEMM support. CP support will be added later and PP support is not planned.

Collection: speechlm2, common

Changelog

  • Add SALMAutomodel model class (nemo/collections/speechlm2/models/salm_automodel.py) — a LightningModule that replaces HF AutoModelForCausalLM with NeMo Automodel's distributed LLM loader, supporting deferred initialization via configure_model() for memory-efficient per-GPU shard loading.
  • Add AutomodelParallelStrategy (nemo/collections/speechlm2/parts/parallel.py) — a custom Lightning ModelParallelStrategy that delegates device mesh creation to NeMo Automodel, supporting FSDP2, TP, PP, CP, EP (MoE), and HSDP.
  • Add native Automodel LoRA support (nemo/collections/speechlm2/parts/automodel_lora.py) — replaces HuggingFace PEFT with Automodel's built-in ModuleMatcher-based LoRA, applied before FSDP2 sharding for correct meta-device handling.
  • Add FlashPrecision plugin (nemo/utils/trainer_utils.py) — a new bf16-flash/fp16-flash precision mode that sets module dtype without mutating torch.set_default_dtype, preventing dtype conflicts with Automodel's internal initialization.
  • Add FlashOptim compatibility patch (nemo/core/optim/flash_optim.py) — fixes DTensor.from_local() shape inference for unevenly-sharded FSDP2 parameters during DCP save/load.
  • Add NemotronNanoV3PromptFormatter (nemo/collections/common/prompts/nemotron_nano_v3.py) with <think> reasoning support.
  • Add dataloader DP rank resolution patch for Automodel's device mesh (handles dp_replicate/dp_shard naming).
  • Add init_from_checkpoint support for fine-tuning from DCP directories, HuggingFace directories, and single-file .ckpt checkpoints.
  • Add to_hf.py support for distributed models (gathering DTensor shards before export).
  • Add speechlm2 pip extra in pyproject.toml/setup.py and uv index configuration for torch CUDA sources.
  • Add SALMAutomodel tutorial notebook (tutorials/speechlm2/SpeechLM_With_NeMo_Automodel.ipynb).
  • Add SALMAutomodel documentation in docs/source/speechlm2/ (models, configs, training/scaling, mixed precision).
  • Add example config examples/speechlm2/conf/salm_automodel.yaml with Nemotron Nano V3 defaults.
  • Add extensive test suite: test_salm_automodel.py, test_salm_automodel_lora.py, test_parallel.py, test_datamodule_parallel.py, test_init_from_checkpoint.py, test_flash_optim.py, test_flash_precision.py, test_nemotron_nano_v3_prompt_formatter.py, and CI integration test script.
  • Separate SALM and SALMAutomodel into independent classes (no inheritance), selectable via model.use_nemo_automodel config flag.
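The deferred initialization described in the first changelog item follows Lightning's `configure_model()` hook: the expensive LLM is not built in `__init__`, but in a hook the strategy calls after the device mesh exists, so each rank only materializes its own shard. The sketch below is illustrative only (plain Python, no Lightning or torch; all names are hypothetical stand-ins for the real SALMAutomodel internals):

```python
# Hypothetical sketch of the deferred-initialization pattern behind
# SALMAutomodel.configure_model(). The heavy LLM is NOT loaded in
# __init__; it is materialized later, once per rank, after the
# parallel strategy has built the device mesh.
class DeferredInitModule:
    def __init__(self, pretrained_llm: str):
        self.pretrained_llm = pretrained_llm
        self.llm = None  # not materialized yet (would live on the meta device)

    def configure_model(self, device_mesh=None):
        # Called by the strategy after parallel setup; idempotent.
        if self.llm is not None:
            return
        self.llm = self._load_llm_shard(device_mesh)

    def _load_llm_shard(self, device_mesh):
        # Stand-in for NeMo Automodel's distributed per-shard loader.
        return {"name": self.pretrained_llm, "mesh": device_mesh}


module = DeferredInitModule("nvidia/example-llm")
assert module.llm is None            # nothing loaded at construction time
module.configure_model(device_mesh="dp_shard")
assert module.llm["mesh"] == "dp_shard"
```

The point of the pattern is memory efficiency: constructing the module is cheap on every rank, and the full-precision weights are only ever instantiated shard-by-shard under the mesh.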

Usage

# Training with Automodel backend (FSDP2 + Expert Parallelism for MoE)
python examples/speechlm2/salm_train.py \
    --config-path conf \
    --config-name salm_automodel \
    ++model.use_nemo_automodel=true \
    ++model.pretrained_llm=nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 \
    trainer.devices=8

# With LoRA
python examples/speechlm2/salm_train.py \
    --config-path conf \
    --config-name salm_automodel \
    ++model.lora.dim=128 \
    ++model.lora.alpha=256 \
    ++model.lora.target_modules='["q_proj","v_proj"]'

# Inference — Step 1: Convert checkpoint to HuggingFace format
# Single-file checkpoint:
python examples/speechlm2/to_hf.py \
    class_path=nemo.collections.speechlm2.models.SALMAutomodel \
    ckpt_path=/path/to/checkpoint.ckpt \
    ckpt_config=/path/to/hparams.yaml \
    output_dir=/path/to/hf_checkpoint

# Distributed checkpoint (use same GPU count as training):
torchrun --nproc-per-node=8 examples/speechlm2/to_hf.py \
    class_path=nemo.collections.speechlm2.models.SALMAutomodel \
    ckpt_path=/path/to/distributed_ckpt_dir \
    ckpt_config=/path/to/hparams.yaml \
    output_dir=/path/to/hf_checkpoint

# Inference — Step 2: Run generation from the converted HF checkpoint
python examples/speechlm2/salm_generate.py \
    pretrained_name=/path/to/hf_checkpoint \
    inputs=/path/to/manifest.jsonl

# Distributed inference with model parallelism:
torchrun --nproc-per-node=8 examples/speechlm2/salm_generate.py \
    pretrained_name=/path/to/hf_checkpoint \
    inputs=/path/to/manifest.jsonl \
    tp_size=1 ep_size=8
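The distributed `to_hf.py` step above works by gathering DTensor shards before export: every parameter stored as per-rank pieces is reassembled into one full tensor before the HuggingFace state dict is written (in real code this is `DTensor.full_tensor()`). The sketch below mimics that gather-then-export flow with plain Python stand-ins, no torch, purely for illustration:

```python
# Illustrative stand-in for a sharded parameter; the real object is a
# torch.distributed.tensor.DTensor with a full_tensor() method.
class ShardedParam:
    def __init__(self, shards):
        self.shards = shards  # one piece per rank

    def full_tensor(self):
        # Reassemble the per-rank shards into the full parameter.
        return [x for shard in self.shards for x in shard]


def export_state_dict(params):
    # Gather every sharded parameter before export; pass plain
    # (replicated) values through unchanged.
    return {
        name: p.full_tensor() if isinstance(p, ShardedParam) else p
        for name, p in params.items()
    }


state = export_state_dict({
    "llm.weight": ShardedParam([[1, 2], [3, 4]]),  # sharded over 2 ranks
    "scale": 7,                                    # replicated scalar
})
assert state["llm.weight"] == [1, 2, 3, 4]
assert state["scale"] == 7
```

This is also why the distributed conversion must be launched with the same GPU count as training: every rank has to contribute its shard for the gather to produce complete tensors.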

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI runs automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and re-add the label.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.

Additional Information

  • nemo_automodel is added as an optional dependency via the speechlm2 pip extra (pip install nemo_toolkit[speechlm2]).
  • The FlashPrecision plugin (renamed from AutomodelPrecision) is placed in nemo/utils/trainer_utils.py for reuse by other collections (e.g., TTS) in subsequent PRs.
  • uv.lock changes are large due to adding the nemo_automodel dependency tree; the actual code changes are ~5,966 lines added across 59 files (excluding lock file).
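Since nemo_automodel is an optional dependency (per the pre-check item on import guards for optional libraries), code that touches it should fail lazily with a helpful message rather than at import time. The following is a generic import-guard sketch, not the actual NeMo implementation; the helper names are hypothetical:

```python
# Generic optional-dependency guard (names hypothetical, not NeMo's
# actual code). Importing this module never fails; only code paths
# that genuinely need nemo_automodel raise, with an actionable message.
try:
    import nemo_automodel  # optional: pip install nemo_toolkit[speechlm2]

    HAVE_NEMO_AUTOMODEL = True
except ImportError:
    nemo_automodel = None
    HAVE_NEMO_AUTOMODEL = False


def require_nemo_automodel():
    """Raise a descriptive error if the optional dependency is missing."""
    if not HAVE_NEMO_AUTOMODEL:
        raise ImportError(
            "nemo_automodel is required for this feature; "
            "install it with: pip install 'nemo_toolkit[speechlm2]'"
        )
```

A guard like this keeps the base `nemo_toolkit` install importable while still giving users of SALMAutomodel a clear pointer to the `speechlm2` extra.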

@nithinraok (Member) previously approved these changes Apr 14, 2026 and left a comment:
LGTM.

Please move the common AutoModel classes to the common collection for re-use later.

@github-actions (bot) commented:
[🤖]: Hi @pzelasko 👋,

We wanted to let you know that a CICD pipeline for this PR just finished successfully.

So it might be time to merge this PR or get some approvals.

@stevehuang52 (Collaborator) left a comment:
Thanks for the great work, LGTM~!

By the way, do we have any kind of CI on the uv.lock file? It seems unsafe if every PR needs to change this file by a few thousand lines...


Labels: common, core (Changes to NeMo Core)
