Clarification on [1] in swinunetr_pretrained README and training pipeline of ssl_pretrained_weights.pth #2060

@SII-EugeneWong

Description

Hi MONAI team,

I’m trying to understand the provenance of the pretrained weights
ssl_pretrained_weights.pth (from MONAI-extra-test-data, ~719 MB).

From the checkpoint structure, it contains:

  • encoder.*
  • decoder*
  • out.conv.*
  • encoder.mask_token

which looks like a full encoder-decoder model.
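For reference, the key groups above can be confirmed by loading the checkpoint and counting keys by their leading prefix. The helper below is a minimal sketch; the key names in the example are hypothetical stand-ins mirroring the structure described above, and the real file would be loaded with `torch.load("ssl_pretrained_weights.pth", map_location="cpu")` first.

```python
from collections import Counter

def summarize_prefixes(keys, depth=1):
    """Count state_dict keys grouped by their leading dotted prefix."""
    return dict(Counter(".".join(k.split(".")[:depth]) for k in keys))

# Hypothetical subset of key names illustrating the reported structure;
# replace with torch.load(...)["state_dict"].keys() (or .keys() directly,
# depending on how the checkpoint was saved) for the real file.
keys = [
    "encoder.mask_token",
    "encoder.patch_embed.proj.weight",
    "decoder1.conv.weight",
    "out.conv.weight",
]
print(summarize_prefixes(keys))
```

A prefix summary like this makes it easy to see at a glance whether a checkpoint contains only an encoder or a full encoder-decoder model.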


I found the following description in the tutorial README:
https://github.com/Project-MONAI/tutorials/blob/main/self_supervised_pretraining/swinunetr_pretrained/README.md

"The entire SwinUNETR model including encoder and decoder was trained end-to-end using self-supervised learning techniques as outlined in [1]."

However, it is unclear what [1] refers to in terms of actual training code or pipeline.


Could you please help clarify:

  1. What exactly does reference [1] correspond to?
    (Is it a specific paper, or an internal training implementation?)

  2. Was ssl_pretrained_weights.pth trained using the
    research-contributions/SwinUNETR/Pretrain (SSLHead: rotation + contrastive + reconstruction),
    or a different encoder-decoder / autoencoder-style pretraining pipeline?
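To make question 2 concrete: the SSLHead recipe referenced above combines its three objectives into a single weighted loss. The sketch below is only an illustration of that combination; the weight values are assumptions for demonstration, not values confirmed by the checkpoint or the Pretrain code.

```python
# Hypothetical weighted sum of the three SSL objectives (rotation
# prediction, contrastive coding, reconstruction) described for the
# research-contributions/SwinUNETR/Pretrain recipe. The alpha weights
# here are illustrative defaults, not confirmed hyperparameters.
def total_ssl_loss(rot_loss, contrast_loss, recon_loss,
                   alpha_rot=1.0, alpha_con=1.0, alpha_rec=1.0):
    return (alpha_rot * rot_loss
            + alpha_con * contrast_loss
            + alpha_rec * recon_loss)

print(total_ssl_loss(0.5, 0.3, 0.2))
```

Knowing whether this three-term objective (or an autoencoder-style reconstruction-only objective) produced ssl_pretrained_weights.pth would answer the question.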

Even a short clarification would help a lot.

Thanks!
