oir_flatmount_segmentation - Hartnett Lab - A MONAI bundle for automated rat and mouse oxygen-induced retinopathy flatmount segmentation (total retina / avascular area / intravitreal neovascularization) #778
Conversation
Signed-off-by: hartnettlabteam <hartnettlab.team@gmail.com>
for more information, see https://pre-commit.ci
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
Walkthrough
Adds a complete OIR Flatmount Segmentation bundle: two model implementations (Attention U-Net and transformer U-Net), MONAI train/inference configs and metadata, documentation and license, 5-fold checkpoints and per-fold thresholds, a plotting utility, requirements, and release/large-file manifests.
Sequence Diagram(s): mermaid (diagram not captured)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: 1 passed, 2 failed
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 10
🧹 Nitpick comments (8)
models/oir_flatmount_segmentation/scripts/plot_learning_curves.py (2)
39-44: Missing retina Dice metric in plots. The script plots `val_dice_nv` and `val_dice_vo` but omits `val_dice_retina` (or equivalent). Since the model has three segmentation heads (retina, nv, vo), consider adding the retina Dice plot for completeness, or document why it's intentionally excluded.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@models/oir_flatmount_segmentation/scripts/plot_learning_curves.py` around lines 39 - 44, The loop over histories currently plots train_loss, val_loss, val_dice_nv and val_dice_vo but omits the retina Dice; update the plotting loop to also plot df["val_dice_retina"] (or the correct column name if different) with a label f"fold_{fold}" on an available axes slot (e.g., add a third subplot or replace one of the axis positions by expanding axes to 2x3 or add axes[1,2]) so that the retina segmentation head's Dice is visualized alongside nv and vo; ensure the legend and axis titles are updated accordingly to include the retina metric and use the same alpha/label conventions as the other plots.
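For illustration, the column-selection guard this comment asks for could be sketched as follows (stdlib only; the column names `val_dice_retina`, `val_dice_nv`, and `val_dice_vo` are assumptions taken from the review comment, not verified against the script):

```python
import csv

def dice_columns(csv_path):
    """Return the validation Dice columns actually present in a fold's
    training_history.csv, so the retina head is plotted whenever it was logged.

    Column names are assumptions taken from the review comment above.
    """
    with open(csv_path, newline="") as f:
        header = next(csv.reader(f))
    wanted = ["val_dice_retina", "val_dice_nv", "val_dice_vo"]
    return [c for c in wanted if c in header]
```

The plotting loop can then iterate over `dice_columns(...)` and allocate one axes slot per metric instead of hardcoding two.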
14-23: Hardcoded fold count limits flexibility. The function iterates over a fixed range of 5 folds. If the number of folds changes in future experiments, this would require code modification.
Optional: Make fold count configurable or auto-detect
```diff
-def _load_histories(kfold_dir: str) -> List[pd.DataFrame]:
+def _load_histories(kfold_dir: str, num_folds: int = 5) -> List[pd.DataFrame]:
     histories: List[pd.DataFrame] = []
-    for fold in range(5):
+    for fold in range(num_folds):
         csv_path = os.path.join(kfold_dir, f"fold_{fold}", "training_history.csv")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@models/oir_flatmount_segmentation/scripts/plot_learning_curves.py` around lines 14 - 23, The _load_histories function currently loops over a hardcoded range(5); make it flexible by auto-detecting fold subdirectories or accepting a fold_count parameter: inspect kfold_dir for directories matching "fold_*" (or list entries and filter for os.path.isdir and name pattern) and iterate over those discovered fold names to build csv_path and read training_history.csv, preserving the existing checks (os.path.exists, not df.empty) and adding the fold identifier from the directory name; update related callers if you add an argument to _load_histories.
models/oir_flatmount_segmentation/model.py (2)
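The fold auto-detection described in this prompt could look roughly like the following sketch (the directory layout `fold_<n>/training_history.csv` is taken from the comment; everything else is illustrative):

```python
import os
import re

def discover_fold_csvs(kfold_dir):
    """Find fold_* subdirectories and return their history CSV paths in fold order."""
    pattern = re.compile(r"^fold_(\d+)$")
    found = []
    for name in os.listdir(kfold_dir):
        match = pattern.match(name)
        full = os.path.join(kfold_dir, name)
        if match and os.path.isdir(full):
            found.append((int(match.group(1)), os.path.join(full, "training_history.csv")))
    # sort numerically so fold_10 does not sort before fold_2
    return [path for _, path in sorted(found)]
```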
196-204: Extract repeated size tuple to reduce duplication. The tuple `(x.shape[2], x.shape[3])` is computed 6 times in `forward_with_aux`. Extracting it to a local variable improves readability.
Suggested refactor
```diff
 def forward_with_aux(self, x):
     # Returns main logits and deep supervision logits upsampled to input size
+    target_size = (x.shape[2], x.shape[3])
     e1 = self.enc1(x)
     # ... encoder/decoder code ...
     # ds2 from d2
-    ds2_r = F.interpolate(self.retina_ds2_head(d2), size=(x.shape[2], x.shape[3]), mode="bilinear", align_corners=False)
-    ds2_nv = F.interpolate(self.nv_ds2_head(d2), size=(x.shape[2], x.shape[3]), mode="bilinear", align_corners=False)
-    ds2_vo = F.interpolate(self.vo_ds2_head(d2), size=(x.shape[2], x.shape[3]), mode="bilinear", align_corners=False)
+    ds2_r = F.interpolate(self.retina_ds2_head(d2), size=target_size, mode="bilinear", align_corners=False)
+    ds2_nv = F.interpolate(self.nv_ds2_head(d2), size=target_size, mode="bilinear", align_corners=False)
+    ds2_vo = F.interpolate(self.vo_ds2_head(d2), size=target_size, mode="bilinear", align_corners=False)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@models/oir_flatmount_segmentation/model.py` around lines 196 - 204, In forward_with_aux, extract the repeated size tuple (x.shape[2], x.shape[3]) into a local variable (e.g., target_size) and replace all occurrences in the F.interpolate calls for ds2_* and ds3_* (ds2_r, ds2_nv, ds2_vo, ds3_r, ds3_nv, ds3_vo) to use that variable; this reduces duplication and improves readability while leaving the existing heads (retina_ds2_head, nv_ds2_head, vo_ds2_head, retina_ds3_head, nv_ds3_head, vo_ds3_head) and the torch.cat logic unchanged.
15-20: Remove unnecessary `.contiguous()` calls in ConvBNReLU that add overhead without benefit. The `.contiguous()` calls on input (line 16) and after convolution (line 17) are defensive but redundant. PyTorch's `nn.Conv2d` natively handles non-contiguous input tensors, and its output is already contiguous. The only necessary use of `.contiguous()` in this codebase is after `torch.cat()` operations (line 94), which can produce non-contiguous tensors.
Suggested simplification
```diff
 def forward(self, x):
-    x = x.contiguous()
-    x = self.conv(x).contiguous()
+    x = self.conv(x)
     x = self.bn(x)
     x = self.relu(x)
     return x
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@models/oir_flatmount_segmentation/model.py` around lines 15 - 20, In the ConvBNReLU.forward implementation remove the unnecessary .contiguous() calls on the input tensor and the convolution output: drop the .contiguous() on x before self.conv and the .contiguous() immediately after self.conv so forward simply performs x = self.conv(x); x = self.bn(x); x = self.relu(x); return x; retain .contiguous() usage only for places that actually require it (e.g., after torch.cat operations elsewhere in the model) to avoid added overhead.
models/oir_flatmount_segmentation/model_transformer.py (1)
92-118: Significant code duplication between `forward` and `forward_with_aux`. The encoder-decoder path (lines 93-112 in `forward` and lines 121-140 in `forward_with_aux`) is nearly identical. Consider extracting the shared encoder-decoder computation to a helper method.
Suggested refactor
```diff
+def _encode_decode(self, x):
+    """Shared encoder-decoder computation."""
+    f1, f2, f3, f4 = self.enc(x)
+    p1, p2, p3, p4 = self.proj1(f1), self.proj2(f2), self.proj3(f3), self.proj4(f4)
+
+    y3_u = self.dec3["up"](p4)
+    y3_s = self.dec3["att"](p3, y3_u)
+    y3 = self.dec3["conv"](torch.cat([y3_u, y3_s], dim=1).contiguous())
+
+    y2_u = self.dec2["up"](y3)
+    y2_s = self.dec2["att"](p2, y2_u)
+    y2 = self.dec2["conv"](torch.cat([y2_u, y2_s], dim=1).contiguous())
+
+    y1_u = self.dec1["up"](y2)
+    y1_s = self.dec1["att"](p1, y1_u)
+    y1 = self.dec1["conv"](torch.cat([y1_u, y1_s], dim=1).contiguous())
+
+    y0 = self.dec0(y1.contiguous())
+    return y0, y2, y3
+
 def forward(self, x):
-    # ... duplicated encoder-decoder code ...
+    y0, _, _ = self._encode_decode(x)
+    target_size = (x.shape[2], x.shape[3])
+    logits_r = F.interpolate(self.retina_head(y0), size=target_size, mode="bilinear", align_corners=False)
+    # ... rest of head computation ...
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@models/oir_flatmount_segmentation/model_transformer.py` around lines 92 - 118, Extract the common encoder→decoder sequence from forward and forward_with_aux into a single helper (e.g., _encode_decode or compute_y0) that takes input x and returns the decoded feature y0 (or the per-head upsampled logits if you prefer), by reusing self.enc, self.proj1..4, self.dec3/2/1 (up/att/conv) and self.dec0; then have forward and forward_with_aux call that helper and only perform their remaining unique steps (e.g., creating upsampled heads with self.retina_head, self.nv_head, self.vo_head and adding auxiliary outputs in forward_with_aux). Ensure you reference and reuse the existing symbols: forward, forward_with_aux, enc, proj1/proj2/proj3/proj4, dec3/dec2/dec1/dec0, retina_head, nv_head, vo_head so only the duplicated encoder-decoder logic is moved to the helper.
models/oir_flatmount_segmentation/requirements.txt (1)
1-9: Add version constraints to dependencies for reproducibility. Unpinned dependencies can cause reproducibility issues when the bundle is used in the future. Different versions of `monai`, `torch`, or `timm` may produce different results or introduce breaking API changes. At minimum, pin major versions for critical dependencies:
- `monai` and `torch` for model loading and inference consistency
- `timm` since the transformer encoder relies on its backbone API

Example with version constraints
```diff
-monai
-torch
-timm
+monai>=1.3,<2.0
+torch>=2.0,<3.0
+timm>=0.9,<1.0
 numpy
 pandas
 opencv-python
 matplotlib
 albumentations
 openpyxl
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@models/oir_flatmount_segmentation/requirements.txt` around lines 1 - 9, The requirements.txt currently lists unpinned packages (monai, torch, timm, numpy, pandas, opencv-python, matplotlib, albumentations, openpyxl) which hurts reproducibility; update the file to add version constraints (at least pin major versions) for monai, torch, and timm to ensure model loading and backbone API stability and pin the other libraries as well; use a consistent scheme such as ">=<current_major>.*,<next_major>" or exact minor pins determined from a known-working environment and regenerate or lock the file so installs reproduce the tested setup.models/oir_flatmount_segmentation/configs/metadata.json (1)
models/oir_flatmount_segmentation/configs/metadata.json (1)
45-51: Specify explicit versions for optional packages. The optional package versions are empty strings, which may cause reproducibility issues across different environments.
📦 Recommendation: Pin package versions
Consider specifying explicit version constraints for all optional packages to ensure reproducible builds and avoid compatibility issues:
```diff
 "optional_packages_version": {
-    "timm": "",
-    "albumentations": "",
-    "opencv-python": "",
-    "pandas": "",
-    "matplotlib": "",
-    "openpyxl": ""
+    "timm": ">=0.9.0",
+    "albumentations": ">=1.3.0",
+    "opencv-python": ">=4.8.0",
+    "pandas": ">=2.0.0",
+    "matplotlib": ">=3.7.0",
+    "openpyxl": ">=3.1.0"
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@models/oir_flatmount_segmentation/configs/metadata.json` around lines 45 - 51, The metadata field "optional_packages_version" contains empty strings for packages (timm, albumentations, opencv-python, pandas, matplotlib, openpyxl); update that object in metadata.json to pin explicit, compatible version constraints (e.g. exact versions or narrow semver ranges like == or ~=) for each package to ensure reproducible installs—edit the "optional_packages_version" entry and set a concrete version string for "timm", "albumentations", "opencv-python", "pandas", "matplotlib", and "openpyxl" consistent with the project's tested environment.models/oir_flatmount_segmentation/README.md (1)
models/oir_flatmount_segmentation/README.md (1)
3-3: Fix heading level increments for markdown compliance. Markdown headings should increment by one level at a time. Two violations detected:
- Line 3: `### **Authors**` jumps from h1 to h3 (should be h2)
- Line 27: `#### **Preprocessing**` jumps from h2 (`## **Data**`) to h4 (should be h3)

This affects document structure parsing and accessibility tools.
📝 Proposed fix
```diff
-### **Authors**
+## **Authors**
 Neal S. Shah*, Aniket Ramshekar*, Bright Asare-Bediako, Morgan P. Tankersley, Heng-Chiao Huang, Shreya Beri, Eric Kunz, Aaron Y. Lee, M. Elizabeth Hartnett
```
```diff
-#### **Preprocessing**
+### **Preprocessing**
 Input retinal flatmount images were converted to grayscale, resized to 512×512, and intensity-normalized.
```
As per static analysis, these are MD001 (heading-increment) violations.
Also applies to: 27-27
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@models/oir_flatmount_segmentation/README.md` at line 3, Update the markdown heading levels so they increment by one: change the "### **Authors**" heading to "## **Authors**" and change the "#### **Preprocessing**" heading to "### **Preprocessing**" (it sits under the "## **Data**" section), ensuring both headings follow a single-level increment for proper MD001 compliance and correct document structure.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@models/oir_flatmount_segmentation/configs/inference.json`:
- Around line 50-54: The postprocessing section in inference.json is empty,
causing MONAI bundle inference to return raw logits from the model (as emitted
by model_transformer.py); add a minimal postprocessing pipeline by populating
the "postprocessing" Compose transforms to include a Sigmoid activation
(monai.transforms.Activations with sigmoid=True) so logits are converted to
probabilities (optionally followed by per-class thresholding later), ensuring
standard monai.bundle.run users get meaningful probabilistic outputs rather than
unbounded logits.
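As a sketch, the minimal pipeline described above might look like this in `inference.json` (the output key `"pred"` is an assumption and must match whatever key the bundle's inference pipeline actually writes logits to):

```json
"postprocessing": {
    "_target_": "Compose",
    "transforms": [
        {
            "_target_": "Activationsd",
            "keys": "pred",
            "sigmoid": true
        }
    ]
}
```

Per-class thresholding (e.g. `AsDiscreted` with the per-fold thresholds) could be appended later once the threshold handling is decided.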
In `@models/oir_flatmount_segmentation/configs/metadata.json`:
- Around line 36-40: The channels array in metadata.json lists "retina", "ivnv",
and "ava" which is inconsistent with the threshold files and
release_manifest.json that use "retina", "nv", and "vo"; update the channels
entry in the metadata configuration (the "channels" array) to exactly match the
canonical names ("retina", "nv", "vo") or, if "ivnv" and "ava" are intended
aliases, add a clear documented mapping in the same config (or an adjacent
README) stating ivnv -> nv and ava -> vo so all consumers (threshold files,
release_manifest.json, and inference code) use a consistent canonical name.
In `@models/oir_flatmount_segmentation/configs/train.json`:
- Around line 22-29: The config's "preprocessing" and "postprocessing" entries
use monai.transforms.Compose with empty "transforms" arrays (see "_target_":
"monai.transforms.Compose") which conflicts with the README's described
augmentations; either update these entries to include the augmentation pipeline
(random flips, rotations, brightness/contrast, CLAHE, Gaussian noise,
elastic/grid/optical distortions, coarse dropout, motion blur, gamma) so the
config reflects what's described, or add a clear README note referencing that
augmentation is implemented in the external scripts (retrain_kfold_v2.py and
train_with_split.py) rather than in train.json/validate sections; apply the same
fix to the validate config blocks mentioned.
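If the config route is taken, a partial MONAI-native sketch might look like the fragment below (the key names `image`/`label` are assumptions; this covers only the flip/rotation/noise subset, since CLAHE and the elastic/optical distortions come from albumentations in the external scripts):

```json
"preprocessing": {
    "_target_": "Compose",
    "transforms": [
        {"_target_": "RandFlipd", "keys": ["image", "label"], "prob": 0.5, "spatial_axis": 0},
        {"_target_": "RandRotate90d", "keys": ["image", "label"], "prob": 0.5},
        {"_target_": "RandGaussianNoised", "keys": "image", "prob": 0.15}
    ]
}
```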
- Line 3: The config's dataset_dir field is hard-coded to an absolute path which
breaks portability; update the dataset_dir entry to use a relative path or a
parameterized placeholder (e.g., an environment variable token like DATASET_DIR)
and ensure the training/config loader honors that override, so consumers can set
DATASET_DIR or pass a CLI/config override instead of relying on
"/workspace/data/oir_flatmount_segmentation".
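One portable shape for this, sketched on the assumption that the config defines a `bundle_root` id (as is common in MONAI model-zoo bundles):

```json
{
    "bundle_root": ".",
    "dataset_dir": "$@bundle_root + '/data/oir_flatmount_segmentation'"
}
```

Because `monai.bundle run` allows top-level config ids to be overridden from the CLI, a consumer can then point at their own data without editing the file, e.g. `python -m monai.bundle run training --config_file configs/train.json --dataset_dir /my/data`.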
In `@models/oir_flatmount_segmentation/docs/data_license.txt`:
- Around line 26-27: Replace the placeholder TODO in
models/oir_flatmount_segmentation/docs/data_license.txt with the final
institution-approved licensing language: remove the “7. TODO before release: -
Replace this file with final institution-approved licensing language.” entry and
paste the approved license text into this file, ensure the file header/section
numbering and any metadata (license name, effective date, attribution) match the
institution's requirements, and commit with a clear message indicating the
license was updated for release.
In `@models/oir_flatmount_segmentation/docs/POST_TRAINING_CHECKLIST.md`:
- Line 15: Replace the absolute local filesystem paths in the
POST_TRAINING_CHECKLIST.md (lines referencing the example dataset and model
weights) with generic, non-identifying placeholders or relative/example paths
(e.g., ./data/oir_flatmount_segmentation_release or <path-to-weights>/fold_*)
and remove any username-specific segments; update the entries that currently
contain `/Users/nealshah/...` so they read as portable guidance or commented
examples (or delete them entirely) and ensure the file only contains
non-sensitive, reusable path examples referenced by the checklist.
- Around line 23-28: Update the metadata schema URL and related submission
artifacts: open metadata.json and replace the failing schema URL with
https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20240725.json
so verify_metadata can validate; upload your fold model files and update their
public URLs in large_files.yml; add the final dataset license/legal wording to
docs/data_license.txt; optionally add TorchScript export artifacts/config if
your deployment requires them. Ensure verify_metadata (the MONAI validation
step) runs successfully after the schema URL change.
In `@models/oir_flatmount_segmentation/docs/RELEASE_RESULTS.md`:
- Around line 23-24: Remove the hardcoded absolute paths in RELEASE_RESULTS.md
and replace them with portable placeholders or relative paths; specifically
update the two lines showing `/.../cv_summary.csv` and `/.../cv_summary.json` to
use either relative references like `output_kfold_v2/cv_summary.csv` and
`output_kfold_v2/cv_summary.json` or generic placeholders such as
`<OUTPUT_DIR>/cv_summary.csv` and `<OUTPUT_DIR>/cv_summary.json` so the docs are
not tied to a developer's local machine.
- Line 43: Remove the hardcoded absolute path string
"/Users/nealshah/Documents/OIR_segmentation_images/oir_flatmount_segmentation_release_v2_20260317"
in models/oir_flatmount_segmentation/docs/RELEASE_RESULTS.md and replace it with
a portable placeholder or relative path (e.g.,
./oir_flatmount_segmentation_release_v2_20260317 or <release_artifact_path>) so
the documentation no longer contains a developer-specific absolute local path.
In `@models/oir_flatmount_segmentation/model_transformer.py`:
- Around line 12-14: The code extracts channel counts by iterating over
feat_info; replace that with the feature_info.channels property: after creating
self.backbone (timm.create_model with features_only=True) use
self.backbone.feature_info.channels and assign it to self.channels instead of
doing [s['num_chs'] for s in feat_info]; update the code in model_transformer.py
where self.backbone and feat_info are handled to use
self.backbone.feature_info.channels for clarity and API compatibility.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: db69527f-5b79-40d9-ae33-a9ebdcceecaa
⛔ Files ignored due to path filters (1)
`models/oir_flatmount_segmentation/cv_summary.csv` is excluded by `!**/*.csv`
📒 Files selected for processing (20)
- models/oir_flatmount_segmentation/LICENSE
- models/oir_flatmount_segmentation/README.md
- models/oir_flatmount_segmentation/configs/inference.json
- models/oir_flatmount_segmentation/configs/metadata.json
- models/oir_flatmount_segmentation/configs/train.json
- models/oir_flatmount_segmentation/docs/POST_TRAINING_CHECKLIST.md
- models/oir_flatmount_segmentation/docs/README.md
- models/oir_flatmount_segmentation/docs/RELEASE_RESULTS.md
- models/oir_flatmount_segmentation/docs/data_license.txt
- models/oir_flatmount_segmentation/large_files.yml
- models/oir_flatmount_segmentation/model.py
- models/oir_flatmount_segmentation/model_transformer.py
- models/oir_flatmount_segmentation/release_manifest.json
- models/oir_flatmount_segmentation/requirements.txt
- models/oir_flatmount_segmentation/scripts/plot_learning_curves.py
- models/oir_flatmount_segmentation/weights/fold_0/thresholds.json
- models/oir_flatmount_segmentation/weights/fold_1/thresholds.json
- models/oir_flatmount_segmentation/weights/fold_2/thresholds.json
- models/oir_flatmount_segmentation/weights/fold_3/thresholds.json
- models/oir_flatmount_segmentation/weights/fold_4/thresholds.json
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Signed-off-by: hartnettlabteam <hartnettlab.team@gmail.com>
Hi @hartnettlabteam, thanks for this submission. Please do have a look at the comments from CodeRabbit: we want to avoid paths specific to particular users or systems, so use relative paths where possible, which users can replace to point to their own files. You also need a data license file in place of the notes that are present. Note also that TorchScript is deprecated now, so it's less of a priority to include the weights in that format, but weights should be uploaded somewhere and referenced in the `large_files.yml`.
Thanks @ericspod, sorry for the late response - we've made these changes.
Summary
This PR adds a new MONAI bundle: `oir_flatmount_segmentation`. The bundle provides segmentation of total retina, avascular area, and intravitreal neovascularization for mouse and rat OIR retinal flatmount images.
Large files
Model checkpoint files are not committed in this PR.
Large files are provided via `large_files.yml`.
External model asset links