Add FloodForecaster: Domain-Adaptive GINO Framework for Flood Forecasting #1271

MehdiTaghizadehUVa wants to merge 59 commits into NVIDIA:main
Conversation
- Moved sample_animation.gif to docs/img
- Removed functions using pickle due to security concerns
- Removed duplicate import of KolmogorovArnoldNetwork in __init__.py
```yaml
model:
  model_arch: 'gino'
```
Is this actually configurable?
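If it is meant to be configurable, a registry-style dispatch keyed on `model_arch` is one way to make that explicit. A minimal sketch; only `'gino'` is confirmed by the config, and the constructor shown is a stand-in, not the PR's actual code:

```python
# Hypothetical registry mapping model_arch values to constructors.
# Only 'gino' appears in the config; the lambda is a stand-in for GINO(...).
MODEL_REGISTRY = {
    "gino": lambda cfg: {"arch": "gino", "cfg": cfg},
}

def build_model(model_cfg: dict):
    """Instantiate the architecture named by model_cfg['model_arch']."""
    arch = model_cfg["model_arch"]
    try:
        return MODEL_REGISTRY[arch](model_cfg)
    except KeyError:
        raise ValueError(
            f"Unknown model_arch {arch!r}; available: {sorted(MODEL_REGISTRY)}"
        ) from None
```

With this pattern, an unsupported `model_arch` fails loudly at build time instead of being silently ignored.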
For questions, feedback, or collaborations:

- **Mehdi Taghizadeh** – <jrj6wm@virginia.edu>
Please mention that you are the code contributor and maintainer.
- **Jonathan L. Goodall** – <jlg7h@virginia.edu>
- **Negin Alemazkoor** (Corresponding Author) – <na7fp@virginia.edu>

## License
You don't need this section. Please remove it.
```
@@ -0,0 +1,338 @@
# FloodForecaster: A Domain-Adaptive Geometry-Informed Neural Operator Framework for Rapid Flood Forecasting
```
Thanks for the comprehensive and very well structured readme!
```yaml
wireup_info: 'mpi'
wireup_store: 'tcp'
```
Would you be able to use physicsnemo distributed utilities instead?
```diff
 # Load state dict after closing archive
-model_dict = torch.load(io.BytesIO(model_bytes), map_location=device)
+model_dict = torch.load(io.BytesIO(model_bytes), map_location=device, weights_only=False)
```
That's BC breaking, we can't accept this change
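One way to avoid hard-coding `weights_only=False` is to try the safe path first and only fall back for legacy pickled checkpoints. A sketch of the general pattern, not the PR's actual implementation:

```python
import io
import torch

def load_state_dict_compat(model_bytes: bytes, device: str = "cpu"):
    """Load a checkpoint from raw bytes, preferring the safe path.

    Tries weights_only=True first (tensors-only unpickling); falls back
    to weights_only=False only for legacy checkpoints that contain
    arbitrary Python objects, preserving backward compatibility.
    """
    buf = io.BytesIO(model_bytes)
    try:
        return torch.load(buf, map_location=device, weights_only=True)
    except Exception:
        buf.seek(0)  # rewind before retrying with the permissive path
        return torch.load(buf, map_location=device, weights_only=False)
```

This keeps new, tensor-only checkpoints on the safe code path while old archives still load.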
```yaml
wireup_store: 'tcp'
model_parallel_size: 1
seed: 123
device: 'cuda:0'
```
If we set use_distributed=True, how should the device field be specified? Would it be a list of devices? Please clarify. It would also be nice to add some details about distributed-computing configs in the readme.
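A common answer is that each rank keeps a single device string, derived from its local rank, rather than a list in the config. A sketch under that assumption; the `LOCAL_RANK` environment variable is what `torchrun` sets, and may differ for other wireup methods:

```python
import os

def resolve_device(base_device: str, use_distributed: bool) -> str:
    """Pick the device string for the current process.

    With use_distributed=True, each rank derives its own 'cuda:N' from
    LOCAL_RANK (hypothetical launcher convention) instead of the config
    holding a list of devices.
    """
    if not use_distributed or not base_device.startswith("cuda"):
        return base_device
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    return f"cuda:{local_rank}"
```

Framework utilities (e.g. PhysicsNeMo's distributed manager) typically encapsulate exactly this logic, which is one argument for using them here.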
```yaml
resume_from_dir: null
resume_from_source: null
resume_from_adapt: null
```
Are these needed? If so, please describe how they should be used.
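A hypothetical usage sketch for these fields, with semantics inferred purely from the names (verify against the trainer code):

```yaml
resume_from_dir: 'outputs/run_001/checkpoints'  # folder holding saved checkpoints (assumed)
resume_from_source: 'source_epoch_050.pt'       # pretraining checkpoint to resume from (assumed)
resume_from_adapt: null                         # adaptation checkpoint; null = start adaptation fresh (assumed)
```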
```yaml
n_epochs_adapt: 2
learning_rate: 1e-4
adapt_learning_rate: 1e-4
training_loss: 'l2'
```
List the available options in a comment
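A sketch of how the options could be documented inline. Note that `'l2'` is the only value confirmed by this config; `'h1'` is an assumption based on common neural-operator losses and must be verified against the code:

```yaml
# training_loss / testing_loss options (verify against the code):
#   'l2' - relative L2 loss (used here)
#   'h1' - H1 Sobolev loss (assumed; common in neuralop-based trainers)
training_loss: 'l2'
testing_loss: 'l2'
```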
```yaml
training_loss: 'l2'
testing_loss: 'l2'
weight_decay: 1e-4
amp_autocast: false
```
List the available options in a comment
```python
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
```
```python
def save_checkpoint(self, save_folder: str, save_name: str) -> None:
```
I'm curious, how are GINO's save and load checkpoint methods different from torch's save/load utilities? Couldn't you use the PhysicsNeMo save/load utilities for better integration?
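For comparison, this is roughly what plain torch utilities give you out of the box. A hypothetical sketch, not the PR's implementation; framework wrappers mainly add metadata, versioning, and distributed-aware naming on top of this:

```python
import os
import torch

def save_checkpoint_sketch(model: torch.nn.Module,
                           save_folder: str,
                           save_name: str) -> str:
    """Minimal torch-native checkpointing (illustrative layout).

    Saves only the state dict, keeping the file loadable with
    torch.load(..., weights_only=True).
    """
    os.makedirs(save_folder, exist_ok=True)
    path = os.path.join(save_folder, f"{save_name}.pt")
    torch.save({"model_state_dict": model.state_dict()}, path)
    return path
```

If the custom methods only do the above, swapping in the shared utilities would reduce duplicated code.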
…nfig-name are defaults)
… update distributed docs
… coverage
- Rename trainer_physicsnemo.py to trainer.py and update all references
- Update GINOWrapper.load_state_dict to handle PhysicsNeMo checkpoint format
- Fix checkpoint loading in inference.py to support both PhysicsNeMo and neuralop formats
- Add comprehensive unit tests for data processing, training, trainer, and inference
- Reduce test suite size to align with other PhysicsNeMo examples
- Remove test coverage analysis markdown file
- Fix various test failures related to model structure and checkpoint loading
Hi @mnabian, all review comments are addressed.

All changes are committed and ready for review.
…tion
- Add tqdm progress bars for epoch, batch, and evaluation loops
- Fix autoregressive skip connection to handle correct tensor shapes: (B, out_channels, n_out) -> (B, n_out, out_channels)
- Add autoregressive config option to model config
- Update pretraining and inference to handle autoregressive parameter correctly
Align the GINOWrapper latent path with native neuralop GINO semantics, so that mesh-to-grid projection uses latent query points rather than mesh coordinates. Also filter loss-only fields from the trainer and domain-adaptation model forwards, and add regression tests covering latent-grid shape handling and target-key filtering.
PhysicsNeMo Pull Request
Description
This PR integrates FloodForecaster, a domain-adaptive Geometry-Informed Neural Operator (GINO) framework for rapid, high-resolution flood forecasting. The framework enables accurate, real-time flood predictions by learning from source domain data and adapting to target domains through adversarial training.
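Adversarial domain adaptation of this kind is often implemented with a gradient reversal layer (DANN-style): the feature extractor is trained to fool a domain classifier, so features become domain-invariant. A sketch of that general technique; the PR may implement adaptation differently:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity on the forward pass, negated
    (and scaled) gradient on the backward pass."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient flowing into the feature extractor.
        return -ctx.lamb * grad_output, None

def grad_reverse(x: torch.Tensor, lamb: float = 1.0) -> torch.Tensor:
    return GradReverse.apply(x, lamb)
```

In training, the domain classifier sits behind `grad_reverse`, so minimizing its loss pushes the shared features toward domain invariance.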
Key Features
- Built on `physicsnemo.Module` with checkpointing support

Components Added
Checklist
Dependencies
No new dependencies required. All packages are either already in PhysicsNeMo or standard scientific computing libraries:
- `neuralop>=2.0.0` (existing)
- `hydra-core>=1.2.0` (existing)
- `wandb>=0.12.0` (optional, for logging)
- `matplotlib`, `tqdm`, `numpy`, `torch`, `omegaconf`, `pandas`, `h5py`

Review Process
All PRs are reviewed by the PhysicsNeMo team before merging.
Depending on which files are changed, GitHub may automatically assign a maintainer for review.
We are also testing AI-based code review tools (e.g., Greptile), which may add automated comments with a confidence score.
This score reflects the AI’s assessment of merge readiness and is not a qualitative judgment of your work, nor is
it an indication that the PR will be accepted / rejected.
AI-generated feedback should be reviewed critically for usefulness.
You are not required to respond to every AI comment, but they are intended to help both authors and reviewers.
Please react to Greptile comments with 👍 or 👎 to provide feedback on their accuracy.