Switch default MPI to OpenMPI on macOS (fixes #68) #75
MPICH 4.x has a collective-operation performance bug on macOS Apple Silicon
that causes a catastrophic scaling regression (31x slowdown at 6 processes).
OpenMPI does not have this issue.
- pixi.toml: platform-conditional MPI (OpenMPI on macOS, MPICH on Linux)
with explicit override environments for non-default MPI choice
- build-petsc.sh: auto-detect MPI, use petsc-4-uw-{openmpi,mpich} arch
so both builds co-exist under the same PETSc source tree
- uw: setup wizard adds an MPI question with platform-appropriate defaults,
  petsc_built() checks the correct arch, and the doctor display no longer breaks on import crashes
- Add OMP_NUM_THREADS=1 to prevent OpenBLAS thread oversubscription
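The OMP_NUM_THREADS bullet above corresponds to an activation-time export; a minimal shell sketch of the idea (the value `1` comes from the bullet, the variable list from the follow-up commit below, and everything else is illustrative):

```shell
# Cap per-process thread pools so MPI ranks do not oversubscribe cores.
# Sketch only: the PR applies these via pixi activation env, not a script.
export OMP_NUM_THREADS=1
export OPENBLAS_NUM_THREADS=1
echo "threads capped: OMP=${OMP_NUM_THREADS} OPENBLAS=${OPENBLAS_NUM_THREADS}"
```

With `mpirun -np 6`, each of the six ranks then drives one BLAS/OpenMP thread instead of six ranks times N threads.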
Underworld development team with AI support from Claude Code
Pull request overview
This PR updates Underworld3’s in-repo build tooling to default to OpenMPI on macOS (while keeping MPICH on Linux), and adds explicit “override” environments so both MPI stacks can be selected and built side-by-side (including separate PETSc arches).
Changes:
- Updates `pixi.toml` to use platform-conditional MPI defaults (OpenMPI on macOS, MPICH on Linux) and adds explicit override environments/features.
- Enhances `petsc-custom/build-petsc.sh` to auto-detect MPI from the active pixi environment and build into MPI-specific `PETSC_ARCH` directories, plus adds `clean-all`.
- Extends `uw` setup/status/help flows to surface the MPI choice and to detect PETSc builds per-environment.
Reviewed changes
Copilot reviewed 3 out of 4 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| `uw` | Adds PETSc-arch selection by env, adds MPI choice to setup wizard, expands status/help output for override envs, and fixes doctor import-status handling. |
| `pixi.toml` | Introduces platform-default MPI selection and explicit MPI override features/environments; caps BLAS/OpenMP threads via activation env. |
| `petsc-custom/build-petsc.sh` | Auto-detects MPI to set `PETSC_ARCH`, supports dual MPICH/OpenMPI builds in one source tree, and adds `clean-all`. |
```shell
default)      desc="minimal ($default_mpi)" ;;
runtime)      desc="+ viz, jupyter ($default_mpi)" ;;
dev)          desc="+ claude, linting ($default_mpi)" ;;
mpich)        desc="minimal (mpich override)" ;;
mpich-dev)    desc="+ claude, linting (mpich override)" ;;
openmpi)      desc="minimal (openmpi override)" ;;
openmpi-dev)  desc="+ claude, linting (openmpi override)" ;;
```
show_env_status() descriptions for dev / *-dev environments omit that these envs also include the runtime feature (viz + Jupyter) per pixi.toml. This can mislead users comparing environments; consider reflecting that *-dev includes runtime+dev (or similar) so the status output matches the actual feature sets.
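A possible shape for the fix this comment asks for, with descriptions that acknowledge the runtime feature in dev-tier environments; the helper function and exact wording are hypothetical:

```shell
# Sketch: status descriptions that reflect that dev-tier envs also
# include the runtime feature (viz + Jupyter), per pixi.toml.
env_desc() {
  local default_mpi="mpich"   # assumed Linux default for this sketch
  case "$1" in
    default)     echo "minimal ($default_mpi)" ;;
    runtime)     echo "+ viz, jupyter ($default_mpi)" ;;
    dev)         echo "+ viz, jupyter, claude, linting ($default_mpi)" ;;
    mpich-dev)   echo "+ viz, jupyter, claude, linting (mpich override)" ;;
    openmpi-dev) echo "+ viz, jupyter, claude, linting (openmpi override)" ;;
    *)           echo "unknown" ;;
  esac
}
```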
```toml
# --- Explicit MPICH (override for macOS, or when MPICH is required) ---
mpich = { features = ["conda-petsc-mpich"], solve-group = "mpich" }
mpich-dev = { features = ["conda-petsc-mpich", "runtime", "dev"], solve-group = "mpich" }

# --- AMR Track (custom PETSc with mesh adaptation) ---
# AMR Minimal: Build custom PETSc, then underworld3
amr = { features = ["amr"], solve-group = "amr" }
amr-mpich = { features = ["amr-mpich"], solve-group = "amr-mpich" }
amr-mpich-dev = { features = ["amr-mpich", "runtime", "dev"], solve-group = "amr-mpich" }

# AMR Runtime: For tutorials with adaptive mesh
amr-runtime = { features = ["amr", "runtime"], solve-group = "amr" }

# --- Explicit OpenMPI (override for Linux clusters, or when OpenMPI is required) ---
openmpi = { features = ["conda-petsc-openmpi"], solve-group = "openmpi" }
openmpi-dev = { features = ["conda-petsc-openmpi", "runtime", "dev"], solve-group = "openmpi" }

# AMR Dev: Full development with custom PETSc
amr-dev = { features = ["amr", "runtime", "dev"], solve-group = "amr" }
amr-openmpi = { features = ["amr-openmpi"], solve-group = "amr-openmpi" }
amr-openmpi-dev = { features = ["amr-openmpi", "runtime", "dev"], solve-group = "amr-openmpi" }
```
The environment list defines openmpi/openmpi-dev and amr-openmpi/amr-openmpi-dev, but there are no runtime override envs (e.g. openmpi-runtime, amr-openmpi-runtime). Since ./uw setup offers a Runtime tier and ./uw set-env help currently lists *-runtime overrides, it would be clearer to either add the missing runtime override environments here or adjust the wizard/help text to match the actual env set.
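One way to resolve the mismatch, as the comment suggests, is to define the missing runtime override environments in `pixi.toml`; a hypothetical sketch, with feature and solve-group names assumed from the diff above:

```toml
# Hypothetical runtime override environments (names assumed from the diff)
openmpi-runtime = { features = ["conda-petsc-openmpi", "runtime"], solve-group = "openmpi" }
amr-openmpi-runtime = { features = ["amr-openmpi", "runtime"], solve-group = "amr-openmpi" }
```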
```shell
echo "Conda-forge PETSc:"
echo "  MPICH:   default, runtime, dev"
echo "  OpenMPI: openmpi, openmpi-runtime, openmpi-dev"
echo ""
echo "Custom PETSc (AMR):"
echo "  MPICH:   amr, amr-runtime, amr-dev"
echo "  OpenMPI: amr-openmpi, amr-openmpi-runtime, amr-openmpi-dev"
echo ""
```
The set-env help text advertises environments (openmpi-runtime, amr-openmpi-runtime) that are not defined in pixi.toml (only openmpi/openmpi-dev and amr-openmpi/amr-openmpi-dev exist). This will send users to invalid env names; either add the missing runtime override environments in pixi.toml or update this help output to only list real options.
```shell
case "$feature_choice" in
  1) new_env="amr-${other_mpi}" ;;
  3) new_env="amr-${other_mpi}-dev" ;;
  *) new_env="amr-${other_mpi}-dev" ;;  # no runtime override; use dev
esac
```
In the MPI override path for AMR, the Runtime tier (feature_choice = 2) is mapped to amr-${other_mpi}-dev with a comment saying there is no runtime override. This means a user explicitly choosing “Runtime” will silently get the heavier “Dev” environment (extra tooling) instead of the requested feature tier. Consider adding amr-<mpi>-runtime environments (and selecting them here) or adjusting the wizard to not offer the Runtime option when MPI override envs don’t exist.
```shell
case "$feature_choice" in
  1) new_env="${other_mpi}" ;;
  3) new_env="${other_mpi}-dev" ;;
  *) new_env="${other_mpi}-dev" ;;  # no runtime override; use dev
esac
```
In the non-AMR MPI override path, the Runtime tier (feature_choice = 2) is mapped to ${other_mpi}-dev because there is no runtime override env. This overrides the user’s feature-tier selection and pulls in dev-only dependencies unexpectedly. Either add <mpi>-runtime environments in pixi.toml or change the wizard flow so the Runtime tier remains Runtime (or is not offered) when selecting a non-default MPI.
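One of the remedies this comment (and the AMR one above) proposes could be sketched as follows: keep the Runtime tier honest by refusing the override instead of silently upgrading to dev. The helper name and error wording are illustrative, not the PR's code:

```shell
# Sketch: map a feature-tier choice to an MPI-override env name,
# failing loudly when the requested tier has no override environment.
pick_override_env() {
  # $1: feature_choice (1=minimal, 2=runtime, 3=dev); $2: other_mpi
  case "$1" in
    1) echo "$2" ;;
    2) echo "error: no $2-runtime environment; choose minimal or dev" >&2
       return 1 ;;
    3) echo "$2-dev" ;;
  esac
}
```

The alternative remedy, adding `<mpi>-runtime` environments to `pixi.toml`, would let case `2` simply echo `"$2-runtime"`.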
```
│ default     │ conda-forge      │ minimal           │
│ runtime     │ conda-forge      │ + pyvista, jupyter│
│ dev         │ conda-forge      │ + claude, black   │
├─────────────┼──────────────────┼───────────────────┤
```
The help table says the dev environment is “+ claude, black”, but in pixi.toml the dev environment includes the runtime feature as well (features = ["conda-petsc", "runtime", "dev"]), so it also includes viz/Jupyter dependencies. Updating the table wording (and similarly for amr-dev) would keep the CLI help aligned with the actual environment contents.
Complements the OpenMPI switch (#75) by capping all BLAS/LAPACK thread-pool variables (MKL_NUM_THREADS, OPENBLAS_NUM_THREADS, VECLIB_MAXIMUM_THREADS, NUMEXPR_MAX_THREADS) at runtime, in addition to OMP_NUM_THREADS. This prevents MPI+BLAS oversubscription, which causes performance degradation on multi-core systems. The caps are applied at module import time in __init__.py, ensuring they take effect regardless of how underworld3 is launched.

Underworld development team with AI support from Claude Code


