
Py HiPO #2756

Closed

galabovaa wants to merge 74 commits into latest from py-l

Conversation

@galabovaa
Contributor

@galabovaa galabovaa commented Jan 17, 2026

Not ready to merge

Just for reference for @mathgeekcoder

This is what I had, with some modifications, so that HiGHS with HiPO is built with:

python -m pip install . `
    --config-settings=cmake.define.HIPO=ON `
    --config-settings=cmake.define.BUILD_OPENBLAS=ON

@MaykThewessen

Successfully built highspy with HiPO on macOS ARM64 (M1 Pro) using the latest branch

We used this PR as a reference to build highspy with HiPO support from the current latest branch (Python 3.14, macOS 15.5, Apple M1 Pro). Sharing our experience in case it helps:

What worked:

  • The approach from this PR (patching CMakeLists.txt + adding HiPO sources to cmake/sources-python.cmake and cmake/python-highs.cmake) works correctly.
  • On macOS, Apple Accelerate provides BLAS — no need for BUILD_OPENBLAS.
  • Build command: pip install . --config-settings=cmake.define.HIPO=ON
  • HiPO solver produces identical results to IPX on test LPs (same iteration count, same objective).

What needed updating for latest branch:
The source file lists in this PR are outdated relative to the current latest branch; two files have been added since this PR was created:

  • highs/ipm/hipo/ipm/PreProcess.cpp / PreProcess.h (if missing, dlopen fails with the unresolved symbol _ZN4hipo12Preprocessor5applyERNS_5ModelE)
  • highs/ipm/hipo/auxiliary/OrderingPrint.h

Adding these to hipo_sources_python / hipo_headers_python / hipo_util_headers_python in cmake/sources-python.cmake resolves the link errors.
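A minimal sketch of that fix; the list names are taken from the comment above, and whether `list(APPEND ...)` or editing the existing `set(...)` calls is right depends on how cmake/sources-python.cmake is structured in your checkout:

```cmake
# cmake/sources-python.cmake: append the two files added after this PR was opened.
# Paths assume the same repository-root-relative convention as the existing lists.
list(APPEND hipo_sources_python      highs/ipm/hipo/ipm/PreProcess.cpp)
list(APPEND hipo_headers_python      highs/ipm/hipo/ipm/PreProcess.h)
list(APPEND hipo_util_headers_python highs/ipm/hipo/auxiliary/OrderingPrint.h)
```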

Use case: We're running full-year PyPSA-Eur DC-OPF simulations with ~317k variables per weekly chunk, solving 52 chunks sequentially with HiGHS IPM. Interested in HiPO to eliminate the "discarding crash basis" overhead and leverage parallel factorization for these production-scale power system optimizations. Looking forward to this landing in a release!

@jajhall
Member

jajhall commented Mar 12, 2026

Thanks for the feedback on your successful build. I think that this will be familiar to @galabovaa as she works on a universal highspy build with HiPO, but I've tagged her just in case.

@MaykThewessen

macOS ARM64 Build Report & Missing Dependencies

We've been building HiPO-enabled highspy from source on macOS ARM64 (Apple Silicon) for our PyPSA DC-OPF production pipeline. Sharing our patches since they may be useful for this PR.

Build Environment

  • macOS 15.3, Apple M-series
  • Python 3.12 (pixi/conda)
  • BLAS: Apple Accelerate (auto-detected)
  • METIS/AMD: vendored from extern/

Patches Required (on top of latest branch)

1. Remove the Python guard block (CMakeLists.txt):
The if (HIPO AND PYTHON_BUILD_SETUP) error block must be removed or commented out.

2. Add HiPO sources to Python build (cmake/sources-python.cmake):

set(hipo_sources
    hipo/ipm/Solver.cpp hipo/ipm/Control.cpp hipo/ipm/MaxVolume.cpp
    hipo/ipm/Iterate.cpp hipo/ipm/Newton.cpp hipo/ipm/KKTDiagSolver.cpp
    hipo/ipm/AugmentedSolver.cpp hipo/ipm/NormalSolver.cpp
    hipo/ipm/FactorisedSolver.cpp hipo/ipm/Residuals.cpp
    hipo/ipm/Crossover.cpp hipo/ipm/StepCalculator.cpp
    hipo/factorise/FactoriseAnalyse.cpp
    hipo/factorise/FactoriseAssemble.cpp
    hipo/factorise/FactoriseFactorise.cpp
    hipo/factorise/FactoriseSolve.cpp
    hipo/factorise/FactoriseMetis.cpp
    hipo/factorise/FactoriseAMD.cpp
    hipo/factorise/FactoriseRCM.cpp
    hipo/factorise/DependencyGraph.cpp
)
list(TRANSFORM hipo_sources PREPEND "highs/ipm/")
set(highspy_sources ${highspy_sources} ${hipo_sources})

3. Missing files not in PR #2756's lists:

  • highs/presolve/PreProcess.cpp — needed by HiPO, not included in PR's source lists
  • highs/ipm/hipo/factorise/OrderingPrint.h — header included by factorise code but missing from install targets

Build Command

pip install /path/to/HiGHS \
    --config-settings=cmake.define.HIPO=ON \
    --force-reinstall

Result

>>> import highspy
>>> h = highspy.Highs()
>>> h.setOptionValue('solver', 'hipo')  # Accepted without error

However, in our benchmarks HiPO reports simplex-pami as the actual backend, suggesting it may fall back to simplex on our LP problems (168h DC-OPF, ~600k constraints, run_crossover: 'off'). This could be a configuration or problem-type issue rather than a build issue.

BLAS Detection Note

On macOS, CMake's FindBLAS finds Accelerate but the HiGHS CMakeLists.txt checks for LAPACK in the BLAS library name for some code paths. Accelerate provides LAPACK but the library name doesn't match the string check. This doesn't block the build but may affect some optimized paths.

@MaykThewessen

Update: HiPO Benchmark Results (corrected build)

Following up on our previous reports — we identified and fixed an issue with our build where HiPO was silently falling back to simplex-pami. The root cause was missing --no-build-isolation and FAST_BUILD=ON in our pip install command, which produced a binary without HiPO actually compiled in. HiGHS accepted setOptionValue('solver', 'hipo') without error but internally reverted to choose → simplex.

Corrected build command

pip install /path/to/HiGHS \
    --config-settings=cmake.define.HIPO=ON \
    --config-settings=cmake.define.FAST_BUILD=ON \
    --no-build-isolation \
    --force-reinstall

Benchmark results — DC-OPF (PyPSA, Netherlands grid, ~100 buses, 10 threads, macOS M1 Pro)

| Solver | 168h (7d) | 336h (14d) | 672h (28d) | Scaling 168→672h | Backend confirmed |
| --- | --- | --- | --- | --- | --- |
| simplex (PyPI) | 38.8s | 158.2s | 559.5s | 14.4× | simplex-pami |
| ipm/IPX (PyPI) | 51.3s | 197.4s | 748.0s | 14.6× | ipx |
| hipo (source) | 68.0s | 287.2s | 912.4s | 13.4× | hipo |

Key observations

  1. HiPO is genuinely running now — backend reports hipo instead of simplex-pami
  2. HiPO is ~1.7× slower than simplex on our LP problems (~100k variables, ~600k constraints per 168h chunk). The parallel supernodal factorization overhead doesn't pay off at this problem size.
  3. HiPO scales slightly better than IPX (13.4× vs 14.6× from 168h→672h), suggesting the parallel factorization becomes more valuable as problem size grows.
  4. IPX shows zero thread scaling — identical times with 1 thread vs 10 threads (~51s). The sequential factorization dominates.
  5. Simplex (PAMI) is fastest at all sizes, with good parallel utilization (~588% CPU on 10 cores).
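
The scaling column in the table above can be reproduced from the raw timings; a quick sanity check in Python (times copied verbatim from the table):

```python
# Reproduce the "Scaling 168→672h" column from the benchmark table:
# ratio of the 672h solve time to the 168h solve time, per solver.
times = {
    "simplex (PyPI)": (38.8, 559.5),
    "ipm/IPX (PyPI)": (51.3, 748.0),
    "hipo (source)":  (68.0, 912.4),
}
scaling = {name: round(t672 / t168, 1) for name, (t168, t672) in times.items()}
print(scaling)  # {'simplex (PyPI)': 14.4, 'ipm/IPX (PyPI)': 14.6, 'hipo (source)': 13.4}
```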

Cost-of-energy consistency

  • PyPI builds (simplex, IPX): 15.19 EUR/MWh (168h), 21.43 (336h), 17.98 (672h)
  • Source build (HiPO): 15.75 EUR/MWh (168h), 22.30 (336h) — slight numerical difference, likely due to different convergence paths or tolerance handling

Silent fallback detection

One thing worth noting for other users: when HiPO is not compiled in, setOptionValue('solver', 'hipo') does not raise an exception. It prints to stderr and silently reverts to choose. We detect this by checking stderr for "compiled without HiPO" after setting the option. A proper error or at least a return status would be helpful here.
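
A sketch of the stderr check we use, assuming the warning contains the text "compiled without" (observed on our build, not a documented interface). HiGHS emits the message from C++ directly to file descriptor 2, so `contextlib.redirect_stderr` would miss it; duplicating the fd catches it:

```python
import os
import tempfile

def set_solver_checked(h, name):
    """Set the HiGHS 'solver' option and fail loudly on a silent fallback.

    Captures file descriptor 2 directly, because the warning is printed by
    native code, not through Python's sys.stderr. The "compiled without"
    message text is an assumption based on one observed build.
    """
    saved_fd = os.dup(2)
    with tempfile.TemporaryFile(mode="w+b") as tmp:
        os.dup2(tmp.fileno(), 2)
        try:
            h.setOptionValue("solver", name)
        finally:
            os.dup2(saved_fd, 2)
            os.close(saved_fd)
        tmp.seek(0)
        captured = tmp.read().decode(errors="replace")
    if "compiled without" in captured:
        raise RuntimeError(f"solver {name!r} unavailable: {captured.strip()}")
```

Usage is `set_solver_checked(highspy.Highs(), "hipo")` in place of a bare `setOptionValue` call; a return status from HiGHS itself would of course make this workaround unnecessary.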

Thanks again for the work on HiPO and this PR!

@MaykThewessen

Addendum: Simplex (PAMI) thread scaling on same DC-OPF workload (168h chunk)

| Threads | HiGHS time | Speedup | CPU utilization |
| --- | --- | --- | --- |
| 1 | 107.9s | 1.0× | ~100% |
| 4 | 85.4s | 1.26× | ~350% |
| 10 | 38.8s | 2.78× | ~555% |

Simplex shows real but sub-linear scaling — 2.78× on 10 cores. For comparison, IPX shows essentially zero thread scaling (51.3s at both 1t and 10t), and HiPO at 10t is 68.0s.

@jajhall
Member

jajhall commented Mar 12, 2026

Thanks for your observations. It's interesting to see you getting good speedup with Simplex (PAMI) - yes 2.78x on 10 threads is good!

Your observations on HiPO would appear to resonate with the results of experiments that @filikat showed me this morning. Whilst HiPO offers great performance gain on large, structured models, it didn't on some exceptionally sparse problems from PyPSA. Whilst not all PyPSA models have the same characteristics, it's quite a coincidence. @filikat is developing a simplified factorization strategy for such problems.

@galabovaa
Contributor Author

Closing this; a larger restructure is in progress.

@galabovaa galabovaa closed this Mar 30, 2026
