
Conversation

@breyerml breyerml (Member) commented Mar 27, 2025

Add support for distributed memory systems via MPI to all backends and the Python bindings.
Currently, only rudimentary support is provided, i.e., the data set is read by all MPI ranks, and every MPI rank stores the full data matrix on its device. Only the kernel matrix assembly and the BLAS operations are distributed.
Wraps all MPI calls so that PLSSVM can still be built if MPI is not found.
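
A common way to provide such optional MPI support is a thin wrapper that degrades to single-process no-ops when MPI is unavailable. The following C++ sketch illustrates the pattern only; the macro `PLSSVM_HAS_MPI` and the `plssvm::mpi::communicator` class are hypothetical names, not the actual PLSSVM API:

```cpp
#include <cstdlib>  // std::abort

// Hypothetical sketch: PLSSVM_HAS_MPI and plssvm::mpi are illustrative
// names, not the real PLSSVM implementation.
#if defined(PLSSVM_HAS_MPI)
    #include <mpi.h>
#endif

namespace plssvm::mpi {

class communicator {
  public:
    // rank of the calling process; always 0 when built without MPI
    [[nodiscard]] int rank() const {
#if defined(PLSSVM_HAS_MPI)
        int r{};
        MPI_Comm_rank(MPI_COMM_WORLD, &r);
        return r;
#else
        return 0;
#endif
    }

    // total number of ranks; always 1 when built without MPI
    [[nodiscard]] int size() const {
#if defined(PLSSVM_HAS_MPI)
        int s{};
        MPI_Comm_size(MPI_COMM_WORLD, &s);
        return s;
#else
        return 1;
#endif
    }

    // abort all ranks on a fatal error (matching the MPI_Abort on
    // MPI_COMM_WORLD mentioned in the commit messages below);
    // falls back to std::abort() in MPI-free builds
    [[noreturn]] void abort(const int error_code) const {
#if defined(PLSSVM_HAS_MPI)
        MPI_Abort(MPI_COMM_WORLD, error_code);
#endif
        std::abort();
    }
};

}  // namespace plssvm::mpi
```

With such a wrapper, the rest of the code base can call `comm.rank()`, `comm.size()`, or `comm.abort()` unconditionally, and only this one translation unit needs to know whether MPI was found at configure time.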
Also implements a rudimentary load-balancing scheme for MPI.
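
The PR does not spell the scheme out here, but a plausible minimal form (a sketch under assumptions, not the actual PLSSVM code) exploits the symmetry of the kernel matrix: only a triangular part is assembled, so an even split of rows would overload the ranks holding the long rows. Partitioning rows such that every rank receives roughly the same number of triangular entries balances the work:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical sketch, not the actual PLSSVM code: split the rows of an
// n x n symmetric kernel matrix into num_ranks contiguous ranges that each
// contain roughly the same number of lower-triangular entries.
std::vector<std::pair<std::size_t, std::size_t>> partition_rows(const std::size_t n, const int num_ranks) {
    const double total = static_cast<double>(n) * static_cast<double>(n + 1) / 2.0;
    std::vector<std::pair<std::size_t, std::size_t>> ranges{};
    std::size_t row = 0;
    for (int rank = 0; rank < num_ranks; ++rank) {
        // after this rank, (rank + 1) / num_ranks of all entries should be assigned
        const double target = total * (rank + 1) / num_ranks;
        // entries already assigned to previous ranks: rows [0, row)
        double assigned = static_cast<double>(row) * static_cast<double>(row + 1) / 2.0;
        std::size_t end = row;
        while (end < n && assigned < target) {
            assigned += static_cast<double>(end) + 1.0;  // row `end` has end + 1 triangular entries
            ++end;
        }
        ranges.emplace_back(row, end);  // this rank assembles rows [row, end)
        row = end;
    }
    return ranges;
}
```

For example, with n = 4 and two ranks this assigns rows [0, 3) to rank 0 (6 triangular entries) and row 3 to rank 1 (4 entries), instead of the 3-entry/7-entry split a naive even row count would produce.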

Additionally, greatly improves the API exposed by the Python bindings.

…all MPI_Abort (on MPI_COMM_WORLD) if necessary.
…, each MPI rank performs the classification entirely on its own!
# Conflicts:
#	CMakeLists.txt
#	README.md
breyerml added 28 commits April 24, 2025 23:15
…-math to remove an undefined behavior (UB) problem due to fast_float's internal use of std::numeric_limits<>::infinity().
… space. If more than one device is found, a warning is printed and only the first device is used. Multi-GPU support can be enabled using the CMake configuration option PLSSVM_KOKKOS_BACKEND_SYCL_ENABLE_MULTI_GPU.
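
For reference, a CMake option like the one named in this commit message would typically be set at configure time, e.g. `cmake -DPLSSVM_KOKKOS_BACKEND_SYCL_ENABLE_MULTI_GPU=ON ..` (the invocation is illustrative; only the option name is taken from the commit message above).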
@breyerml breyerml merged commit 4e6ea04 into develop Apr 29, 2025
11 of 13 checks passed
@breyerml breyerml deleted the mpi branch April 29, 2025 14:58