Integrate PSLP presolver in cuOpt #816

Open
rg20 wants to merge 17 commits into NVIDIA:release/26.02 from rg20:integrate_pslp

Conversation


@rg20 rg20 commented Jan 29, 2026

Description

This PR adds support for using the PSLP presolver for LPs, enabled with --presolve 2. For large LP problems (more than 1M constraints or more than 1M variables), PSLP is the default presolver.

Issue

Checklist

  • I am familiar with the Contributing Guidelines.
  • Testing
    • New or existing tests cover these changes
    • Added tests
    • Created an issue to follow-up
    • NA
  • Documentation
    • The documentation is up to date with these changes
    • Added new documentation
    • NA

Summary by CodeRabbit

Release Notes

  • New Features

    • Added PSLP presolver as an alternative preprocessing option alongside the existing Papilo presolver.
    • Introduced presolver selection with four options: Default (automatic), None (disabled), Papilo, or PSLP for enhanced control over preprocessing strategy.
  • Refactor

    • Replaced boolean presolve flag with enum-based presolver in solver settings for more granular preprocessing control.

@rg20 rg20 requested review from a team as code owners January 29, 2026 15:53
@rg20 rg20 requested review from Iroy30 and gforsyth January 29, 2026 15:53

copy-pr-bot bot commented Jan 29, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@rg20 rg20 requested review from akifcorduk and chris-maes January 29, 2026 15:53
@rg20 rg20 added feature request New feature or request non-breaking Introduces a non-breaking change labels Jan 29, 2026
@rg20 rg20 added this to the 26.02 milestone Jan 29, 2026

coderabbitai bot commented Jan 29, 2026

📝 Walkthrough

This PR introduces PSLP as a new presolver option alongside existing Papilo support. It replaces the boolean presolve flag with a presolver_t enum throughout the codebase and integrates PSLP via FetchContent. Additionally, it includes algorithmic optimizations to the folding algorithm using Paige-Tarjan partitioning and a bug fix in barrier histogram calculations.

Changes

Cohort / File(s) Summary
PSLP Integration & Build System
cpp/CMakeLists.txt, cpp/include/cuopt/linear_programming/constants.h, cpp/tests/CMakeLists.txt
Added PSLP as a FetchContent dependency (v0.0.4), exposed presolver macro constants (Default, Off, Papilo, PSLP), and updated test include paths for PSLP/Papilo headers.
Presolver Type Definition & Settings API
cpp/include/cuopt/linear_programming/utilities/internals.hpp, cpp/include/cuopt/linear_programming/mip/solver_settings.hpp, cpp/include/cuopt/linear_programming/pdlp/solver_settings.hpp
Introduced presolver_t enum with four options mapped to macro constants; replaced boolean presolve field with presolver enum across MIP and PDLP solver settings.
Presolve Implementation with PSLP
cpp/src/mip/presolve/third_party_presolve.hpp, cpp/src/mip/presolve/third_party_presolve.cpp
Integrated PSLP presolver with build/run/undo scaffolding; refactored Papilo problem construction to accept maximize parameter; added apply_pslp() entry point and routing logic; extended destructor for resource cleanup.
LP/MIP Solver Core Logic
cpp/src/linear_programming/solve.cu, cpp/src/mip/solve.cu, cpp/src/mip/solver.cu, cpp/src/mip/diversity/diversity_manager.cu, cpp/src/math_optimization/solver_settings.cu
Updated presolve decision logic to check presolver != None instead of boolean flag; added presolver routing in apply() calls; mapped presolver parameter to int_parameters for external configuration; normalized presolver defaults in solve paths.
Dual Simplex Optimizations
cpp/src/dual_simplex/barrier.cu, cpp/src/dual_simplex/folding.cpp
Fixed histogram iteration and dense-row counting logic in barrier; implemented Paige-Tarjan optimization for color splitting, refactored containers to use ordered map, added comprehensive debug/verification blocks for matrix products and partition checks.
Test Updates
cpp/tests/linear_programming/pdlp_test.cu, cpp/tests/mip/doc_example_test.cu, cpp/tests/mip/integer_with_real_bounds.cu, cpp/tests/mip/mip_utils.cuh, cpp/tests/mip/presolve_test.cu, cpp/tests/mip/unit_test.cu, cpp/tests/mip/incumbent_callback_test.cu, cpp/tests/mip/cuts_test.cu
Systematically replaced boolean presolve flag usage with presolver_t enum values (Papilo, None) across LP and MIP test cases; updated function signatures and test invocations to pass presolver instead of boolean.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 2.70%, which is below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'Integrate PSLP presolver in cuOpt' is concise, specific, and accurately reflects the main change in the changeset: adding PSLP presolver support.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
cpp/tests/CMakeLists.txt (1)

40-46: Remove the hard-coded PSLP include path.

The absolute /home/nfs/... path will break builds on any other machine/CI. Please use the PSLP target’s include dirs or FetchContent variables.

🔧 Suggested fix (target-based include)
-        "/home/nfs/rgandham/git-repos/PSLP/include/PSLP"
+        "$<TARGET_PROPERTY:PSLP,INTERFACE_INCLUDE_DIRECTORIES>"
cpp/src/mip/presolve/third_party_presolve.cpp (1)

629-691: Add null check and postsolve error handling in undo_pslp.

The undo_pslp method dereferences pslp_presolver_ without validation (lines 682, 689–691) and doesn't check the postsolve() result. In contrast, the Papilo postsolve path explicitly checks status via check_postsolve_status() (line 655). Add a guard assertion and status check to match the error handling pattern used for Papilo:

🧯 Guard against uninitialized PSLP state
 void third_party_presolve_t<i_t, f_t>::undo_pslp(rmm::device_uvector<f_t>& primal_solution,
                                                  rmm::device_uvector<f_t>& dual_solution,
                                                  rmm::device_uvector<f_t>& reduced_costs,
                                                  rmm::cuda_stream_view stream_view)
 {
+  cuopt_expects(pslp_presolver_ != nullptr,
+                error_type_t::RuntimeError,
+                "PSLP presolver not initialized");
   std::vector<f_t> h_primal_solution(primal_solution.size());
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/barrier.cu`:
- Around line 1155-1158: The histogram output loop omits the last bin (nnz == n)
because histogram_row has size n+1 but the for loop uses "for (i_t k = 0; k < n;
k++)"; update the loop that prints histogram_row (the block using
settings.log.printf and the loop variable k) to iterate through all n+1 bins
(e.g., change the loop condition to include k == n) so entries where row_nz == n
are printed.

In `@cpp/src/math_optimization/solver_settings.cu`:
- Around line 91-93: The CUOPT_PRESOLVE parameter is now treated as an int but
callers (e.g., set_parameter_from_string or boolean setters) may still pass
"true"/"false" or bools; update the parameter parsing/shim to accept
boolean-like values and map them to the int enums (e.g., DEFAULT/OFF) before
assigning to pdlp_settings.presolver and mip_settings.presolver, or add a branch
in the string-to-parameter logic that recognizes "true"/"false"/"1"/"0" and
converts them to the corresponding CUOPT_PRESOLVE_* constants so existing
boolean-based calls do not throw when setting CUOPT_PRESOLVE.

In `@cpp/src/mip/presolve/third_party_presolve.cpp`:
- Around line 528-539: Before overwriting pslp_presolver_ and pslp_stgs_ in
third_party_presolve_t<i_t,f_t>::apply_pslp, check if they are non-null and
release their prior allocations (call the appropriate PSLP destructor/free
function or delete depending on how PSLP objects are managed), then set them to
the new ctx.presolver and ctx.settings; specifically, free the existing
pslp_presolver_ and pslp_stgs_ members before assigning new values to avoid
leaking prior PSLP resources when apply_pslp is called repeatedly.
- Line 8: Remove the unused include of boost/serialization/void_cast.hpp from
third_party_presolve.cpp: delete the line "#include
<boost/serialization/void_cast.hpp>" (ensure no other code in this file
references Boost Serialization or void_cast so the build remains clean).
- Around line 180-283: Replace the assert and ignored presolve result in
build_and_run_pslp_presolver with explicit runtime checks: change the function
to return std::optional<PSLPContext>, after calling new_presolver(...) check if
ctx.presolver is nullptr and return std::nullopt (cleaning up settings if
needed); then call run_presolver(ctx.presolver) and if the returned
PresolveStatus is not success (explicitly handle kInfeasible, kUnbndOrInfeas and
any non-success), free/delete ctx.presolver and return std::nullopt; only return
a populated PSLPContext when new_presolver succeeded and run_presolver returned
success, ensuring callers (e.g., code reading ctx.presolver->reduced_prob) won't
dereference an invalid presolver.

In `@cpp/src/mip/presolve/third_party_presolve.hpp`:
- Around line 58-76: The class currently owning raw pointers pslp_stgs_ and
pslp_presolver_ can be accidentally copied and cause double-free; update the
third_party_presolve_t class to be non-copyable/non-movable by explicitly
deleting the copy constructor, copy assignment operator, move constructor, and
move assignment operator (i.e., add declarations like
third_party_presolve_t(const third_party_presolve_t&) = delete; and similar for
operator= and move variants) so instances that own pslp_stgs_ and
pslp_presolver_ cannot be copied or moved; alternatively, replace those raw
pointers with RAII types (unique_ptr) if you prefer move-only semantics, but the
quick fix is to delete copy/move for third_party_presolve_t.

In `@cpp/src/mip/solve.cu`:
- Around line 150-153: The code in the mip_solver_settings_t constructor block
currently silently replaces settings.presolver when it's presolver_t::Default or
presolver_t::PSLP with presolver_t::Papilo; update this to either emit a clear
warning or return an error instead of silently overriding: detect when
settings.presolver == presolver_t::PSLP (and optionally when ==
presolver_t::Default), log a warning via the existing logging mechanism (or
propagate an error/exception) that "PSLP not supported for MIP, falling back to
Papilo" (or require user change), and only perform the assignment to
presolver_t::Papilo after logging/handling so the change is explicit; refer to
mip_solver_settings_t, settings.presolver and presolver_t::PSLP /
presolver_t::Papilo when locating and modifying the code.
🧹 Nitpick comments (2)
benchmarks/linear_programming/run_mps_files.sh (1)

417-417: Make tolerances overrideable (and verify CLI flag support).
Hard-coding 1.0e-6 across all runs limits benchmarking flexibility; consider env-based defaults so users can adjust without editing the script. Also please confirm these flags are supported by the cuopt_cli version used in CI to avoid runtime failures.

♻️ Example with env-defaults (single-line change)
-        CUDA_VISIBLE_DEVICES=$gpu_devices cuopt_cli "$mps_file" --time-limit $TIME_LIMIT $args --relative-primal-tolerance 1.0e-6 --relative-dual-tolerance 1.0e-6 --relative-gap-tolerance 1.0e-6 --absolute-primal-tolerance 1.0e-6 --absolute-dual-tolerance 1.0e-6 --absolute-gap-tolerance 1.0e-6
+        CUDA_VISIBLE_DEVICES=$gpu_devices cuopt_cli "$mps_file" --time-limit $TIME_LIMIT $args \
+          --relative-primal-tolerance ${RELATIVE_PRIMAL_TOLERANCE:-1.0e-6} \
+          --relative-dual-tolerance ${RELATIVE_DUAL_TOLERANCE:-1.0e-6} \
+          --relative-gap-tolerance ${RELATIVE_GAP_TOLERANCE:-1.0e-6} \
+          --absolute-primal-tolerance ${ABSOLUTE_PRIMAL_TOLERANCE:-1.0e-6} \
+          --absolute-dual-tolerance ${ABSOLUTE_DUAL_TOLERANCE:-1.0e-6} \
+          --absolute-gap-tolerance ${ABSOLUTE_GAP_TOLERANCE:-1.0e-6}
cpp/src/dual_simplex/folding.cpp (1)

224-280: Guard non‑finite sums before using them as std::map keys.

Line 247 and Line 279 insert floating sums into ordered maps; if any sum is NaN/Inf, std::map’s ordering is violated and keys can collapse, leading to incorrect splits. Consider a lightweight finiteness check before insertion.

🛠️ Suggested guard
   sum_to_sizes.clear();
   color_sums.clear();
   for (i_t v : vertices_to_refine_by_color[color]) {
-    sum_to_sizes[vertex_to_sum[v]]++;
+    const f_t sum = vertex_to_sum[v];
+    if (!std::isfinite(sum)) {
+      printf("Folding: non-finite sum for vertex %d\n", v);
+      exit(1);
+    }
+    sum_to_sizes[sum]++;
   }

As per coding guidelines: Check numerical stability: prevent overflow/underflow, precision loss, division by zero/near-zero, and use epsilon comparisons for floating-point equality checks.

Comment on lines 1155 to 1158

    settings.log.printf("Row Nz  # rows\n");
    for (i_t k = 0; k < n; k++) {
      if (histogram_row[k] > 0) { settings.log.printf("%6d %6d\n", k, histogram_row[k]); }
    }

@coderabbitai coderabbitai bot Jan 29, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Include the last histogram bin (nnz == n).

histogram_row is sized n + 1, but the loop stops at k < n, so rows with row_nz == n are never shown in HISTOGRAM output.

🛠️ Suggested fix
-    for (i_t k = 0; k < n; k++) {
+    for (i_t k = 0; k < static_cast<i_t>(histogram_row.size()); k++) {
       if (histogram_row[k] > 0) { settings.log.printf("%6d %6d\n", k, histogram_row[k]); }
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
     settings.log.printf("Row Nz  # rows\n");
-    for (i_t k = 0; k < n; k++) {
+    for (i_t k = 0; k < static_cast<i_t>(histogram_row.size()); k++) {
       if (histogram_row[k] > 0) { settings.log.printf("%6d %6d\n", k, histogram_row[k]); }
     }
🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/barrier.cu` around lines 1155 - 1158, The histogram
output loop omits the last bin (nnz == n) because histogram_row has size n+1 but
the for loop uses "for (i_t k = 0; k < n; k++)"; update the loop that prints
histogram_row (the block using settings.log.printf and the loop variable k) to
iterate through all n+1 bins (e.g., change the loop condition to include k == n)
so entries where row_nz == n are printed.

@rg20 (Contributor, Author) commented:
This is just for printing. Should not impact anything.

@coderabbitai coderabbitai bot replied:

@rg20, you're absolutely right—this is debug output under #ifdef HISTOGRAM, so it won't affect the correctness of the barrier solver or any production behavior. The missing last bin would only matter if someone were analyzing the full sparsity distribution during development, but it's not critical for the algorithm itself.

Feel free to address it in future cleanup if needed, but it's reasonable to leave it as-is given its limited scope. Thanks for clarifying!


🧠 Learnings used
Learnt from: chris-maes
Repo: NVIDIA/cuopt PR: 746
File: cpp/src/dual_simplex/barrier.cu:549-552
Timestamp: 2026-01-14T00:38:38.038Z
Learning: In cpp/src/dual_simplex/barrier.cu's form_adat() method within the barrier solver, the raft::copy from d_original_A_values to device_AD.x is necessary and must not be removed. This copy restores the original matrix A values before they are scaled by the diagonal matrix D (via d_inv_diag_prime). Since D changes between iterations and the scaling mutates device_AD.x, copying the original values each iteration is required for correctness.

Learnt from: chris-maes
Repo: NVIDIA/cuopt PR: 602
File: cpp/src/linear_programming/solve.cu:732-742
Timestamp: 2025-12-04T20:09:09.264Z
Learning: In cpp/src/linear_programming/solve.cu, the barrier solver does not currently return INFEASIBLE or UNBOUNDED status. It only returns OPTIMAL, TIME_LIMIT, NUMERICAL_ISSUES, or CONCURRENT_LIMIT.

Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Validate correct initialization of variable bounds, constraint coefficients, and algorithm state before solving; ensure reset when transitioning between algorithm phases (presolve, simplex, diving, crossover)

Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Validate algorithm correctness in optimization logic: simplex pivots, branch-and-bound decisions, routing heuristics, and constraint/objective handling must produce correct results

Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Refactor code duplication in solver components (3+ occurrences) into shared utilities; for GPU kernels, use templated device functions to avoid duplication

Learnt from: aliceb-nv
Repo: NVIDIA/cuopt PR: 721
File: cpp/src/linear_programming/optimization_problem.cu:106-205
Timestamp: 2026-01-29T11:25:28.213Z
Learning: The function `csr_matrices_equivalent_with_permutation` in `cpp/src/linear_programming/optimization_problem.cu` is only used in CI testing with known inputs, so strict problem size validation is not required.

Learnt from: chris-maes
Repo: NVIDIA/cuopt PR: 500
File: cpp/src/dual_simplex/sparse_matrix.cpp:519-524
Timestamp: 2025-12-03T23:29:26.391Z
Learning: In cpp/src/dual_simplex/sparse_matrix.cpp, the check_matrix() function is debug/diagnostic code (wrapped in `#ifdef` CHECK_MATRIX) that intentionally prints errors without necessarily returning early. The return codes from this debug code are not actively checked; the purpose is to print all validation errors in one pass for better diagnostics.

Comment on lines 91 to 94
{CUOPT_NUM_GPUS, &mip_settings.num_gpus, 1, 2, 1},
{CUOPT_MIP_BATCH_PDLP_STRONG_BRANCHING, &mip_settings.mip_batch_pdlp_strong_branching, 0, 1, 0},
{CUOPT_PRESOLVE, reinterpret_cast<int*>(&pdlp_settings.presolver), CUOPT_PRESOLVE_DEFAULT, CUOPT_PRESOLVE_PSLP, CUOPT_PRESOLVE_DEFAULT},
{CUOPT_PRESOLVE, reinterpret_cast<int*>(&mip_settings.presolver), CUOPT_PRESOLVE_DEFAULT, CUOPT_PRESOLVE_PSLP, CUOPT_PRESOLVE_DEFAULT}

⚠️ Potential issue | 🟡 Minor

Presolve string/bool compatibility regression risk.

Now that CUOPT_PRESOLVE is int-only, set_parameter_from_string("CUOPT_PRESOLVE", "true") or a bool-based setter will throw. If you need backward compatibility, consider accepting boolean strings and mapping them to DEFAULT/OFF.

🔧 Proposed compatibility shim (outside this segment)
@@ void solver_settings_t<i_t, f_t>::set_parameter_from_string(...)
-      } else {
-        throw std::invalid_argument("Parameter " + name + " value " + value + " is not an integer");
-      }
+      } else {
+        if (param.param_name == CUOPT_PRESOLVE) {
+          bool value_bool{};
+          if (string_to_bool(value, value_bool)) {
+            *param.value_ptr = value_bool ? CUOPT_PRESOLVE_DEFAULT : CUOPT_PRESOLVE_OFF;
+            found = true;
+            if (!output) {
+              CUOPT_LOG_INFO("Setting parameter %s to %d", name.c_str(), *param.value_ptr);
+              output = true;
+            }
+            continue;
+          }
+        }
+        throw std::invalid_argument("Parameter " + name + " value " + value + " is not an integer");
+      }
🤖 Prompt for AI Agents
In `@cpp/src/math_optimization/solver_settings.cu` around lines 91 - 93, The
CUOPT_PRESOLVE parameter is now treated as an int but callers (e.g.,
set_parameter_from_string or boolean setters) may still pass "true"/"false" or
bools; update the parameter parsing/shim to accept boolean-like values and map
them to the int enums (e.g., DEFAULT/OFF) before assigning to
pdlp_settings.presolver and mip_settings.presolver, or add a branch in the
string-to-parameter logic that recognizes "true"/"false"/"1"/"0" and converts
them to the corresponding CUOPT_PRESOLVE_* constants so existing boolean-based
calls do not throw when setting CUOPT_PRESOLVE.

Comment on line 8

*/
/* clang-format on */

#include <boost/serialization/void_cast.hpp>

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, check if the file exists and read the relevant lines around line 8
cat -n cpp/src/mip/presolve/third_party_presolve.cpp | head -30

Repository: NVIDIA/cuopt

Length of output: 1265


🏁 Script executed:

# Search for any usage of void_cast or related serialization in the file
rg "void_cast" cpp/src/mip/presolve/third_party_presolve.cpp
rg "serialization" cpp/src/mip/presolve/third_party_presolve.cpp

Repository: NVIDIA/cuopt

Length of output: 145


🏁 Script executed:

# Check build files for Boost Serialization dependency
fd CMakeLists.txt | xargs rg -l "Boost.*[Ss]erialization|serialization" 2>/dev/null

Repository: NVIDIA/cuopt

Length of output: 40


🏁 Script executed:

# Check if boost/serialization headers are used anywhere in the codebase
rg "boost/serialization" --include="*.cpp" --include="*.hpp" --include="*.h" | head -20

Repository: NVIDIA/cuopt

Length of output: 497


🏁 Script executed:

# Search for actual usage of void_cast or BOOST_CLASS_EXPORT (which often use this header)
rg "BOOST_CLASS_EXPORT|BOOST_SERIALIZATION|BOOST_REGISTER" cpp/src/mip/presolve/third_party_presolve.cpp

Repository: NVIDIA/cuopt

Length of output: 38


🏁 Script executed:

# Check the entire content to understand context and usage
wc -l cpp/src/mip/presolve/third_party_presolve.cpp

Repository: NVIDIA/cuopt

Length of output: 105


🏁 Script executed:

# Search for Boost in build configuration files more broadly
find . -name "CMakeLists.txt" -o -name "*.cmake" | xargs rg "Boost" 2>/dev/null | head -30

Repository: NVIDIA/cuopt

Length of output: 461


🏁 Script executed:

# Check if this file was recently added or modified and what the intent might be
git log --oneline -n 20 -- cpp/src/mip/presolve/third_party_presolve.cpp

Repository: NVIDIA/cuopt

Length of output: 149


🏁 Script executed:

# Look for any PSLP-related includes or serialization code around line 8
sed -n '1,50p' cpp/src/mip/presolve/third_party_presolve.cpp

Repository: NVIDIA/cuopt

Length of output: 2083


🏁 Script executed:

# Search the entire file for any serialization-related code
rg -i "serialize|serialize_\|void_cast" cpp/src/mip/presolve/third_party_presolve.cpp

Repository: NVIDIA/cuopt

Length of output: 38


🏁 Script executed:

# Check if this header might be included indirectly from other headers
rg "boost/serialization" cpp/src/mip/presolve/third_party_presolve.hpp
rg "boost/serialization" cpp/src/mip/presolve/gf2_presolve.hpp

Repository: NVIDIA/cuopt

Length of output: 38


🏁 Script executed:

# Verify if Boost Serialization is linked in the CMakeLists for this target
find . -path ./python -prune -o -name "CMakeLists.txt" -type f -exec rg -l "third_party_presolve|serialization" {} \;

Repository: NVIDIA/cuopt

Length of output: 84


🏁 Script executed:

# Look at the actual build configuration for the mip module
cat cmake_files=$(find . -name "CMakeLists.txt" | xargs grep -l "third_party_presolve" 2>/dev/null | head -1) 2>/dev/null && echo "=== CMakeLists ===" && cat "$cmake_files"

Repository: NVIDIA/cuopt

Length of output: 38


🏁 Script executed:

# Search in the entire cpp directory for boost/serialization usage
rg "boost/serialization" cpp/ | head -20

Repository: NVIDIA/cuopt

Length of output: 146


🏁 Script executed:

# Read the mip CMakeLists to check if this target has Boost linked
cat cpp/src/mip/CMakeLists.txt

Repository: NVIDIA/cuopt

Length of output: 2487


🏁 Script executed:

# Double-check: search for any actual usage of serialization macros or void_cast calls
grep -n "BOOST_CLASS_EXPORT\|BOOST_SERIALIZATION\|BOOST_REGISTER\|void_cast" cpp/src/mip/presolve/third_party_presolve.cpp

Repository: NVIDIA/cuopt

Length of output: 102


🏁 Script executed:

# Check if PSLP headers might require this boost header
cat cpp/src/mip/presolve/PSLP_sol.h 2>/dev/null | head -30

Repository: NVIDIA/cuopt

Length of output: 38


🏁 Script executed:

# Search for PSLP_sol.h in the repo
find . -name "PSLP_sol.h" -o -name "PSLP*.h"

Repository: NVIDIA/cuopt

Length of output: 38


🏁 Script executed:

# Check what files are included as "PSLP_sol.h" and look for boost in those paths
fd -t f "PSLP" cpp/

Repository: NVIDIA/cuopt

Length of output: 38


🏁 Script executed:

# Check if boost/serialization is in the root CMakeLists or any build config
cat CMakeLists.txt 2>/dev/null | head -50

Repository: NVIDIA/cuopt

Length of output: 38


🏁 Script executed:

# Look for any dependency configuration that includes Boost Serialization
rg "target_link_libraries.*Boost" cpp/ -A 2 -B 2

Repository: NVIDIA/cuopt

Length of output: 38


🏁 Script executed:

# Final check: does this file compile without this header?
# Search for what might actually require serialization
rg "papilo\|PSLP" cpp/src/mip/presolve/third_party_presolve.cpp | head -10

Repository: NVIDIA/cuopt

Length of output: 38


🏁 Script executed:

# Check if PSLP headers are external dependencies or vendored
find . -type d -name "third_party" -o -name "vendor" -o -name "external" | head -5

Repository: NVIDIA/cuopt

Length of output: 76


🏁 Script executed:

# Look at git to see if PSLP files were added or expected
git log --all --full-history -- "*PSLP*" 2>/dev/null | head -20

Repository: NVIDIA/cuopt

Length of output: 38


🏁 Script executed:

# Check if there's a specific directory structure for third-party includes
ls -la cpp/src/mip/presolve/ | grep -i pslp

Repository: NVIDIA/cuopt

Length of output: 38


🏁 Script executed:

# Look at the actual include statement context more carefully
sed -n '1,20p' cpp/src/mip/presolve/third_party_presolve.cpp | cat -A

Repository: NVIDIA/cuopt

Length of output: 657


🏁 Script executed:

# Search the entire codebase for any reference to PSLP
rg "PSLP" --type-list | head -5
rg "PSLP" . 2>/dev/null | head -20

Repository: NVIDIA/cuopt

Length of output: 2449


🏁 Script executed:

# Check if Boost Serialization is configured in the main C++ build (not just Python)
cat cpp/CMakeLists.txt | grep -i boost

Repository: NVIDIA/cuopt

Length of output: 38


🏁 Script executed:

# Check the PSLP repository to see if it uses boost serialization
# First see what's available in the included PSLP headers
grep -r "void_cast\|serialization" cpp/ 2>/dev/null | grep -v "third_party_presolve.cpp"

Repository: NVIDIA/cuopt

Length of output: 38


🏁 Script executed:

# Verify if the boost header is used anywhere in the build chain
find . -name "*.cpp" -o -name "*.hpp" -o -name "*.h" | xargs grep "boost/serialization" 2>/dev/null

Repository: NVIDIA/cuopt

Length of output: 148


🏁 Script executed:

# Check if this include can simply be removed - does the file compile without it?
# Look at what symbols from void_cast might be used
grep -n "void_cast\|serialize\|BOOST_" cpp/src/mip/presolve/third_party_presolve.cpp

Repository: NVIDIA/cuopt

Length of output: 102


🏁 Script executed:

# Let me check the git history to understand when and why this was added
git log -p --all -- cpp/src/mip/presolve/third_party_presolve.cpp 2>/dev/null | head -100

Repository: NVIDIA/cuopt

Length of output: 38


Remove the unused boost/serialization/void_cast.hpp include.

This header is never referenced in the file and Boost Serialization is not configured in the build. Remove it.

Remove the include
-#include <boost/serialization/void_cast.hpp>
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
#include <boost/serialization/void_cast.hpp>
🧰 Tools
🪛 Clang (14.0.6)

[error] 8-8: 'boost/serialization/void_cast.hpp' file not found

(clang-diagnostic-error)

🤖 Prompt for AI Agents
In `@cpp/src/mip/presolve/third_party_presolve.cpp` at line 8, Remove the unused
include of boost/serialization/void_cast.hpp from third_party_presolve.cpp:
delete the line "#include <boost/serialization/void_cast.hpp>" (ensure no other
code in this file references Boost Serialization or void_cast so the build
remains clean).

Comment on lines 180 to 283
struct PSLPContext {
  Presolver* presolver = nullptr;
  Settings* settings   = nullptr;
};

template <typename i_t, typename f_t>
PSLPContext build_and_run_pslp_presolver(const optimization_problem_t<i_t, f_t>& op_problem,
                                         bool maximize)
{
  PSLPContext ctx;
  raft::common::nvtx::range fun_scope("Build and run PSLP presolver");

  // Get problem dimensions
  const i_t num_cols = op_problem.get_n_variables();
  const i_t num_rows = op_problem.get_n_constraints();
  const i_t nnz      = op_problem.get_nnz();

  // Get problem data from optimization problem
  const auto& coefficients = op_problem.get_constraint_matrix_values();
  const auto& offsets      = op_problem.get_constraint_matrix_offsets();
  const auto& variables    = op_problem.get_constraint_matrix_indices();
  const auto& obj_coeffs   = op_problem.get_objective_coefficients();
  const auto& var_lb       = op_problem.get_variable_lower_bounds();
  const auto& var_ub       = op_problem.get_variable_upper_bounds();
  const auto& bounds       = op_problem.get_constraint_bounds();
  const auto& row_types    = op_problem.get_row_types();
  const auto& constr_lb    = op_problem.get_constraint_lower_bounds();
  const auto& constr_ub    = op_problem.get_constraint_upper_bounds();
  const auto& var_types    = op_problem.get_variable_types();

  // Copy data to host
  std::vector<f_t> h_coefficients(coefficients.size());
  auto stream_view = op_problem.get_handle_ptr()->get_stream();
  raft::copy(h_coefficients.data(), coefficients.data(), coefficients.size(), stream_view);
  std::vector<i_t> h_offsets(offsets.size());
  raft::copy(h_offsets.data(), offsets.data(), offsets.size(), stream_view);
  std::vector<i_t> h_variables(variables.size());
  raft::copy(h_variables.data(), variables.data(), variables.size(), stream_view);
  std::vector<f_t> h_obj_coeffs(obj_coeffs.size());
  raft::copy(h_obj_coeffs.data(), obj_coeffs.data(), obj_coeffs.size(), stream_view);
  std::vector<f_t> h_var_lb(var_lb.size());
  raft::copy(h_var_lb.data(), var_lb.data(), var_lb.size(), stream_view);
  std::vector<f_t> h_var_ub(var_ub.size());
  raft::copy(h_var_ub.data(), var_ub.data(), var_ub.size(), stream_view);
  std::vector<f_t> h_bounds(bounds.size());
  raft::copy(h_bounds.data(), bounds.data(), bounds.size(), stream_view);
  std::vector<char> h_row_types(row_types.size());
  raft::copy(h_row_types.data(), row_types.data(), row_types.size(), stream_view);
  std::vector<f_t> h_constr_lb(constr_lb.size());
  raft::copy(h_constr_lb.data(), constr_lb.data(), constr_lb.size(), stream_view);
  std::vector<f_t> h_constr_ub(constr_ub.size());
  raft::copy(h_constr_ub.data(), constr_ub.data(), constr_ub.size(), stream_view);
  std::vector<var_t> h_var_types(var_types.size());
  raft::copy(h_var_types.data(), var_types.data(), var_types.size(), stream_view);

  if (maximize) {
    for (size_t i = 0; i < h_obj_coeffs.size(); ++i) {
      h_obj_coeffs[i] = -h_obj_coeffs[i];
    }
  }

  auto constr_bounds_empty = h_constr_lb.empty() && h_constr_ub.empty();
  if (constr_bounds_empty) {
    for (size_t i = 0; i < h_row_types.size(); ++i) {
      if (h_row_types[i] == 'L') {
        h_constr_lb.push_back(-std::numeric_limits<f_t>::infinity());
        h_constr_ub.push_back(h_bounds[i]);
      } else if (h_row_types[i] == 'G') {
        h_constr_lb.push_back(h_bounds[i]);
        h_constr_ub.push_back(std::numeric_limits<f_t>::infinity());
      } else if (h_row_types[i] == 'E') {
        h_constr_lb.push_back(h_bounds[i]);
        h_constr_ub.push_back(h_bounds[i]);
      }
    }
  }

  // Call PSLP presolver
  ctx.settings = default_settings();
auto start_time = std::chrono::high_resolution_clock::now();
ctx.presolver = new_presolver(h_coefficients.data(),
h_variables.data(),
h_offsets.data(),
num_rows,
num_cols,
nnz,
h_constr_lb.data(),
h_constr_ub.data(),
h_var_lb.data(),
h_var_ub.data(),
h_obj_coeffs.data(),
ctx.settings);
assert(ctx.presolver != nullptr && "Presolver initialization failed");
PresolveStatus status = run_presolver(ctx.presolver);
auto end_time = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(end_time - start_time);
CUOPT_LOG_INFO("PSLP presolver time: %d milliseconds", duration.count());
CUOPT_LOG_INFO("PSLP Presolved problem: %d constraints, %d variables, %d non-zeros",
ctx.presolver->stats->n_rows_reduced,
ctx.presolver->stats->n_cols_reduced,
ctx.presolver->stats->nnz_reduced);

return ctx;
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

(Collapsed: a series of verification scripts run against the NVIDIA/cuopt repository — locating the PSLP API surface, the `cuopt_expects` macro in `cpp/include/cuopt/error.hpp`, and the callers and cleanup paths of `build_and_run_pslp_presolver` in `cpp/src/mip/presolve/third_party_presolve.cpp`.)

Handle PSLP init/presolve failures explicitly.

assert is compiled out in release builds, and run_presolver's status is ignored. A failed initialization or non‑success status can lead to null derefs or using an invalid reduced problem (e.g., in build_optimization_problem_from_pslp which immediately accesses pslp_presolver->reduced_prob). This diverges from the papilo presolver path which explicitly checks status and returns std::nullopt on failure (lines 589–591).

Replace the assert with a runtime check, propagate non‑success PSLP statuses (like kInfeasible, kUnbndOrInfeas), and return std::nullopt or throw appropriately.

🛠️ Safer init and status check
-  assert(ctx.presolver != nullptr && "Presolver initialization failed");
+  cuopt_expects(ctx.presolver != nullptr,
+                error_type_t::RuntimeError,
+                "PSLP presolver initialization failed");
   PresolveStatus status = run_presolver(ctx.presolver);
+  if (status == PresolveStatus::kInfeasible ||
+      status == PresolveStatus::kUnbndOrInfeas) {
+    return std::nullopt;
+  }
🤖 Prompt for AI Agents
In `@cpp/src/mip/presolve/third_party_presolve.cpp` around lines 180 - 283,
Replace the assert and ignored presolve result in build_and_run_pslp_presolver
with explicit runtime checks: change the function to return
std::optional<PSLPContext>, after calling new_presolver(...) check if
ctx.presolver is nullptr and return std::nullopt (cleaning up settings if
needed); then call run_presolver(ctx.presolver) and if the returned
PresolveStatus is not success (explicitly handle kInfeasible, kUnbndOrInfeas and
any non-success), free/delete ctx.presolver and return std::nullopt; only return
a populated PSLPContext when new_presolver succeeded and run_presolver returned
success, ensuring callers (e.g., code reading ctx.presolver->reduced_prob) won't
dereference an invalid presolver.
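The shape of that fix can be sketched with stand-in types. Note the `Presolver`, `run_presolver`, and `PresolveStatus` names below are simplified stand-ins for the real PSLP C API, not its actual signatures:

```cpp
#include <cassert>
#include <optional>

// Simplified stand-ins for the PSLP C API (assumed names, not the real interface).
enum class PresolveStatus { kSuccess, kInfeasible, kUnbndOrInfeas };

struct Presolver {
  PresolveStatus result;
};

PresolveStatus run_presolver(Presolver* p) { return p->result; }

struct PSLPContext {
  Presolver* presolver = nullptr;
};

// Sketch: return std::nullopt instead of asserting, so a release build
// (where assert compiles out) can never dereference a failed presolver.
std::optional<PSLPContext> build_and_run(Presolver* raw)
{
  if (raw == nullptr) { return std::nullopt; }  // initialization failure
  PSLPContext ctx;
  ctx.presolver = raw;
  PresolveStatus status = run_presolver(ctx.presolver);
  if (status != PresolveStatus::kSuccess) { return std::nullopt; }  // propagate non-success
  return ctx;
}
```

Callers then test `has_value()` before touching `ctx.presolver`, mirroring how the papilo path already returns `std::nullopt` on failure.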

Comment on lines 528 to 539
template <typename i_t, typename f_t>
std::optional<third_party_presolve_result_t<i_t, f_t>> third_party_presolve_t<i_t, f_t>::apply_pslp(
optimization_problem_t<i_t, f_t> const& op_problem)
{
auto ctx = build_and_run_pslp_presolver(op_problem, maximize_);
pslp_presolver_ = ctx.presolver;
pslp_stgs_ = ctx.settings;

auto opt_problem = build_optimization_problem_from_pslp<i_t, f_t>(
pslp_presolver_, op_problem.get_handle_ptr(), maximize_);

return std::make_optional(third_party_presolve_result_t<i_t, f_t>{opt_problem, {}});

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

(Collapsed: verification scripts inspecting `third_party_presolve.cpp` and `third_party_presolve.hpp` — the `pslp_presolver_`/`pslp_stgs_` members, the destructor, and how `apply_pslp` is invoked from `apply()`.)


Free prior PSLP allocations before reassigning on repeated apply() calls.

apply_pslp() unconditionally overwrites pslp_presolver_ and pslp_stgs_ member pointers without freeing previous allocations. Since third_party_presolve_t is reused across multiple apply() calls, repeated calls with presolver==PSLP leak the prior PSLP resources. Free existing allocations before reassignment.

Fix: Free prior allocations
+  if (pslp_presolver_ != nullptr) {
+    free_presolver(pslp_presolver_);
+  }
+  if (pslp_stgs_ != nullptr) {
+    free_settings(pslp_stgs_);
+  }
   auto ctx        = build_and_run_pslp_presolver(op_problem, maximize_);
   pslp_presolver_ = ctx.presolver;
   pslp_stgs_      = ctx.settings;
🤖 Prompt for AI Agents
In `@cpp/src/mip/presolve/third_party_presolve.cpp` around lines 528 - 539, Before
overwriting pslp_presolver_ and pslp_stgs_ in
third_party_presolve_t<i_t,f_t>::apply_pslp, check if they are non-null and
release their prior allocations (call the appropriate PSLP destructor/free
function or delete depending on how PSLP objects are managed), then set them to
the new ctx.presolver and ctx.settings; specifically, free the existing
pslp_presolver_ and pslp_stgs_ members before assigning new values to avoid
leaking prior PSLP resources when apply_pslp is called repeatedly.
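An alternative to freeing manually before each reassignment is an owning smart pointer with a custom deleter, so reassignment frees the previous allocation automatically. This is a sketch with toy types; `free_presolver`/`free_settings` here are assumed stand-ins for whatever release functions the PSLP C API actually provides:

```cpp
#include <memory>

// Toy stand-ins for the raw PSLP handles (assumed, not the real types).
struct Presolver { int dummy; };
struct Settings  { int dummy; };

static int presolvers_freed = 0;
static int settings_freed   = 0;

// Assumed C-style release functions; counters let us observe the frees.
void free_presolver(Presolver* p) { delete p; ++presolvers_freed; }
void free_settings(Settings* s)   { delete s; ++settings_freed; }

// Owning aliases: move-assigning a new handle frees the old one, which
// removes the leak on repeated apply() calls.
using presolver_ptr = std::unique_ptr<Presolver, decltype(&free_presolver)>;
using settings_ptr  = std::unique_ptr<Settings, decltype(&free_settings)>;

presolver_ptr make_presolver() { return {new Presolver{}, &free_presolver}; }
settings_ptr  make_settings()  { return {new Settings{}, &free_settings}; }
```

With members declared as `presolver_ptr`/`settings_ptr`, the explicit frees in the destructor and in `apply_pslp` become unnecessary.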

Comment on lines 58 to 76
~third_party_presolve_t();

private:
std::optional<third_party_presolve_result_t<i_t, f_t>> apply_pslp(
optimization_problem_t<i_t, f_t> const& op_problem);

void undo_pslp(rmm::device_uvector<f_t>& primal_solution,
rmm::device_uvector<f_t>& dual_solution,
rmm::device_uvector<f_t>& reduced_costs,
rmm::cuda_stream_view stream_view);

bool maximize_ = false;
cuopt::linear_programming::presolver_t presolver_ = cuopt::linear_programming::presolver_t::PSLP;
// PSLP settings
Settings* pslp_stgs_{nullptr};
Presolver* pslp_presolver_{nullptr};

// Papilo postsolve storage
papilo::PostsolveStorage<double> papilo_post_solve_storage_;

⚠️ Potential issue | 🟠 Major

Prevent double-free: delete copy/move for presolver-owning class.

pslp_stgs_ and pslp_presolver_ are raw-owned and freed in the destructor, but the class is still copyable. Any accidental copy will double-free. Please delete copy/move or wrap these in RAII types.

🛠️ Proposed fix (delete copy/move)
 class third_party_presolve_t {
  public:
   third_party_presolve_t() = default;
+  third_party_presolve_t(const third_party_presolve_t&) = delete;
+  third_party_presolve_t& operator=(const third_party_presolve_t&) = delete;
+  third_party_presolve_t(third_party_presolve_t&&) = delete;
+  third_party_presolve_t& operator=(third_party_presolve_t&&) = delete;
🤖 Prompt for AI Agents
In `@cpp/src/mip/presolve/third_party_presolve.hpp` around lines 58 - 76, The
class currently owning raw pointers pslp_stgs_ and pslp_presolver_ can be
accidentally copied and cause double-free; update the third_party_presolve_t
class to be non-copyable/non-movable by explicitly deleting the copy
constructor, copy assignment operator, move constructor, and move assignment
operator (i.e., add declarations like third_party_presolve_t(const
third_party_presolve_t&) = delete; and similar for operator= and move variants)
so instances that own pslp_stgs_ and pslp_presolver_ cannot be copied or moved;
alternatively, replace those raw pointers with RAII types (unique_ptr) if you
prefer move-only semantics, but the quick fix is to delete copy/move for
third_party_presolve_t.
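A minimal, self-contained version of the proposed fix — a class owning raw C handles with its copy and move operations deleted, so accidental copies cannot double-free. The class and member names mirror the review suggestion but are a sketch, not the real cuOpt class:

```cpp
#include <type_traits>

// Incomplete stand-ins for the raw PSLP handles.
struct Presolver;
struct Settings;

class third_party_presolve_sketch {
 public:
  third_party_presolve_sketch() = default;
  // Deleting copy and move prevents two instances from ever owning the
  // same raw pointers, which is what made double-free possible.
  third_party_presolve_sketch(const third_party_presolve_sketch&)            = delete;
  third_party_presolve_sketch& operator=(const third_party_presolve_sketch&) = delete;
  third_party_presolve_sketch(third_party_presolve_sketch&&)                 = delete;
  third_party_presolve_sketch& operator=(third_party_presolve_sketch&&)      = delete;

 private:
  Settings* pslp_stgs_{nullptr};
  Presolver* pslp_presolver_{nullptr};
};

// Compile-time checks that copies and moves are really gone.
static_assert(!std::is_copy_constructible_v<third_party_presolve_sketch>);
static_assert(!std::is_move_constructible_v<third_party_presolve_sketch>);
```

If move-only semantics are preferred instead, replacing the raw pointers with `std::unique_ptr` plus custom deleters gives the same safety without deleting the move operations.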

Comment on lines +150 to +153
mip_solver_settings_t<i_t, f_t> settings(settings_const);
if (settings.presolver == presolver_t::Default || settings.presolver == presolver_t::PSLP) {
settings.presolver = presolver_t::Papilo;
}

⚠️ Potential issue | 🟡 Minor

Avoid silently overriding PSLP to Papilo.

If PSLP isn’t supported for MIP yet, consider logging a warning (or returning an error) instead of silently replacing the user’s choice. That makes configuration intent explicit and avoids surprises.

💡 Suggested warning + explicit fallback
-    if (settings.presolver == presolver_t::Default || settings.presolver == presolver_t::PSLP) {
-      settings.presolver = presolver_t::Papilo;
-    }
+    if (settings.presolver == presolver_t::Default) {
+      settings.presolver = presolver_t::Papilo;
+    } else if (settings.presolver == presolver_t::PSLP) {
+      CUOPT_LOG_WARN("PSLP presolver is not supported for MIP; falling back to Papilo.");
+      settings.presolver = presolver_t::Papilo;
+    }
🤖 Prompt for AI Agents
In `@cpp/src/mip/solve.cu` around lines 150 - 153, The code in the
mip_solver_settings_t constructor block currently silently replaces
settings.presolver when it's presolver_t::Default or presolver_t::PSLP with
presolver_t::Papilo; update this to either emit a clear warning or return an
error instead of silently overriding: detect when settings.presolver ==
presolver_t::PSLP (and optionally when == presolver_t::Default), log a warning
via the existing logging mechanism (or propagate an error/exception) that "PSLP
not supported for MIP, falling back to Papilo" (or require user change), and
only perform the assignment to presolver_t::Papilo after logging/handling so the
change is explicit; refer to mip_solver_settings_t, settings.presolver and
presolver_t::PSLP / presolver_t::Papilo when locating and modifying the code.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/tests/linear_programming/pdlp_test.cu (1)

1615-1615: Inconsistent API usage: presolve boolean instead of presolver enum.

This line still uses the old settings.presolve = false pattern while the rest of the codebase has migrated to settings.presolver = presolver_t::None. This appears to be a missed migration.

🛠️ Proposed fix
-  solver_settings.presolve         = false;
+  solver_settings.presolver        = presolver_t::None;
🤖 Fix all issues with AI agents
In `@cpp/src/mip/presolve/third_party_presolve.cpp`:
- Around line 682-691: undo_pslp() is resizing and copying solution outputs
using the reduced problem dimensions (pslp_presolver_->reduced_prob->m/n) which
truncates postsolved outputs; instead, capture the original problem dimensions
in apply_pslp() (e.g., store original m and n on the pslp_presolver_ object or a
dedicated members like orig_m/orig_n when you create reduced_prob) and then use
those stored original dimensions in undo_pslp() when resizing primal_solution,
dual_solution, reduced_costs and when copying from pslp_presolver_->sol->x/y/z
so the postsolve-expanded solutions fit the original problem size.

In `@cpp/tests/CMakeLists.txt`:
- Line 45: Remove the hard-coded absolute include path
"/home/nfs/rgandham/git-repos/PSLP/include/PSLP" from the
target_include_directories call in cpp/tests/CMakeLists.txt; the test target
already links to cuopt (which has PSLP as a PRIVATE dependency via
FetchContent), so simply delete that literal entry from
target_include_directories to rely on transitive includes from the cuopt target.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/src/linear_programming/solve.cu (1)

1146-1162: Presolve time log is now misleading for PSLP.
The message still says “Papilo presolve time” even when PSLP is selected, which makes logs confusing.

🔧 Proposed change
-      CUOPT_LOG_INFO("Papilo presolve time: %f", presolve_time);
+      CUOPT_LOG_INFO("Presolve time: %f", presolve_time);
🧹 Nitpick comments (1)
cpp/tests/linear_programming/pdlp_test.cu (1)

214-237: Add PSLP (and optionally Default) coverage in the presolver loop.

This test is the most visible place to exercise the new presolver options; currently only Papilo/None are covered. Adding PSLP here would validate the new integration path.

🔧 Suggested test expansion
-      for (auto [presolver, epsilon] :
-           {std::pair{presolver_t::Papilo, 1e-1}, std::pair{presolver_t::None, 1e-6}}) {
+      for (auto [presolver, epsilon] :
+           {std::pair{presolver_t::Papilo, 1e-1},
+            std::pair{presolver_t::PSLP, 1e-1},
+            std::pair{presolver_t::None, 1e-6}}) {

@nguidotti
Contributor

I have a question: will the PSLP presolver be applicable only to LPs, or will it also be used for MIPs?

if (!largest_is_sum_zero && color_sums.count(0.0) > 0) {
// sum=0 exists but is NOT the largest - need to create a new color for it
// These are vertices in the color that don't touch the refining color
std::unordered_set<i_t> refined_set(vertices_to_refine_by_color[color].begin(),
Contributor

We don't need to create a refined_set to know if a vertex was not touched by the refining color.

std::unordered_set<i_t> refined_set(vertices_to_refine_by_color[color].begin(),
vertices_to_refine_by_color[color].end());
for (i_t v : colors[color].vertices) {
if (refined_set.find(v) == refined_set.end()) { sum_zero_vertices.push_back(v); }
Contributor

I think this condition should be:

if (vertex_to_sum[v] == 0.0) { sum_zero_vertices.push_back(v); } 

Contributor

Actually, we don't need to have a separate sum_zero_vertices array at all. We should just append these to color_sums

if (vertex_to_sum[v] == 0.0) { color_sums[0.0].push_back(v); } 
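The suggestion above — bucketing sum-zero vertices directly under the `0.0` key instead of keeping a separate `sum_zero_vertices` vector — can be sketched with simplified types (`group_by_sum`, `vertex_t`, and the flat `vertex_to_sum` vector are illustrative stand-ins, not the real refinement code):

```cpp
#include <map>
#include <vector>

using vertex_t = int;

// Group a color's vertices by their neighbor-sum. Vertices whose sum is
// exactly 0.0 land in color_sums[0.0] alongside every other bucket, so
// no separate sum-zero container is needed.
std::map<double, std::vector<vertex_t>> group_by_sum(
    const std::vector<vertex_t>& vertices,
    const std::vector<double>& vertex_to_sum)
{
  std::map<double, std::vector<vertex_t>> color_sums;
  for (vertex_t v : vertices) {
    color_sums[vertex_to_sum[v]].push_back(v);
  }
  return color_sums;
}
```

All later steps (finding the largest bucket, pushing the smaller parts onto the stack) then treat the sum-zero bucket uniformly with the rest.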

// These are vertices in the color that don't touch the refining color
std::unordered_set<i_t> refined_set(vertices_to_refine_by_color[color].begin(),
vertices_to_refine_by_color[color].end());
for (i_t v : colors[color].vertices) {
Contributor

We need to check this. This violates the property that we only touch vertices that are touched by the refining color. That property is required by the asymptotic analysis.

Contributor

This is also quite expensive.

Contributor

Maybe this is not as bad. I guess we come close to touching all vertices when we erase the vertices in the color we are splitting.


// Smaller parts get new colors and are added to the stack
// Note: for sum=0, 'vertices' is empty, so use sum_zero_vertices instead
const std::vector<i_t>& verts_to_use = is_sum_zero ? sum_zero_vertices : vertices;
Contributor

Remove this line. It is not needed if we add sum==0 vertices directly to color_sums

// Note: for sum=0, 'vertices' is empty, so use sum_zero_vertices instead
const std::vector<i_t>& verts_to_use = is_sum_zero ? sum_zero_vertices : vertices;

if (!verts_to_use.empty()) {
Contributor

I don't think vertices can ever be empty. So this check should not be needed.

const std::vector<i_t>& verts_to_use = is_sum_zero ? sum_zero_vertices : vertices;

if (!verts_to_use.empty()) {
colors.emplace_back(side_being_split, kActive, total_colors_seen, verts_to_use);
Contributor

verts_to_use -> vertices

color_in_stack[total_colors_seen] = 1;

for (i_t v : vertices) {
for (i_t v : verts_to_use) {
Contributor

Remove this change

}
// Also remove sum=0 vertices if they were moved (sum=0 wasn't the largest)
if (!largest_is_sum_zero) {
for (i_t v : sum_zero_vertices) {
Contributor

Replace with for (i_t v: color_sums[0.0])


// Collect sum=0 vertices: those in colors[color].vertices but NOT in vertices_to_refine_by_color
// (vertices that have no neighbors in the refining color)
std::vector<i_t> sum_zero_vertices;
Contributor

The sum_zero_vertices vector is not needed.

i_t size = vertices.size();
bool is_sum_zero = (sum == 0.0);

if (is_sum_zero) {
Contributor

I think we can delete this if (is_sum_zero) check that corrects the size, because we are now computing all the vertices in the sum=0 set.

Contributor

Or at least we only need to do it if the largest is sum zero.

problem = detail::problem_t<i_t, f_t>(result->reduced_problem);
presolve_time = lp_timer.elapsed_time();
CUOPT_LOG_INFO("Papilo presolve time: %f", presolve_time);
CUOPT_LOG_INFO("External presolve time: %f", presolve_time);
Contributor

I would call out the name of the presolver we are using in the logs.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/src/linear_programming/solve.cu (1)

1100-1141: ⚠️ Potential issue | 🟠 Major

Default presolver is applied twice; size-based logic is now unreachable.
settings.presolver is set to PSLP at Line 1101, so the later size check (Line 1133 onward) never runs. This changes behavior for smaller LPs and makes the threshold logic dead code. Please pick one policy and remove the other.

Proposed fix (keep size-based default)
-    // For LP, if presolver is default, use PSLP
-    if (settings.presolver == presolver_t::Default) { settings.presolver = presolver_t::PSLP; }
+    // Defer default presolver selection to size-based logic below

@rg20
Contributor Author

rg20 commented Feb 5, 2026

/ok to test 177afe3

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@cpp/src/mip/presolve/third_party_presolve.cpp`:
- Around line 724-732: The PSLP presolver path isn't handled: modify
uncrush_primal_solution to branch on presolver_ (check presolver_ == PSLP) and
route to the correct postsolve storage and routine instead of always using
papilo_post_solve_storage_; specifically, when presolver_ == PSLP call the PSLP
postsolve API (use the PSLP equivalent storage, e.g. pslp_post_solve_storage_ or
the project’s PSLP postsolve function) and pass its storage into the postsolve
undo call (or call the PSLP undo wrapper) so you don't use
papilo_post_solve_storage_ for PSLP; also add a guard to ensure the chosen
storage is initialized and return/propagate an error if it is not.
🧹 Nitpick comments (2)
cpp/include/cuopt/linear_programming/utilities/internals.hpp (1)

115-131: Clarify the documentation note about default presolver behavior.

The note "@note Default presolver is None" may be misleading since:

  1. The Default enum value (-1) is distinct from None (0)
  2. The actual default behavior varies by problem type (LP uses PSLP, MIP uses Papilo based on the code in solve_mip)

Consider updating the documentation to clarify that Default means "use the solver's default choice" which depends on the problem type.

📝 Suggested documentation improvement
 /**
  * @brief Enum representing the different presolvers that can be used to solve the
  * linear programming problem.
  *
  * Default: Use the default presolver.
  * None: No presolver.
  * Papilo: Use the Papilo presolver.
  * PSLP: Use the PSLP presolver.
  *
- * @note Default presolver is None.
+ * @note When Default is selected, the solver chooses the appropriate presolver
+ * based on problem type (PSLP for LP, Papilo for MIP).
  */
cpp/src/mip/presolve/third_party_presolve.cpp (1)

210-255: Consider extracting duplicated host data copying into a shared helper.

The host data copying logic (lines 210-234) and constraint bounds inference (lines 241-255) are nearly identical to lines 51-96 in build_papilo_problem. These ~45 lines of duplication increase the maintenance burden.

💡 Suggested refactor

Extract to a helper struct that holds the copied host data:

template <typename i_t, typename f_t>
struct HostProblemData {
  std::vector<f_t> coefficients, obj_coeffs, var_lb, var_ub, bounds, constr_lb, constr_ub;
  std::vector<i_t> offsets, variables;
  std::vector<char> row_types;
  std::vector<var_t> var_types;
  
  static HostProblemData from_optimization_problem(
    const optimization_problem_t<i_t, f_t>& op_problem, bool maximize);
};

Both build_papilo_problem and build_and_run_pslp_presolver can then use this helper.

Based on learnings: "Refactor code duplication in solver components (3+ occurrences) into shared utilities."

Comment on lines +724 to +732
   // FIXME:: handle PSLP presolver
   papilo::Solution<f_t> reduced_sol(reduced_primal);
   papilo::Solution<f_t> full_sol;
   papilo::Message Msg{};
   Msg.setVerbosityLevel(papilo::VerbosityLevel::kQuiet);
-  papilo::Postsolve<f_t> post_solver{Msg, post_solve_storage_.getNum()};
+  papilo::Postsolve<f_t> post_solver{Msg, papilo_post_solve_storage_.getNum()};
 
   bool is_optimal = false;
-  auto status = post_solver.undo(reduced_sol, full_sol, post_solve_storage_, is_optimal);
+  auto status = post_solver.undo(reduced_sol, full_sol, papilo_post_solve_storage_, is_optimal);


⚠️ Potential issue | 🟡 Minor

FIXME: uncrush_primal_solution does not handle PSLP presolver path.

When presolver_ == PSLP, this function still uses papilo_post_solve_storage_ which may be uninitialized or contain stale data, leading to incorrect results or undefined behavior.

Would you like me to generate an implementation that routes to PSLP postsolve when appropriate, or open an issue to track this?


@rg20
Contributor Author

rg20 commented Feb 5, 2026

/ok to test 12cfdaf


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/src/dual_simplex/folding.cpp (1)

248-263: ⚠️ Potential issue | 🟠 Major

Unsigned underflow bug: negative check will never trigger.

The subtraction of two size_t values produces an unsigned result. If vertices_to_refine_by_color[color].size() > colors[color].vertices.size(), the result wraps to a large positive value rather than becoming negative. The subsequent check if (remaining_size < 0) will never be true.

🐛 Proposed fix to detect the error condition correctly
-  if (vertices_to_refine_by_color[color].size() != colors[color].vertices.size()) {
-    const i_t remaining_size =
-      colors[color].vertices.size() - vertices_to_refine_by_color[color].size();
-    if (remaining_size < 0) {
-      printf("Negative remaining size %d\n", remaining_size);
+  if (vertices_to_refine_by_color[color].size() != colors[color].vertices.size()) {
+    if (vertices_to_refine_by_color[color].size() > colors[color].vertices.size()) {
+      printf("Refinement set larger than color class for color %d\n", color);
       // ... error handling ...
       exit(1);
     }
+    const i_t remaining_size =
+      static_cast<i_t>(colors[color].vertices.size() - vertices_to_refine_by_color[color].size());
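The wrap-around the comment describes is easy to demonstrate in isolation; this minimal sketch (names mine, not from folding.cpp) shows why a `< 0` check on an unsigned difference can never fire, and the compare-before-subtract pattern that replaces it:

```cpp
#include <cstddef>

// a - b wraps modulo 2^N when b > a, so the result is never negative and a
// "< 0" check on it is a tautology that always evaluates to false.
bool unsigned_diff_is_negative(std::size_t a, std::size_t b)
{
  std::size_t diff = a - b;  // wraps to a huge positive value when b > a
  return diff < 0;           // always false: diff has an unsigned type
}

// Safe pattern: compare the operands first, then subtract only when valid.
bool subtraction_would_underflow(std::size_t a, std::size_t b)
{
  return b > a;
}
```

Compilers typically flag the tautological comparison with `-Wextra`, which is another way to catch this class of bug early.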
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/folding.cpp`:
- Line 1532: Move the FILE* fid declaration into the conditional compilation
scope where it is used: remove the unconditional "FILE* fid;" and declare "FILE*
fid" inside the matching "#ifdef DEBUG ... `#endif`" block(s) in folding.cpp so
that fid only exists when DEBUG is defined (this avoids unused-variable
warnings); ensure all references to fid remain inside those DEBUG blocks or are
guarded accordingly.
- Around line 227-228: The maps color_sums and sum_to_sizes use f_t (floating)
as keys which is unstable; replace that with a stable key by quantizing or
tolerance-grouping sums: add a helper like quantize_sum(f_t val, f_t eps) that
converts to an integer bucket (e.g., round(val/eps) or multiply by scale) and
change map types from std::map<f_t,...> to std::map<std::int64_t,...> (or
introduce a SumKey alias) and call quantize_sum everywhere you currently use f_t
as a key (places that insert/look up in color_sums and sum_to_sizes in
folding.cpp and any functions using those maps); alternatively implement a
custom comparator for a wrapper key that treats keys equal within eps, and
update all insert/lookup code to use the new key conversion so near-equal
floating sums are grouped consistently.
🧹 Nitpick comments (3)
cpp/src/dual_simplex/folding.cpp (3)

1444-1455: Variable shadowing reduces readability.

At line 1451, ref_sum is declared again, shadowing the ref_sum from the structured binding at line 1444. While the logic appears correct (looking up the reference row's sum for column colors that exist in vertex u but may not exist in ref_row), the shadowing makes the code harder to follow.

♻️ Suggested rename for clarity
       for (auto& [cc, u_sum] : row_col_color_sum[u]) {
-        f_t ref_sum = row_col_color_sum[ref_row][cc];
+        f_t ref_row_sum = row_col_color_sum[ref_row][cc];
-        f_t diff    = std::abs(ref_sum - u_sum);
+        f_t diff        = std::abs(ref_row_sum - u_sum);
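A stripped-down reproduction of the pattern (names illustrative, not from folding.cpp) shows what the shadowing looks like; compiling with `-Wshadow` flags the inner declaration:

```cpp
#include <cmath>

// The inner ref_sum hides the outer one, analogous to the structured-binding
// case flagged above; the outer value is untouched by the inner scope.
double shadowed_diff()
{
  double ref_sum = 10.0;     // outer binding
  double result  = 0.0;
  {
    double ref_sum = 7.0;    // shadows the outer ref_sum; -Wshadow warns here
    result = std::abs(ref_sum - 7.0);
  }
  return result + ref_sum;   // uses the outer ref_sum (10.0)
}
```

Renaming the inner variable, as the suggestion does, removes the ambiguity without changing behavior.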

1217-1219: Remove commented-out debug macros.

Multiple instances of commented-out #define DEBUG and #undef DEBUG are scattered throughout the file (lines 1217-1219, 1368-1369, 1389, 1519, 1533, 1628-1629). These development artifacts clutter the code and could cause confusion.

Consider removing these or consolidating debug control into a single build-time configuration.


211-217: Replace exit(1) with proper error handling.

Using exit(1) terminates the entire process without cleanup, preventing callers from handling errors gracefully. This pattern appears in multiple places (lines 216, 262, 287). Consider returning an error status or using the existing coloring_status_t::COLORING_FAILED pattern used elsewhere in the code.

♻️ Suggested pattern for one instance
     if (max_sum_by_color[color] < min_sum_by_color[color]) {
-      printf("Color %d has max sum %e < min sum %e\n",
-             color,
-             max_sum_by_color[color],
-             min_sum_by_color[color]);
-      exit(1);
+      // Log error and signal failure to caller
+      settings.log.printf("Folding: Color %d has max sum %e < min sum %e\n",
+                          color,
+                          max_sum_by_color[color],
+                          min_sum_by_color[color]);
+      // Return early or set error flag for caller to handle
     }
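The status-return pattern the suggestion points at could be factored like this; `coloring_status_t` is named in the review, but the function shape and enumerators here are assumptions for illustration:

```cpp
#include <cstdio>

// Assumed enumerators; the review only names coloring_status_t::COLORING_FAILED.
enum class coloring_status_t { COLORING_OK, COLORING_FAILED };

// Validate the per-color sum bounds and report failure to the caller instead
// of terminating the whole process with exit(1).
coloring_status_t check_color_sums(int color, double max_sum, double min_sum)
{
  if (max_sum < min_sum) {
    std::fprintf(stderr,
                 "Folding: Color %d has max sum %e < min sum %e\n",
                 color, max_sum, min_sum);
    return coloring_status_t::COLORING_FAILED;
  }
  return coloring_status_t::COLORING_OK;
}
```

Callers can then decide whether to fall back, retry, or propagate the failure, which `exit(1)` makes impossible.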

Comment on lines +227 to +228
std::map<f_t, std::vector<i_t>>& color_sums,
std::map<f_t, i_t>& sum_to_sizes,


⚠️ Potential issue | 🟡 Minor

Using floating-point type as map key may cause numerical instability.

Using f_t (floating-point) as the key type in std::map<f_t, ...> is problematic because floating-point equality comparisons are unreliable due to precision issues. Two sums that should be mathematically equal might differ by a small epsilon and be treated as different keys, potentially causing incorrect color partitioning.

Consider using a tolerance-based grouping approach or quantizing the floating-point values before using them as keys.

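The quantization helper suggested above could be sketched as follows; the eps default and bucket scheme are illustrative choices, not values from the cuOpt code:

```cpp
#include <cstdint>
#include <cmath>

using SumKey = std::int64_t;

// Round val to the nearest eps-wide bucket so sums differing by less than
// about eps/2 collapse to the same integer key, which is stable as a std::map
// key where raw floating-point values are not.
template <typename f_t>
SumKey quantize_sum(f_t val, f_t eps)
{
  return static_cast<SumKey>(
    std::llround(static_cast<double>(val) / static_cast<double>(eps)));
}
```

One caveat worth noting: two sums straddling a bucket boundary can still land in different buckets, so eps should be chosen comfortably larger than the expected accumulation error; the comparator-with-tolerance alternative avoids this at the cost of a non-strict ordering.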


settings.log.debug("Folding: partial A_tilde nz %d predicted %d\n", nnz, A_nnz + 2 * nz_ub);

FILE* fid;


⚠️ Potential issue | 🟡 Minor

Move FILE* fid declaration inside #ifdef DEBUG block.

The variable fid is declared unconditionally but only used within #ifdef DEBUG blocks. This will cause an "unused variable" compiler warning when DEBUG is not defined.

🐛 Proposed fix
-  FILE* fid;
-// #define DEBUG  // Uncomment to enable expensive XA=AY verification
 #ifdef DEBUG
+  FILE* fid;
 #ifdef BUILD_MATRIX_X_AND_Y

@rg20
Contributor Author

rg20 commented Feb 5, 2026

/ok to test ad32471


Labels

feature request New feature or request non-breaking Introduces a non-breaking change


3 participants