
Conversation

@chris-maes
Contributor

@chris-maes chris-maes commented Jan 29, 2026

This PR adds cuts to the MIP solver. This includes the following:

  1. Add constraints in the form C*x <= d to an LP that has already been solved to optimality (and has basis information).
    • The constraints must be violated at the current relaxation solution x^star. That is, C*x^star > d.
    • The constraint matrix, rhs, basis, and basis factorization, are all updated to include the additional constraints.
    • Dual simplex is started in phase 2 from a dual feasible solution.
  2. Remove constraints from an LP that has already been solved to optimality.
    • The constraints must have slacks in the basis.
    • The basis is refactored from scratch.
  3. Add cut pass loop after solving the root relaxation
  4. Add a cut pool to store cuts and select cuts
    • We currently score cuts based on distance and orthogonality.
  5. Add Mixed Integer Gomory Cuts
    • These are computed via a MIR cut on a row of the simplex tableau
  6. Add Mixed Integer Rounding (MIR) Cuts
    • These are constructed by aggregating rows of the constraint matrix.
  7. Add Strong Chvatal-Gomory Cuts
    • These are constructed from a row of the tableau matrix and from rows of the constraint matrix.
  8. Fixes to Handling of Steepest Edge Norms in Dual Simplex
    • Ensure that all basic variables have a positive steepest edge norm.
  9. Reduced Cost Fixing at Root Node
    • These are applied after each cut pass and after strong branching, if a heuristic solution is available.
  10. Fix issues in Crossover when solving the dual problem
    • We were not correctly populating slack variables when solving the dual. This issue appeared on graph20-80-1rand.
  11. Fix issue in Crossover when the basis became rank-deficient in the dual push
  12. Fix issues across the code with handling and propagating concurrent halt.
  13. New solver options: mip-cut-passes, mip-mixed-integer-gomory-cuts, mip-mir-cuts, mip-strong-chvatal-gomory-cuts, mip-knapsack-cuts, mip-cut-change-threshold, mip-cut-min-orthogonality.
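As a rough illustration of the rounding step behind items 5 and 6, here is the textbook single-row MIR cut for a pure-integer row. This is a sketch, not the PR's implementation (which aggregates constraint rows and works on simplex tableau rows); the function name and signature are illustrative.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Textbook single-row MIR cut (illustrative sketch, not the PR's code).
// For  sum_j a_j x_j <= b  with x_j >= 0 integer and f0 = b - floor(b) > 0,
// the MIR inequality is
//   sum_j (floor(a_j) + max(f_j - f0, 0) / (1 - f0)) x_j <= floor(b),
// where f_j = a_j - floor(a_j).
std::vector<double> mir_coefficients(const std::vector<double>& a, double b, double& cut_rhs)
{
  const double f0 = b - std::floor(b);
  assert(f0 > 0.0);  // MIR requires a fractional right-hand side
  std::vector<double> cut(a.size());
  for (std::size_t j = 0; j < a.size(); ++j) {
    const double fj = a[j] - std::floor(a[j]);
    cut[j] = std::floor(a[j]) + std::max(fj - f0, 0.0) / (1.0 - f0);
  }
  cut_rhs = std::floor(b);
  return cut;
}
```

For example, the row 2.5 x1 + x2 <= 3.7 yields the cut 2 x1 + x2 <= 3, which keeps every feasible integer point (e.g. (0, 3) and (1, 1)) but cuts off the fractional vertex (0, 3.7) of the relaxation.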

Closes #698, #205

Results from a GH200 A/B test with 64 threads and a 300-second time limit. Further runs with a larger time limit are needed, and further work is needed to realize the full benefit of cuts.

A: cuts PR with --mip-cut-passes=10
B: cuts PR with --mip-cut-passes=0

Geomean MIP gap, A / B (B = baseline): 0.97
Geomean time to optimal, A / B: 0.96
A optimal 45
B optimal 37
A problems with feasible solutions 225
B problems with feasible solutions 224

A wins (time in seconds, A then B)
drayage-100-23 1.14 8.61
gfd-schedulen180f7d50m30k18 87.85 300.0
neos-827175 12.28 25.86
neos-1171448 28.85 98.2
n5-3 180.62 300.0
neos859080 0.78 1.38
seymour1 38.78 300.0
neos-860300 71.98 90.91
neos-3083819-nubu 30.48 300.0
neos-933966 248.26 300.0
neos-957323 46.36 300.0
neos-960392 50.12 115.68
netdiversion 121.59 300.0
ns1208400 162.15 300.0
nw04 39.89 45.98
piperout-27 8.22 14.09
supportcase7 30.48 300.0
supportcase6 70.22 300.0
uccase12 26.95 41.87
A wins 19

A losses (time in seconds, A then B)
app1-1 2.3 0.74
cbs-cta 2.21 1.83
irp 17.59 12.94
istanbul-no-cutoff 18.26 13.56
mas76 300.0 20.98
neos-1122047 8.3 6.46
neos-1445765 1.86 1.52
neos-1582420 134.3 93.44
neos-3004026-krka 6.56 1.31
neos8 0.88 0.68
ns1952667 99.84 64.92
piperout-08 10.75 8.23
pk1 224.32 91.84
qap10 134.76 62.82
swath1 60.04 50.3
trento1 300.0 172.28
triptim1 246.65 130.92
A losses 17

@copy-pr-bot

copy-pr-bot bot commented Jan 29, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@coderabbitai

coderabbitai bot commented Jan 29, 2026

📝 Walkthrough


Adds a full cuts subsystem and integrates cut generation/management into MIP branch-and-bound; expands MIP/simplex solver settings and parameter bindings; introduces CONCURRENT_HALT_RETURN and propagates concurrent-halt handling; extends basis-update, CSR/dense/sparse utilities, timer integration, build/test entries, and many public signatures.

Changes

Cohort / File(s) Summary
Cut subsystem
cpp/src/dual_simplex/cuts.hpp, cpp/src/dual_simplex/cuts.cpp
New templated cut framework: cut types, cut_info/pool, MIR/Gomory/knapsack/strong-CG generators, scoring/aging, add/remove/verify helpers, and explicit instantiations. Review algorithm correctness, workspace sizing, and template exports.
B&B integration & API surface
cpp/src/dual_simplex/branch_and_bound.hpp, cpp/src/dual_simplex/branch_and_bound.cpp, cpp/src/dual_simplex/solve.cpp, cpp/src/mip/solver.cu, cpp/src/mip/solve.cu, cpp/src/mip/diversity/...
Branch-and-bound now accepts a start/timer parameter and threads cut/basis/edge-norm state through many public/private signatures (root/node solves, reports). Validate call-site updates and ABI changes.
Build & tests
cpp/src/dual_simplex/CMakeLists.txt, cpp/tests/mip/CMakeLists.txt, cpp/tests/mip/cuts_test.cu
Added cuts.cpp to build and new CUDA unit tests for cuts. Confirm CMake/test integration and CI expectations.
Basis updates & factorization
cpp/src/dual_simplex/basis_updates.hpp, cpp/src/dual_simplex/basis_updates.cpp, cpp/src/dual_simplex/basis_solves.cpp, cpp/src/dual_simplex/crossover.cpp, cpp/src/dual_simplex/primal.cpp, cpp/src/dual_simplex/phase2.cpp
New basis_update_mpf_t::append_cuts, factorize/repair refactor to return status and propagate CONCURRENT_HALT_RETURN. Inspect repair retry logic and numeric/workspace handling.
Solver settings & constants
cpp/include/cuopt/linear_programming/constants.h, cpp/include/cuopt/linear_programming/mip/solver_settings.hpp, cpp/src/dual_simplex/simplex_solver_settings.hpp, cpp/src/math_optimization/solver_settings.cu, cpp/src/mip/solver.cu
Added many MIP/cut settings and macros (max_cut_passes, mir_cuts, mixed_integer_gomory_cuts, knapsack_cuts, strong_chvatal_gomory_cuts, reduced_cost_strengthening, cut_change_threshold, cut_min_orthogonality, node_limit, num_gpus, sub_mip, reliability_branching) and parameter bindings. Verify defaults and bounds.
Concurrent-halt/statuses
cpp/src/dual_simplex/types.hpp, many cpp/src/dual_simplex/* files
Introduced CONCURRENT_HALT_RETURN (-2) and standardized replacing magic -2/-3 returns with it across modules. Audit all callers and status mappings.
Sparse/dense utilities
cpp/src/dual_simplex/sparse_matrix.hpp, cpp/src/dual_simplex/sparse_matrix.cpp, cpp/src/dual_simplex/sparse_vector.hpp, cpp/src/dual_simplex/sparse_vector.cpp, cpp/src/dual_simplex/dense_matrix.hpp
Added CSR append_rows/append_row, sparse_vector-from-CSR-row ctor, dot, squeeze, dense_matrix fill constructor, and changed check_matrix to return status. Review resizing, memory growth, and validation behavior.
Presolve & bounds strengthening
cpp/src/dual_simplex/presolve.cpp, cpp/src/dual_simplex/bounds_strengthening.hpp, cpp/src/dual_simplex/bounds_strengthening.cpp
Removed temporary CSR-row materialization, added early CONCURRENT_HALT_RETURN returns, and reordered bounds_strengthening signature to accept settings and external bounds_changed. Update callers and tolerances.
MIP node & related
cpp/src/dual_simplex/mip_node.hpp, cpp/src/dual_simplex/solution.hpp
Added integer_infeasible to mip_node_t and propagated through constructors and branch creation. Confirm node initialization and propagation.
Timer & timing integration
cpp/src/utilities/timer.hpp, cpp/src/dual_simplex/pseudo_costs.cpp, cpp/src/mip/diversity/recombiners/sub_mip.cuh, cpp/src/mip/diversity/lns/rins.cu
Added timer_t::get_tic_start() and switched timing to tic/toc in several places; B&B/SubMIP now receive/start times. Check conversion precision and cross-platform behavior.
GPU/low-level adjustments
cpp/src/dual_simplex/sparse_cholesky.cuh, cpp/src/dual_simplex/right_looking_lu.cpp, other cuDSS call sites
Normalized error/halt returns: generic failures use -1, concurrent-halt uses CONCURRENT_HALT_RETURN. Verify GPU error paths and macro consistency.
Misc internal edits
multiple cpp/src/dual_simplex/*, python/*, tests
Status enum changes (primal), timing/log formatting tweaks, copyright year bumps, relaxed test tolerances, small logging/format updates. Scan for regressions and ABI impacts.
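The concurrent-halt convention described above can be pictured with a minimal sketch. Only CONCURRENT_HALT_RETURN and its value (-2) come from the walkthrough; the function names and control flow here are hypothetical.

```cpp
#include <cassert>

// Hypothetical sketch of the status convention from the walkthrough:
// 0 = success, -1 = generic failure, CONCURRENT_HALT_RETURN = concurrent halt.
constexpr int CONCURRENT_HALT_RETURN = -2;

int factorize_step(bool halted, bool numeric_failure)
{
  if (halted) { return CONCURRENT_HALT_RETURN; }  // propagate halt, not an error
  if (numeric_failure) { return -1; }             // generic failure
  return 0;
}

int solve(bool halted)
{
  const int status = factorize_step(halted, false);
  if (status != 0) { return status; }  // callers must forward both halt and failure
  return 0;
}
```

The point of the named constant is that every caller forwards it unchanged, instead of each module inventing its own magic -2/-3.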

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — docstring coverage is 2.45%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (4 passed)
  • Description Check ✅ — check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ — the title accurately summarizes the primary change: adding cut-generation and cut-management functionality to the MIP solver.
  • Linked Issues Check ✅ — all coding objectives from issue #698 are met: cut infrastructure (pool, add/remove, scoring), MIR/Gomory/CG/knapsack cuts, LP integration, steepest-edge handling, reduced-cost fixing, crossover/basis fixes, concurrent-halt propagation, and solver options exposed.
  • Out of Scope Changes Check ✅ — all changes directly support the stated objectives. Minor updates (copyright years, timer functionality, logging refinements) are necessary infrastructure supporting the core cut implementation.


@chris-maes chris-maes self-assigned this Jan 29, 2026
@chris-maes chris-maes added feature request New feature or request non-breaking Introduces a non-breaking change labels Jan 29, 2026
@chris-maes chris-maes added this to the 26.02 milestone Jan 29, 2026
@chris-maes
Contributor Author

/ok to test 9ea0271


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
cpp/src/dual_simplex/basis_updates.cpp (1)

2274-2317: Fix incomplete error propagation from refactor_basis.

refactor_basis propagates -2 (lines 2284, 2317), but call sites handle it inconsistently:

  • phase2.cpp:2327, 3060, 3067: Check only > 0, missing negative return values
  • cuts.cpp:2682: Ignores return value entirely

Expected: all call sites check for -2 (and other error codes) and propagate appropriately, not just positive values. Use != 0 checks or explicit -2 handling where applicable.

cpp/src/dual_simplex/mip_node.hpp (1)

229-243: Missing integer_infeasible in detach_copy().

The detach_copy() method copies most node fields but does not copy integer_infeasible. If diving heuristics rely on this value, it could cause incorrect behavior or inconsistent logging.

🐛 Proposed fix
   mip_node_t<i_t, f_t> detach_copy() const
   {
     mip_node_t<i_t, f_t> copy(lower_bound, vstatus);
     copy.branch_var         = branch_var;
     copy.branch_dir         = branch_dir;
     copy.branch_var_lower   = branch_var_lower;
     copy.branch_var_upper   = branch_var_upper;
     copy.fractional_val     = fractional_val;
     copy.objective_estimate = objective_estimate;
     copy.node_id            = node_id;
+    copy.integer_infeasible = integer_infeasible;
     return copy;
   }
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/branch_and_bound.hpp`:
- Around line 154-159: Reads of original_lp_ fields (.lower, .upper, .obj_scale)
in diving_thread() and the various multithreaded heuristic routines must be
protected with the same mutex used for writes: acquire mutex_original_lp_ before
any access to original_lp_ and release it after the read; this applies to all
occurrences (e.g., the accesses noted in diving_thread() and the other
heuristic/thread functions that read original_lp_). Locate uses of original_lp_
in those functions, wrap each read (and any short sequence of dependent reads)
with a lock/unlock on mutex_original_lp_, and keep the critical section minimal
to avoid contention while ensuring consistent snapshot semantics matching the
existing locked writes.

In `@cpp/src/dual_simplex/crossover.cpp`:
- Around line 1381-1396: The dual-feasibility check is using phase-1 duals in
solution.z while vstatus reflects duals for the original LP; before calling
dual_infeasibility (or the gate that may call dual_phase2) restore or recompute
the original objective duals (y/z) rather than using the phase-1 solution.z, or
avoid overwriting solution.z when assigning phase1_solution; update the code
paths around phase1_solution, solution, dual_infeasibility, vstatus, and
dual_phase2 so that dual_infeasibility receives y/z corresponding to the
original LP objective (e.g., recompute duals for the original objective or
save/restore phase-1 duals prior to the gate).

In `@cpp/src/dual_simplex/cuts.cpp`:
- Around line 103-109: The cut_orthogonality_ vector is not being reset each
scoring pass because resize(cut_storage_.m, 1) won't overwrite existing entries
when the size is unchanged; change the reset logic to explicitly set all entries
to the default (e.g., use std::fill or assign) after sizing so
cut_orthogonality_ is fully overwritten for the current pass (refer to
cut_orthogonality_ and cut_storage_.m in the scoring function where
cut_distances_, cut_norms_, and cut_scores_ are resized).
- Around line 47-98: The distance and orthogonality computations can divide by
zero/near-zero norms; update cut_distance and cut_orthogonality to guard those
divisions by clamping any computed norm to a small positive epsilon before
dividing: in cut_distance, after computing cut_norm set something like
guarded_norm = std::max(cut_norm, eps) and use guarded_norm to compute distance;
in cut_orthogonality, compute guarded_norm_i = std::max(norm_i, eps) and
guarded_norm_j = std::max(norm_j, eps) and use those when dividing the absolute
dot; choose eps using a numeric limit for f_t (e.g.,
std::numeric_limits<f_t>::epsilon() or a small constant) so cut_norms_ and
cut_norm remain unchanged but divisions are stable.

In `@cpp/src/dual_simplex/cuts.hpp`:
- Around line 99-133: The function minimum_violation currently calls exit(1)
when it finds a non-cut; instead remove the process termination and return a
sentinel failure value (e.g., std::numeric_limits<f_t>::lowest() or -inf) so
callers can handle the error; update the loop in minimum_violation (which
computes Cx from C via to_compressed_col and matrix_vector_multiply into Cx and
compares to cut_rhs) to, upon detecting Cx[k] <= cut_rhs[k], optionally emit a
debug-only assertion or logging and then return the chosen sentinel (and do not
call exit), ensuring min_cut_violation is not used after this early return.

In `@cpp/src/mip/presolve/third_party_presolve.cpp`:
- Line 412: Guard the use of PAPILO_GITHASH so builds without that macro still
compile: add a fallback definition (e.g. before the CUOPT_LOG_INFO call) by
checking `#ifndef` PAPILO_GITHASH and `#define` PAPILO_GITHASH "unknown" (or wrap
the CUOPT_LOG_INFO invocation in an `#ifdef/`#else that logs a safe string).
Target the CUOPT_LOG_INFO(...) call that references PAPILO_GITHASH and ensure
PAPILO_GITHASH is defined to a string literal fallback when absent.
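Among the fixes above, the sentinel-return suggestion for minimum_violation might look like the following sketch. The signature and types here are assumptions for illustration, not the PR's code.

```cpp
#include <algorithm>
#include <cassert>
#include <limits>
#include <vector>

// Sketch of the sentinel-return pattern: instead of exit(1) when a candidate
// row is not actually violated (Cx[k] <= cut_rhs[k]), return -infinity so the
// caller can reject the batch and continue.
double minimum_violation(const std::vector<double>& Cx, const std::vector<double>& cut_rhs)
{
  double min_violation = std::numeric_limits<double>::infinity();
  for (std::size_t k = 0; k < Cx.size(); ++k) {
    const double violation = Cx[k] - cut_rhs[k];  // > 0 for a genuine cut
    if (violation <= 0.0) {
      return -std::numeric_limits<double>::infinity();  // sentinel: not a cut
    }
    min_violation = std::min(min_violation, violation);
  }
  return min_violation;
}
```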
🧹 Nitpick comments (9)
cpp/src/dual_simplex/sparse_matrix.cpp (1)

409-433: Consider adding column index bounds validation in append_row.

The append_row method does not validate that column indices in the input sparse vector are within bounds (c.i[k] < this->n). While the existing check_matrix method can detect this post-hoc, invalid indices could cause out-of-bounds access in downstream operations before validation occurs.

This is consistent with the approach in append_rows which validates C.n but not individual column indices. If this is intentional (relying on debug-only check_matrix validation per the codebase pattern), this can remain as-is.

cpp/tests/dual_simplex/unit_tests/solve.cpp (1)

329-478: Disabled test code should be enabled or removed.

This test block is wrapped in #if 0, so it will never compile or execute. Disabled test code adds maintenance burden without providing value. Consider:

  1. Enable the test if it's ready and the solve_linear_program_with_cuts function is available in this PR.
  2. Remove the code if it's not ready, and track it in an issue instead.
  3. Use a proper skip mechanism (e.g., GTEST_SKIP()) if the test should be temporarily disabled but still compile.
cpp/src/dual_simplex/basis_updates.cpp (1)

1114-1323: Add a debug‑only dimension check for cuts_basic.

append_cuts assumes cuts_basic.n == L0_.m. If violated, the Uᵀ solve and subsequent indexing are undefined. A CHECK_MATRIX‑guarded assert would catch misuse without impacting release performance.

🔧 Proposed fix
 i_t basis_update_mpf_t<i_t, f_t>::append_cuts(const csr_matrix_t<i_t, f_t>& cuts_basic)
 {
   const i_t m = L0_.m;
+#ifdef CHECK_MATRIX
+  assert(cuts_basic.n == m);
+#endif

Based on learnings: In the cuOPT dual simplex solver, CSR/CSC matrices (including the quadratic objective matrix Q) are required to have valid dimensions and indices by construction. Runtime bounds checking in performance-critical paths like matrix scaling is avoided to prevent slowdowns. Validation is performed via debug-only check_matrix() calls wrapped in #ifdef CHECK_MATRIX.

cpp/include/cuopt/linear_programming/mip/solver_settings.hpp (1)

81-95: Consider adding brief documentation for new MIP settings.

The new public data members control important MIP solver behavior. While these are internal headers per project conventions, brief inline comments would improve maintainability. For example:

i_t node_limit = ...;                // Maximum nodes to explore (max = unlimited)
i_t reliability_branching = -1;      // -1=auto, 0=off, >0=enabled
i_t max_cut_passes = 0;              // Number of cutting-plane rounds at root
i_t mir_cuts = -1;                   // -1=auto, 0=off, 1=on (Mixed Integer Rounding)
// etc.

The defaults and structure are consistent with simplex_solver_settings.hpp.

cpp/src/dual_simplex/branch_and_bound.cpp (4)

342-406: Verify numerical stability in reduced cost fixing calculations.

The function uses hardcoded thresholds (threshold = 1e-3, weaken = 1e-5) for bound tightening. Consider:

  1. Line 359: The !std::isfinite check correctly guards against NaN/Inf reduced costs, but the warning is emitted via printf, bypassing the logger.
  2. Lines 370-373 and 381-384: The weaken factor loosens bounds slightly to avoid numerical issues, which is good practice.

The algorithm appears correct for reduced cost fixing. The printf at Line 360 should use the logger for consistency.

Use logger instead of printf
     if (!std::isfinite(reduced_costs[j])) {
-      printf("Variable %d reduced cost is %e\n", j, reduced_costs[j]);
+      settings_.log.printf("Variable %d reduced cost is %e\n", j, reduced_costs[j]);
       continue;
     }

408-458: Verify mutex ordering to prevent potential deadlocks.

The set_new_solution function acquires multiple mutexes: mutex_original_lp_ (Lines 411, 430, 433), and mutex_upper_ (Lines 423, 434, 450). While the current pattern appears safe (locks are released before acquiring new ones), the interleaved locking could be fragile.

Consider documenting the expected lock ordering or consolidating critical sections where possible to reduce complexity.


1856-1858: Remove debug timing output or make it conditional.

Multiple timing blocks use if (1 || ...) which always prints timing information:

  • Line 1856: if (1 || cut_generation_time > 1.0)
  • Line 1863: if (1 || score_time > 1.0)
  • Line 1918: if (1 || add_cuts_time > 1.0)
  • Line 1958: if (1 || node_presolve_time > 1.0)
  • Line 1989: if (1 || dual_phase2_time > 1.0)
  • Line 2025: if (1 || remove_cuts_time > 1.0)

These appear to be debug statements. Consider removing the 1 || to only log when operations exceed 1 second, or guard with a debug flag.

Remove unconditional timing output
-      if (1 || cut_generation_time > 1.0) {
+      if (cut_generation_time > 1.0) {
         settings_.log.printf("Cut generation time %.2f seconds\n", cut_generation_time);
       }

Apply similarly to the other occurrences.

Also applies to: 1862-1865, 1918-1920, 1958-1960, 1989-1991, 2025-2027


1873-1877: Remove or guard debug print statements.

Lines 1873-1877 contain unconditional debug output (#if 1 and direct printf). This should be removed or guarded with a debug macro for release builds.

Guard debug output
-#if 1
-      cut_pool.print_cutpool_types();
-      print_cut_types("In LP      ", cut_types, settings_);
-      printf("Cut pool size: %d\n", cut_pool.pool_size());
-#endif
+#ifdef DEBUG_CUTS
+      cut_pool.print_cutpool_types();
+      print_cut_types("In LP      ", cut_types, settings_);
+      settings_.log.printf("Cut pool size: %d\n", cut_pool.pool_size());
+#endif
cpp/src/dual_simplex/cuts.cpp (1)

2345-2685: Remove unused solution parameter from add_cuts().

The solution parameter is declared but never used in the function body, creating confusion about responsibilities. The caller (branch_and_bound.cpp) independently resizes root_relax_soln_ post-call anyway (lines 1963–1965), so the parameter serves no purpose. Either remove it from the signature or, if the intent is for add_cuts() to manage solution resizing, use it within the function to resize for the expanded problem. Current design creates maintenance risk without providing value.

@chris-maes
Contributor Author

/ok to test 4c086ac


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/cuts.cpp`:
- Around line 770-772: The loop currently exits completely when
negate_inequality == -1 (the break on that condition), which prevents generating
MIR cuts for subsequent rows; change the control flow to skip the current
iteration instead of breaking the loop by replacing the break with continue so
other rows are still processed (locate the check of negate_inequality in
cuts.cpp where the TODO comment sits and make the single-line replacement from
break to continue).
- Around line 431-436: In greedy_knapsack_problem, protect the division
ratios[i] = values[i] / weights[i] against zero (or near-zero) weights: detect
if weights[i] is zero (or fabs(weights[i]) < epsilon) and set ratios[i]
appropriately (e.g., +infinity when weight==0 and value>0, -infinity when
value<0, or 0 when value==0) using std::numeric_limits<f_t>::infinity();
otherwise compute the normal division; update the ratios vector calculation to
use this guarded logic so no division-by-zero occurs.
- Around line 2761-2788: The function write_solution_for_cut_verification
currently opens a raw FILE* (fid) and may leak it if any operation throws or
early-returns; replace the raw FILE* use with an RAII wrapper (e.g.,
std::unique_ptr<FILE, decltype(&fclose)> or a small scope-guard) so fclose is
guaranteed on all exit paths, or switch to std::ofstream; update the code that
writes to fid (fprintf calls) to use the chosen RAII-managed stream or the
managed FILE pointer while preserving behavior in
write_solution_for_cut_verification and ensure fclose is removed from manual
calls.
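The RAII FILE* suggestion above can be sketched with std::unique_ptr and a custom deleter; the helper name open_file is hypothetical.

```cpp
#include <cstdio>
#include <memory>

// RAII wrapper for FILE*: fclose runs on every exit path, including early
// returns and exceptions, so the handle can never leak.
using file_ptr = std::unique_ptr<std::FILE, int (*)(std::FILE*)>;

file_ptr open_file(const char* path, const char* mode)
{
  return file_ptr(std::fopen(path, mode), &std::fclose);
}
```

Existing fprintf calls keep working via fid.get(), and the manual fclose is simply deleted.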
🧹 Nitpick comments (2)
cpp/src/dual_simplex/cuts.hpp (1)

442-470: Verify parameter ordering consistency in add_cuts and remove_cuts.

The add_cuts and remove_cuts functions have different parameter orderings (e.g., settings position, Arow presence). While not incorrect, this inconsistency could lead to confusion at call sites.

cpp/src/dual_simplex/cuts.cpp (1)

361-363: NaN comparison for early return is fragile.

The check objective != objective is an idiomatic NaN test, but this pattern can be optimized away by aggressive compilers with -ffast-math. Consider using std::isnan() for clarity and robustness.

♻️ Suggested fix
-  if (objective != objective) { return -1; }
+  if (std::isnan(objective)) { return -1; }

@chris-maes
Contributor Author

chris-maes commented Jan 29, 2026

/ok to test 606c482


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/branch_and_bound.cpp`:
- Around line 323-339: The division that computes iter_node uses
exploration_stats_.total_lp_iters / nodes_explored and can divide by zero when
nodes_explored is 0; modify the report code around iter_node so you check
nodes_explored (the i_t variable nodes_explored) before dividing: if
nodes_explored > 0 compute iter_node = exploration_stats_.total_lp_iters /
static_cast<f_t>(nodes_explored) else set iter_node to 0.0 (or another safe
sentinel), then use iter_node in the printf; this prevents a crash and preserves
numeric stability in report/branch_and_bound.cpp.
- Around line 351-367: The loop performing reduced‑cost fixing must skip any
tightening when the current gap is non‑positive; add a guard at the start of the
per‑column logic (inside the for over reduced_costs in branch_and_bound.cpp)
that checks upper_bound <= root_obj (or abs_gap <= 0) and continues to next j if
true, so that compute_objective(root_obj) and abs_gap are validated before using
them to tighten bounds (symbols to update: root_obj, upper_bound, abs_gap,
reduced_costs loop and the per‑column tightening logic).
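For context, the reduced-cost tightening with the suggested non-positive-gap guard can be sketched as follows. Names such as root_obj and upper_bound follow the comment above; the function itself is hypothetical, not the PR's code.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Illustrative reduced-cost fixing at the root: with dual bound root_obj,
// incumbent objective upper_bound, and reduced cost d_j > 0 for a variable
// nonbasic at its lower bound l_j, every improving solution satisfies
//   x_j <= l_j + (upper_bound - root_obj) / d_j.
// Returns the (possibly) tightened upper bound; skips tightening when the
// gap is non-positive, per the guard suggested above.
double tighten_upper_bound(double l_j, double u_j, double d_j,
                           double root_obj, double upper_bound)
{
  const double abs_gap = upper_bound - root_obj;
  if (abs_gap <= 0.0 || d_j <= 0.0) { return u_j; }  // nothing safe to deduce
  return std::min(u_j, l_j + abs_gap / d_j);
}
```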
🧹 Nitpick comments (2)
cpp/src/dual_simplex/cuts.cpp (2)

16-44: Track cut deduplication TODO.

The TODO indicates planned dedup logic in a hot path; consider creating a follow-up issue to avoid pool bloat.

Would you like me to draft an issue template or a minimal dedup sketch?


225-229: drop_cuts() is still a stub.

If this is intentionally deferred, please track it; otherwise I can help sketch a safe policy.

Would you like me to propose a simple age/score-based eviction policy?

@chris-maes
Contributor Author

/ok to test 606c482

@chris-maes
Contributor Author

/ok to test fb50454


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/cuts.hpp`:
- Around line 148-167: Protect the division in cut_orthogonality by checking
norms before dividing: inside the cut_orthogonality(i_t i, i_t j)
implementation, compute norm_i and norm_j as before but if either is below
settings_.zero_tol treat the pair as maximally non-aligned by returning 1.0;
otherwise compute and return 1.0 - std::abs(dot) / (norm_i * norm_j). This uses
the existing settings_.zero_tol guard to avoid dividing by near-zero values and
preserves current return semantics.
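The guarded orthogonality score described above amounts to the following sketch (a free function for illustration; the real method lives on the cut pool class and uses settings_.zero_tol):

```cpp
#include <cassert>
#include <cmath>

// Orthogonality of two cuts given their dot product and norms: returns 1
// (maximally non-aligned) when either norm is below zero_tol, otherwise
// 1 - |dot| / (norm_i * norm_j), which is 0 for parallel cuts.
double guarded_orthogonality(double dot, double norm_i, double norm_j,
                             double zero_tol = 1e-12)
{
  if (norm_i < zero_tol || norm_j < zero_tol) { return 1.0; }
  return 1.0 - std::abs(dot) / (norm_i * norm_j);
}
```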
🧹 Nitpick comments (2)
cpp/src/dual_simplex/cuts.hpp (2)

65-91: Consider reusing cut_info_t to avoid duplicate counting logic.

The print_cut_types function duplicates the cut-type counting logic that already exists in cut_info_t::record_cut_types. This could be simplified:

♻️ Suggested refactor
 template <typename i_t, typename f_t>
 void print_cut_types(const std::string& prefix,
                      const std::vector<cut_type_t>& cut_types,
                      const simplex_solver_settings_t<i_t, f_t>& settings)
 {
-  i_t num_gomory_cuts   = 0;
-  i_t num_mir_cuts      = 0;
-  i_t num_knapsack_cuts = 0;
-  i_t num_cg_cuts       = 0;
-  for (i_t i = 0; i < cut_types.size(); i++) {
-    if (cut_types[i] == cut_type_t::MIXED_INTEGER_GOMORY) {
-      num_gomory_cuts++;
-    } else if (cut_types[i] == cut_type_t::MIXED_INTEGER_ROUNDING) {
-      num_mir_cuts++;
-    } else if (cut_types[i] == cut_type_t::KNAPSACK) {
-      num_knapsack_cuts++;
-    } else if (cut_types[i] == cut_type_t::CHVATAL_GOMORY) {
-      num_cg_cuts++;
-    }
-  }
+  cut_info_t<i_t, f_t> info;
+  // Note: record_cut_types takes non-const ref, may need adjustment
+  auto mutable_types = cut_types;
+  info.record_cut_types(mutable_types);
   settings.log.printf("%s: Gomory cuts: %d, MIR cuts: %d, Knapsack cuts: %d CG cuts: %d\n",
                       prefix.c_str(),
-                      num_gomory_cuts,
-                      num_mir_cuts,
-                      num_knapsack_cuts,
-                      num_cg_cuts);
+                      info.num_gomory_cuts,
+                      info.num_mir_cuts,
+                      info.num_knapsack_cuts,
+                      info.num_cg_cuts);
 }

Also, the loop at line 74 compares signed i_t with unsigned size_t from cut_types.size(), which may trigger compiler warnings. Consider using size_t for the loop index or casting.


267-304: Consider documenting the workspace vector invariants.

The tableau_equality_t class maintains several workspace vectors (b_bar_, x_workspace_, c_workspace_, etc.) and marking arrays (nonbasic_mark_, x_mark_). Consider adding brief comments documenting when these workspaces are valid/dirty, especially since generate_base_equality likely modifies them and may require reset between calls.

@chris-maes chris-maes mentioned this pull request Feb 3, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/src/dual_simplex/bounds_strengthening.cpp (1)

243-277: ⚠️ Potential issue | 🟡 Minor

Debug code references out-of-scope variable problem.

When DEBUG_BOUND_STRENGTHENING is enabled, the code references problem.lower and problem.upper, but problem is not accessible in this method's scope (only A is stored as a member). This would cause a compilation error if debug is enabled.

🔧 Suggested fix: use parameter bounds instead
   for (i_t i = 0; i < n; ++i) {
-    if (lower[i] > problem.lower[i] + settings.primal_tol ||
-        (!std::isfinite(problem.lower[i]) && std::isfinite(lower[i]))) {
+    if (lower[i] > lower_bounds[i] + settings.primal_tol ||
+        (!std::isfinite(lower_bounds[i]) && std::isfinite(lower[i]))) {
       num_lb_changed++;
       lb_change +=
-        std::isfinite(problem.lower[i])
-          ? (lower[i] - problem.lower[i]) / (1e-6 + std::max(abs(lower[i]), abs(problem.lower[i])))
+        std::isfinite(lower_bounds[i])
+          ? (lower[i] - lower_bounds[i]) / (1e-6 + std::max(abs(lower[i]), abs(lower_bounds[i])))
           : 1.0;
     }
-    if (upper[i] < problem.upper[i] - settings.primal_tol ||
-        (!std::isfinite(problem.upper[i]) && std::isfinite(upper[i]))) {
+    if (upper[i] < upper_bounds[i] - settings.primal_tol ||
+        (!std::isfinite(upper_bounds[i]) && std::isfinite(upper[i]))) {
       num_ub_changed++;
       ub_change +=
-        std::isfinite(problem.upper[i])
-          ? (problem.upper[i] - upper[i]) / (1e-6 + std::max(abs(problem.upper[i]), abs(upper[i])))
+        std::isfinite(upper_bounds[i])
+          ? (upper_bounds[i] - upper[i]) / (1e-6 + std::max(abs(upper_bounds[i]), abs(upper[i])))
           : 1.0;
     }
   }

Note: Since lower_bounds and upper_bounds are modified at lines 280-281, you may need to save the original values at the start of the method if you want to compare against the input bounds.

🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/bounds_strengthening.cpp`:
- Around line 92-96: The function bounds_strengthening in
bounds_strengthening.cpp uses the parameter name bounds_changed (plural) but the
header declares bound_changed (singular); rename the parameter to bound_changed
to match the header and update all internal references (the usages around the
current checks at the spots corresponding to the previous lines 105 and 108) to
use bound_changed so the implementation and declaration are consistent.
🧹 Nitpick comments (1)
cpp/src/dual_simplex/bounds_strengthening.cpp (1)

15-27: Consider guarding against near-zero coefficients for numerical stability.

The update_lb and update_ub functions divide by coeff without checking for near-zero values. While sparse matrix entries should be meaningfully non-zero, division by very small coefficients could produce extremely large bounds that may cause numerical instability downstream.

🛡️ Optional: Add near-zero coefficient guard
 template <typename f_t>
 static inline f_t update_lb(f_t curr_lb, f_t coeff, f_t delta_min_act, f_t delta_max_act)
 {
+  constexpr f_t coeff_tol = 1e-12;
+  if (std::abs(coeff) < coeff_tol) { return curr_lb; }
   auto comp_bnd = (coeff < 0.) ? delta_min_act / coeff : delta_max_act / coeff;
   return std::max(curr_lb, comp_bnd);
 }

 template <typename f_t>
 static inline f_t update_ub(f_t curr_ub, f_t coeff, f_t delta_min_act, f_t delta_max_act)
 {
+  constexpr f_t coeff_tol = 1e-12;
+  if (std::abs(coeff) < coeff_tol) { return curr_ub; }
   auto comp_bnd = (coeff < 0.) ? delta_max_act / coeff : delta_min_act / coeff;
   return std::min(curr_ub, comp_bnd);
 }

Based on learnings: guard near-zero norms in cut-related computations for numerical stability.
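As an illustration of the guard, here are standalone `double` versions of the two updates (the `coeff_tol` threshold is an assumption from the suggestion above, not a constant taken from the codebase); a near-zero coefficient leaves the bound unchanged instead of producing an enormous implied bound:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

constexpr double coeff_tol = 1e-12;  // assumed threshold

double update_lb(double curr_lb, double coeff, double delta_min_act, double delta_max_act)
{
  if (std::abs(coeff) < coeff_tol) return curr_lb;  // skip: division would blow up
  double comp_bnd = (coeff < 0.) ? delta_min_act / coeff : delta_max_act / coeff;
  return std::max(curr_lb, comp_bnd);
}

double update_ub(double curr_ub, double coeff, double delta_min_act, double delta_max_act)
{
  if (std::abs(coeff) < coeff_tol) return curr_ub;
  double comp_bnd = (coeff < 0.) ? delta_max_act / coeff : delta_min_act / coeff;
  return std::min(curr_ub, comp_bnd);
}
```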

@chris-maes
Contributor Author

/ok to test 93e8135


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/bounds_strengthening.hpp`:
- Around line 23-24: The documentation comment uses the wrong parameter name
"bound_changed" — update it to match the actual parameter name "bounds_changed"
(plural) used in the function declaration in bounds_strengthening.hpp so the
comment reads: "If bounds_changed is empty, then all constraints are scanned for
changes. Otherwise, bounds_changed must be a vector of length n, where n is the
number of variables." Ensure the comment and the parameter name "bounds_changed"
are consistent.
- Around line 25-28: The function declaration for bounds_strengthening must
validate the size of the bounds_changed vector before it is used; add an
explicit check in the implementation of bounds_strengthening that if
(!bounds_changed.empty() && bounds_changed.size() != A.n) then fail fast (e.g.,
return false or throw a clear std::invalid_argument) to avoid out-of-bounds
access when the loop reads bounds_changed[j]; locate the check in the
bounds_strengthening implementation (the loop referencing bounds_changed at
lines ~107-116) and perform this validation prior to entering that loop, using
the same error handling style as the function (returning failure or raising an
error consistent with surrounding code).

Comment on lines +25 to +28
bool bounds_strengthening(const simplex_solver_settings_t<i_t, f_t>& settings,
const std::vector<bool>& bounds_changed,
std::vector<f_t>& lower_bounds,
std::vector<f_t>& upper_bounds);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed (repository: NVIDIA/cuopt):

# Find the implementation file for bounds_strengthening
fd -e cpp . | grep -i bounds_strengthening

cat -n cpp/src/dual_simplex/bounds_strengthening.cpp

cat -n cpp/src/dual_simplex/bounds_strengthening.hpp


Add size validation for bounds_changed parameter.

The documented contract requires bounds_changed to have length equal to the number of variables when non-empty. The implementation accesses bounds_changed[j] in the loop (line 107-116) without validating that bounds_changed.size() == A.n, which allows out-of-bounds access on misuse. Add explicit size validation before the loop to fail fast with a clear error rather than undefined behavior.

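A minimal standalone sketch of such a fail-fast check (the function name and the `n` parameter, standing in for `A.n`, are illustrative; the real implementation would return failure or throw in whatever style the surrounding code uses):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical validation mirroring the documented contract: an empty vector
// means "scan all constraints"; otherwise the length must equal the number of
// variables n, or indexing bounds_changed[j] in the loop is out of bounds.
bool validate_bounds_changed(const std::vector<bool>& bounds_changed, std::size_t n)
{
  return bounds_changed.empty() || bounds_changed.size() == n;
}
```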

@chris-maes
Contributor Author

/ok to test f2134da

Contributor

@akifcorduk akifcorduk left a comment


I could not review the specific cut generation code in detail because I lack the theory background, but the rest looks good.

}
if (add_cuts_status != 0) {
settings_.log.printf("Failed to add cuts\n");
return mip_status_t::NUMERICAL;
Contributor


Why are we returning NUMERICAL if we can't add cuts? Can't we just continue with the original problem?

Contributor Author


Ideally, we would drop the cuts and continue. I have some code that attempted this, but it is not trivial to remove cuts. I will need to add that in a follow-up PR.

Contributor


Can't we use a copy of the problem before cuts?

Contributor Author


It's not just the problem; there is quite a bit of additional state.

bool presolve_success = dm.run_presolve(timer_.remaining_time());
dm.timer = timer_;
const bool run_presolve = context.settings.presolve;
bool presolve_success = run_presolve ? dm.run_presolve(timer_.remaining_time()) : true;
Contributor


Don't you think it makes sense to have a separation between third-party and internal presolve? Internal presolve has other important components, like the probing cache, that impact heuristic quality.

Contributor Author


I think when you want presolve off, you want all presolve off. But we can follow up and make presolve an int: presolve = 0 is off, presolve = 1 is cuOpt internal, presolve = 2 is cuOpt internal + paplio, and presolve = -1 is automatic.
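A hypothetical sketch of what that integer-valued option could look like (names and values are illustrative only, not the actual cuOpt API):

```cpp
// Illustrative sketch of the proposed presolve levels; not the real cuOpt API.
enum class presolve_level_t : int {
  automatic            = -1,  // solver decides
  off                  = 0,   // all presolve disabled
  internal             = 1,   // cuOpt internal presolve only (probing cache, etc.)
  internal_third_party = 2    // internal plus third-party presolve
};
```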

Contributor

@aliceb-nv aliceb-nv left a comment


Amazing work Chris 😊 I like the work-estimate system you're using in the cut generation.
Very minor nitpicks, approved.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@cpp/include/cuopt/linear_programming/mip/solver_settings.hpp`:
- Around line 86-99: The struct layout in solver_settings (fields like
num_cpu_threads, max_cut_passes, mir_cuts, mixed_integer_gomory_cuts,
knapsack_cuts, strong_chvatal_gomory_cuts, reduced_cost_strengthening,
cut_change_threshold, cut_min_orthogonality, mip_batch_pdlp_strong_branching,
num_gpus) was modified mid‑struct which breaks ABI; to fix, do not insert new
fields in the middle—either move these new fields to the very end of the public
struct, or replace them with a fixed-size reserved/padding array (e.g.,
reserved[/*N*/]) to preserve layout and expose new fields via a versioned
settings struct or accessor API; ensure default values are preserved and add a
comment documenting the versioning/reserved usage so binary clients remain
compatible.
🧹 Nitpick comments (2)
cpp/src/dual_simplex/branch_and_bound.hpp (2)

130-132: Document bounds vector context for reduced‑cost fixing.

Please clarify whether lower_bounds/upper_bounds are in the original LP space (and whether slacks are included) to avoid index-mapping mistakes.

As per coding guidelines: Ensure variables and constraints are accessed from the correct problem context (original vs presolve vs folded vs postsolve); verify index mapping consistency across problem transformations.


159-164: Clarify mutex scope to the root cut‑pass window.

The current comment implies heuristic threads must always lock, but the mutex is intended specifically for the root cut‑pass phase when set_new_solution() can race with original_lp_ modification. Consider tightening the comment to match the intended usage window.

Proposed comment refinement
-  // The heuristics threads look at the original LP. But the main thread modifies the
-  // size of the original LP by adding slacks for cuts. Heuristic threads should lock
-  // this mutex when accessing the original LP. The main thread should lock this mutex
-  // when modifying the original LP.
+  // The main thread modifies original_lp_ during the root cut‑pass phase.
+  // set_new_solution() can be called concurrently in that phase, so access to
+  // original_lp_ must be protected by this mutex then. After the OpenMP barrier,
+  // original_lp_ is fixed and heuristic threads can read it without locking.

Based on learnings: In cpp/src/dual_simplex/branch_and_bound.hpp and branch_and_bound.cpp, the mutex_original_lp_ protects original_lp_ specifically during the root cut pass phase. The public API set_new_solution() can be called from external threads during this phase while the main thread is adding cuts/slacks to original_lp_. Functions like best_first_thread, exploration_ramp_up, and plunge_from execute after the cut passes complete (after the OpenMP parallel barrier) when original_lp_ is fixed, so they don't need mutex protection for their reads.

@chris-maes
Contributor Author

/ok to test e3cd67e


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/basis_updates.cpp`:
- Around line 1183-1192: The division theta = dot / mu is unsafe when
mu_values_[k] is near zero; in function(s) performing this (referencing
mu_values_, mu, theta, dot_product, and add_sparse_column) add a guard that
checks std::abs(mu) against the same stability threshold used in update() (e.g.,
zero_tol) before dividing; if |mu| is below the threshold, either skip the V
update (do not call add_sparse_column) and log/flag this event or
assert/fallback to the stable refactor path so you avoid producing an inflated
theta and corrupting the basis.
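A minimal sketch of the suggested guard (`zero_tol` here is an assumed stand-in for the threshold used in update(); an empty optional signals the caller to skip the V update and take the stable refactorization path):

```cpp
#include <cassert>
#include <cmath>
#include <optional>

// Guarded pivot ratio: refuse to divide by a pivot value mu that is too small
// to be numerically reliable, instead of returning an inflated theta.
template <typename f_t>
std::optional<f_t> safe_theta(f_t dot, f_t mu, f_t zero_tol = f_t(1e-12))
{
  if (std::abs(mu) < zero_tol) {
    return std::nullopt;  // caller should refactor rather than update
  }
  return dot / mu;
}
```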
🧹 Nitpick comments (5)
cpp/src/dual_simplex/bound_flipping_ratio_test.hpp (1)

16-19: Use constexpr or enum class instead of macros for proper namespace scoping.

Preprocessor macros are resolved before compilation and don't respect C++ namespace scope. These RATIO_TEST_* macros will pollute the global namespace despite being defined within cuopt::linear_programming::dual_simplex. Using constexpr constants or an enum class provides type safety and proper namespace containment.

Additionally, while CONCURRENT_HALT_RETURN is available through transitive inclusion (initial_basis.hpp and simplex_solver_settings.hpp both include types.hpp), relying on transitive includes is fragile—any changes to those header includes could break the chain. Consider directly including types.hpp or defining CONCURRENT_HALT_RETURN locally if this header is intended to be self-contained.

♻️ Proposed refactor using constexpr
 namespace cuopt::linear_programming::dual_simplex {

-#define RATIO_TEST_NO_ENTERING_VARIABLE -1
-#define RATIO_TEST_CONCURRENT_LIMIT     CONCURRENT_HALT_RETURN  // -2
-#define RATIO_TEST_TIME_LIMIT           -3
-#define RATIO_TEST_NUMERICAL_ISSUES     -4
+inline constexpr int RATIO_TEST_NO_ENTERING_VARIABLE = -1;
+inline constexpr int RATIO_TEST_CONCURRENT_LIMIT     = CONCURRENT_HALT_RETURN;  // -2
+inline constexpr int RATIO_TEST_TIME_LIMIT           = -3;
+inline constexpr int RATIO_TEST_NUMERICAL_ISSUES     = -4;

Or alternatively, use a scoped enum:

enum class ratio_test_status_t : int {
  no_entering_variable = -1,
  concurrent_limit     = -2,
  time_limit           = -3,
  numerical_issues     = -4
};
cpp/src/dual_simplex/phase2.cpp (2)

1982-2005: Use solver tolerances instead of a hard-coded 1e-10.
A fixed 1e-10 can be too strict for scaled problems; using a settings tolerance keeps behavior consistent with other feasibility checks.

♻️ Suggested adjustment
-  f_t tol     = 1e-10;
+  const f_t tol = settings.zero_tol;  // or settings.dual_tol, depending on intent
As per coding guidelines, check numerical stability: prevent overflow/underflow, precision loss, division by zero/near-zero, and use epsilon comparisons for floating-point equality checks.

2482-2490: Clarify that logged “Sum Inf.” is squared.
This line prints primal_infeasibility_squared; consider renaming the column (e.g., “Sum Inf.^2”) or logging the unsquared value to avoid confusion.

cpp/src/dual_simplex/cuts.hpp (2)

9-16: Make this header self-contained by adding missing standard headers.

This file uses std::array, std::vector, std::string, std::iota, std::sort, and int8_t but only includes <cmath>. Relying on transitive includes is brittle and can break builds when include order changes.

🔧 Proposed include additions
 #include <dual_simplex/user_problem.hpp>
 
+#include <algorithm>
+#include <array>
+#include <cstdint>
 #include <cmath>
+#include <numeric>
+#include <string>
+#include <vector>

272-275: Avoid copying nonbasic_list in the tableau_equality_t constructor.

nonbasic_list can be large; passing by value adds an unnecessary copy.

♻️ Proposed change
   tableau_equality_t(const lp_problem_t<i_t, f_t>& lp,
                      basis_update_mpf_t<i_t, f_t>& basis_update,
-                     const std::vector<i_t> nonbasic_list)
+                     const std::vector<i_t>& nonbasic_list)

@chris-maes
Contributor Author

/ok to test 2bf5f9d


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/cuts.hpp`:
- Around line 55-79: The printf calls in print_cut_info and print_cut_types use
"%d" to print cut_info.num_cuts[i] which has type i_t; change those format
specifiers to "%lld" and cast the value to (long long) (prefer using
static_cast<long long>(cut_info.num_cuts[i])) in each settings.log.printf call
so printing is safe when i_t is a 64-bit integer; update every occurrence (both
in print_cut_info and print_cut_types) where num_cuts[i] is passed to
settings.log.printf.
🧹 Nitpick comments (2)
cpp/src/dual_simplex/bound_flipping_ratio_test.hpp (1)

17-21: Consider replacing macros with constexpr or enum for namespace-scoped constants.

These #define directives are placed inside the namespace block, but preprocessor macros ignore namespace boundaries and will pollute the global preprocessor namespace. This can lead to name collisions and unexpected macro substitutions elsewhere in the codebase.

Using constexpr constants or an enum class would provide proper namespace scoping and type safety.

♻️ Proposed refactor using constexpr
 namespace cuopt::linear_programming::dual_simplex {

-#define RATIO_TEST_NO_ENTERING_VARIABLE -1
-#define RATIO_TEST_CONCURRENT_LIMIT     CONCURRENT_HALT_RETURN  // -2
-#define RATIO_TEST_TIME_LIMIT           -3
-#define RATIO_TEST_NUMERICAL_ISSUES     -4
+// Return codes for ratio test
+template <typename i_t>
+struct ratio_test_status {
+  static constexpr i_t no_entering_variable = -1;
+  static constexpr i_t concurrent_limit     = CONCURRENT_HALT_RETURN;  // -2
+  static constexpr i_t time_limit           = -3;
+  static constexpr i_t numerical_issues     = -4;
+};

Note: If CONCURRENT_HALT_RETURN is itself a macro from types.hpp, that dependency would need to be addressed first for the constexpr approach to work cleanly.

cpp/src/dual_simplex/cuts.hpp (1)

35-53: Add a defensive guard for invalid cut types.

Line 48 indexes num_cuts without validating cut_type. If MAX_CUT_TYPE (or any invalid value) slips into the vector, this becomes out-of-bounds. Consider skipping or asserting invalid values.

🔧 Suggested guard
   void record_cut_types(const std::vector<cut_type_t>& cut_types)
   {
     for (cut_type_t cut_type : cut_types) {
-      num_cuts[static_cast<int>(cut_type)]++;
+      const int idx = static_cast<int>(cut_type);
+      if (idx >= 0 && idx < MAX_CUT_TYPE) {
+        num_cuts[idx]++;
+      } else {
+        // Optional: assert or log unexpected cut type
+      }
     }
   }

Comment on lines +55 to +79
template <typename i_t, typename f_t>
void print_cut_info(const simplex_solver_settings_t<i_t, f_t>& settings,
const cut_info_t<i_t, f_t>& cut_info)
{
if (cut_info.has_cuts()) {
for (i_t i = 0; i < MAX_CUT_TYPE; i++) {
settings.log.printf("%s cuts : %d\n", cut_info.cut_type_names[i], cut_info.num_cuts[i]);
}
}
}

template <typename i_t, typename f_t>
void print_cut_types(const std::string& prefix,
const std::vector<cut_type_t>& cut_types,
const simplex_solver_settings_t<i_t, f_t>& settings)
{
cut_info_t<i_t, f_t> cut_info;
cut_info.record_cut_types(cut_types);
settings.log.printf("%s: ", prefix.c_str());
for (i_t i = 0; i < MAX_CUT_TYPE; i++) {
settings.log.printf("%s cuts: %d ", cut_info.cut_type_names[i], cut_info.num_cuts[i]);
if (i < MAX_CUT_TYPE - 1) { settings.log.printf(", "); }
}
settings.log.printf("\n");
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Scripts executed (repository: NVIDIA/cuopt):

# First, examine the actual file to confirm the code at lines 55-79
cat -n cpp/src/dual_simplex/cuts.hpp | sed -n '55,79p'

# Search for i_t typedef/using definitions in the codebase
rg -n 'using\s+i_t\s*=|typedef\s+.*\si_t\b' --type=cpp --type=hpp --type=h -A 2

# Search for log.printf signature and logging implementation
rg -n 'log\.printf|struct.*logger|class.*log' --type=cpp --type=hpp --type=h -B 1 -A 3 | head -100

# Check for existing printf usage with i_t in the codebase
rg -n 'printf.*%d.*num_cuts|printf.*%lld.*num_cuts' --type=cpp --type=hpp --type=h

# Find cut_info_t definition to see what num_cuts is
rg -n 'struct cut_info_t|class cut_info_t' -A 20 | head -80

# Find i_t typedef in the codebase
rg -n 'using i_t|typedef.*i_t' -A 2 | head -60

# Find the log field type and printf signature
rg -n 'struct.*solver_settings|class.*solver_settings' -A 30 | head -100

# Check if there's a logger or log class definition
rg -n 'printf.*\(' | grep -E 'log\.|logger' | head -20

# Search for all template instantiations of solver_settings_t
rg -n 'template class.*solver_settings_t|instantiate.*solver_settings_t' -A 2

# Check if i_t is ever defined as a 64-bit type
rg -n 'using i_t\s*=|typedef.*(long long|int64|int32).*i_t' -A 1

# Search for cuopt_int_t definition to understand the type mapping
rg -n 'cuopt_int_t|using cuopt_int_t' -A 2 | head -40

# Check print_cut_info and print_cut_types usages to see with what types they're called
rg -n 'print_cut_info|print_cut_types' -B 2 -A 2


Use format specifiers compatible with template parameter i_t.

Lines 61 and 75 use %d to print num_cuts[i], which is of type i_t. Although current template instantiations use int, the template is generic and could be instantiated with 64-bit types (e.g., int64_t), causing undefined behavior with %d. Use a cast and %lld to ensure type-safe formatting:

Suggested fix
-      settings.log.printf("%s cuts : %d\n", cut_info.cut_type_names[i], cut_info.num_cuts[i]);
+      settings.log.printf("%s cuts : %lld\n",
+                          cut_info.cut_type_names[i],
+                          static_cast<long long>(cut_info.num_cuts[i]));
...
-    settings.log.printf("%s cuts: %d ", cut_info.cut_type_names[i], cut_info.num_cuts[i]);
+    settings.log.printf("%s cuts: %lld ",
+                        cut_info.cut_type_names[i],
+                        static_cast<long long>(cut_info.num_cuts[i]));

@chris-maes
Contributor Author

/ok to test 039a729

@copy-pr-bot

copy-pr-bot bot commented Feb 4, 2026

/ok to test 039a729

@chris-maes, there was an error processing your request: E2

See the following link for more information: https://docs.gha-runners.nvidia.com/cpr/e/2/

@chris-maes
Contributor Author

/ok to test 6188622

@chris-maes
Contributor Author

/merge

@rapids-bot rapids-bot bot merged commit 701db7e into NVIDIA:release/26.02 Feb 5, 2026
291 of 294 checks passed

Labels

feature request New feature or request non-breaking Introduces a non-breaking change

5 participants