📝 Walkthrough
Adds a large clique-based presolve subsystem, exposes host→device problem synchronization in presolve and solver flows, extends the CSR matrix API with row insertion and constraint-range accessors, updates build lists and include paths, and applies several small presolve/util and docs metadata edits.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
@akifcorduk is this still relevant?
Actionable comments posted: 14
🤖 Fix all issues with AI agents
In @.github/workflows/build.yaml:
- Line 47: Replace all reusable workflow references that use the floating ref
"@main" with a pinned release tag or commit SHA; specifically update the uses
entries like "uses:
rapidsai/shared-workflows/.github/workflows/conda-cpp-build.yaml@main" (and the
other similar uses lines noted) to a stable tag or SHA (e.g., `@vX.Y.Z` or
@<commit-sha>) so the build.yaml and related workflow invocations become
deterministic; ensure you update every occurrence listed in the comment (lines
referencing conda-cpp-build.yaml, wheels-publish.yaml, etc.) to the chosen
pinned ref.
In @.github/workflows/pr.yaml:
- Line 37: Replace all instances of reusable workflow references that end with
`@main` (e.g., the uses line "uses:
rapidsai/shared-workflows/.github/workflows/pr-builder.yaml@main" and the other
14 similar entries) with a specific release tag or commit SHA from the
rapidsai/shared-workflows repo; update each occurrence to the chosen stable tag
or SHA, ensure you pick the intended release for each workflow, and verify CI
runs still pass after changing them.
In @.github/workflows/test.yaml:
- Around line 29-31: Replace all `uses:` references that currently point to
`@main` with a pinned tag or commit SHA to make CI deterministic; specifically
update the reusable workflow invocations for conda-cpp-tests,
conda-python-tests, wheels-test (both occurrences) and custom-job to reference a
stable release tag or commit hash (e.g., replace
`rapidsai/shared-workflows/.github/workflows/conda-cpp-tests.yaml@main` with the
same path at a chosen tag or SHA), and ensure each `uses:` line in the file is
changed accordingly so you deliberately control upgrades of the shared
workflows.
In @.github/workflows/trigger-breaking-change-alert.yaml:
- Line 18: Replace the unpinned reusable workflow reference that currently uses
"rapidsai/shared-workflows/.github/workflows/breaking-change-alert.yaml@main"
with a pinned release tag or commit SHA; update the "uses:" value to point to a
specific tag (e.g., `@vX.Y.Z`) or a commit SHA to ensure immutable behavior and
supply-chain safety, and commit the change so the workflow invocation is no
longer referencing `@main`.
In `@cpp/src/dual_simplex/sparse_matrix.cpp`:
- Around line 597-607: Add a debug assertion in csr_matrix_t<i_t,
f_t>::insert_row to ensure the input vectors have the same length so j and x
remain aligned: verify vars.size() == coeffs.size() (similar to the assert used
in append_column), and fail fast if they differ before mutating row_start, m,
nz_max, j, or x.
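The suggested guard can be sketched in isolation. Note that insert_row_checked and the bare vectors below are stand-ins for the csr_matrix_t members named in the comment, not the actual API:

```cpp
#include <cassert>
#include <vector>

// Sketch: verify the index and coefficient vectors are the same length
// *before* any member state is mutated, so j and x stay aligned.
template <typename i_t, typename f_t>
void insert_row_checked(std::vector<i_t>& row_start,
                        std::vector<i_t>& j,
                        std::vector<f_t>& x,
                        const std::vector<i_t>& vars,
                        const std::vector<f_t>& coeffs)
{
  assert(vars.size() == coeffs.size());  // fail fast, as append_column does
  j.insert(j.end(), vars.begin(), vars.end());
  x.insert(x.end(), coeffs.begin(), coeffs.end());
  row_start.push_back(static_cast<i_t>(j.size()));
}
```

The key point is ordering: the assert fires before row_start, j, or x are touched, so a debug build stops with the matrix still consistent.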
In `@cpp/src/mip/presolve/conflict_graph/clique_table.cu`:
- Around line 486-499: Compute the number of removed constraints before calling
erase instead of using the invalidated iterator new_end; for example, calculate
n_of_removed_constraints by calling std::distance(problem.row_sense.begin(),
new_end) or by counting true entries in removal_marker, then call
problem.row_sense.erase(new_end, problem.row_sense.end()) and log
n_of_removed_constraints; ensure the same approach is used for problem.rhs if
needed and reference the existing variables new_end, problem.row_sense.erase,
removal_marker, and n_of_removed_constraints to locate where to change the code.
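The fix can be sketched on a simplified container (erase_marked_rows is a hypothetical stand-in for the code operating on problem.row_sense; the stateful index in the predicate assumes in-order application, which holds for std::remove_if over a vector in practice):

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <vector>

// Sketch: take the removed count from new_end *before* erase invalidates it.
int erase_marked_rows(std::vector<char>& row_sense,
                      const std::vector<bool>& removal_marker)
{
  int idx = 0;
  auto new_end = std::remove_if(row_sense.begin(), row_sense.end(),
                                [&](char) { return removal_marker[idx++]; });
  // new_end is still valid here: distance(new_end, end) == number removed
  int n_of_removed_constraints =
    static_cast<int>(std::distance(new_end, row_sense.end()));
  row_sense.erase(new_end, row_sense.end());
  return n_of_removed_constraints;
}
```

Alternatively, counting true entries in removal_marker with std::count gives the same number without touching the iterator at all.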
- Around line 142-150: The current binary detection loop sets all_binary by
checking problem.var_types[...] == dual_simplex::variable_type_t::INTEGER which
misses other discrete/binary kinds; update the predicate in the loop (the block
that sets all_binary inside the for over constraint_range using A.j[j]) to treat
any non-continuous variable with bounds [0,1] as binary (i.e., replace the
specific INTEGER check with a test that var_types[...] !=
dual_simplex::variable_type_t::CONTINUOUS and retain the lower==0 && upper==1
checks) so clique detection runs for true binaries.
- Around line 309-329: Negated literals are currently handled incorrectly: when
var_idx >= problem.num_cols we set coeff = -1 but then do rhs_offset -= coeff
which increases RHS; instead apply the negated coefficient to the RHS by adding
coeff (i.e. change rhs_offset -= coeff to rhs_offset += coeff) so that a negated
literal (coeff == -1) reduces rhs by 1; locate the block around rhs_offset,
coeff, var_idx and update the arithmetic before computing rhs and inserting via
A.insert_row / problem.rhs.
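The arithmetic behind the sign fix can be checked in isolation. In a clique inequality sum(literals) <= 1, a negated literal ¬x contributes (1 - x): the constant 1 moves to the right-hand side, lowering it by 1. With the stored coefficient coeff == -1 for a negated literal, that is rhs_offset += coeff (the buggy rhs_offset -= coeff raises the RHS instead). The helper below is illustrative, not the actual insertion code:

```cpp
#include <cassert>

// Sketch: fold the constants from n negated literals into the clique RHS.
double clique_rhs_with_negations(int n_negated_literals)
{
  double rhs_offset = 0.0;
  for (int i = 0; i < n_negated_literals; i++) {
    double coeff = -1.0;   // coefficient stored for a negated literal
    rhs_offset += coeff;   // the fix: += rather than -=
  }
  return 1.0 + rhs_offset; // clique RHS after moving the constants over
}
```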
- Around line 294-299: In clique_table_t::check_adjacency the code mistakenly
treats var indices as clique IDs by calling
var_clique_map_first[var_idx1].count(var_idx2) etc.; fix it by (1)
early-guarding self-adjacency (if var_idx1 == var_idx2 return false or true
according to intended semantics — likely false to avoid self-links), (2) keep
the direct adjacency check using
adj_list_small_cliques[var_idx1].count(var_idx2), and (3) replace the
var_clique_map_* checks with a proper intersection test between clique-ID sets
for the two variables: iterate the smaller of var_clique_map_first[var_idx1] ∪
var_clique_map_addtl[var_idx1] and var_clique_map_first[var_idx2] ∪
var_clique_map_addtl[var_idx2] and test membership in the other’s sets (use
count on the appropriate var_clique_map_* sets); return true if any shared
clique ID is found. Ensure you reference and use var_clique_map_first,
var_clique_map_addtl and adj_list_small_cliques in the corrected logic.
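The corrected membership test can be sketched with simplified types (a single clique-ID set per variable; the real code also merges var_clique_map_addtl and consults adj_list_small_cliques). The point is that adjacency means sharing a clique ID, so we intersect the two ID sets rather than calling count(var_idx2) as if a variable index were a clique ID:

```cpp
#include <cassert>
#include <unordered_set>
#include <vector>

using id_set = std::unordered_set<int>;

// Sketch: intersection test, iterating the smaller set and probing the other.
bool share_a_clique(const id_set& a, const id_set& b)
{
  const id_set& small = a.size() <= b.size() ? a : b;
  const id_set& large = (&small == &a) ? b : a;
  for (int clique_id : small) {
    if (large.count(clique_id)) { return true; }
  }
  return false;
}

bool check_adjacency(int var_idx1, int var_idx2,
                     const std::vector<id_set>& var_clique_map)
{
  if (var_idx1 == var_idx2) { return false; }  // guard self-adjacency
  return share_a_clique(var_clique_map[var_idx1], var_clique_map[var_idx2]);
}
```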
- Around line 31-77: In find_cliques_from_constraint: avoid accessing
kc.entries[k-1] when k==0 and prevent original_clique_start_idx from being
negative/creating an invalid lower_bound range. Fix by changing the first while
to require k>=1 (so kc.entries[k-1] is safe) and after computing
original_clique_start_idx, if it < 0 or equals 0 (clique spans whole constraint)
return/short-circuit so you don't use
kc.entries.begin()+original_clique_start_idx in lower_bound; also validate
original_clique_start_idx < kc.entries.size() before calling lower_bound in the
secondary loop and skip adding addtl_cliques when the search range would be
empty.
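The loop and range guards can be sketched on a toy version of the scan. The pair-sum predicate below is an assumed simplification of the real clique condition over sorted coefficients; only the guard structure (k >= 1 before touching entries[k-1], short-circuit when the clique spans the whole constraint) mirrors the comment:

```cpp
#include <cassert>
#include <vector>

// Sketch: walk k left only while k >= 1, so entries[k-1] is always valid,
// and bail out when the start index would be <= 0.
int find_clique_start(const std::vector<double>& entries, double threshold)
{
  int k = static_cast<int>(entries.size()) - 1;
  while (k >= 1 && entries[k - 1] + entries[k] > threshold) { k--; }
  int original_clique_start_idx = k;
  if (original_clique_start_idx <= 0) {
    return -1;  // clique spans the whole constraint: skip lower_bound entirely
  }
  return original_clique_start_idx;
}
```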
In `@docs/cuopt/source/cuopt-c/quick-start.rst`:
- Around line 52-55: The conda install lines currently set the wrong CUDA
toolkit value (they use the cuOpt version); update both occurrences of the conda
command that set "cuda-version=26.04.*" to the correct CUDA toolkit version
string (e.g., "cuda-version=12.9" or "cuda-version=13.0") so the two install
lines that reference libcuopt=26.04.* correctly specify the CUDA toolkit via the
cuda-version parameter.
In `@docs/cuopt/source/cuopt-python/quick-start.rst`:
- Around line 44-47: Update the two conda install lines that currently use the
incorrect cuda-version=26.04.*: replace the CUDA 13 command to use
cuda-version=13.1 and the CUDA 12 command to use cuda-version=12.9 so the two
install commands are distinct and match the conda environment files; modify the
existing conda install lines (the ones starting with "conda install -c rapidsai
-c conda-forge -c nvidia cuopt=26.04.* cuda-version=...") to use
cuda-version=13.1 for the CUDA 13 section and cuda-version=12.9 for the CUDA 12
section.
In `@docs/cuopt/source/cuopt-server/quick-start.rst`:
- Around line 32-38: The pip install lines for nightly packages are missing
closing single quotes for the package specifiers (e.g.,
'cuopt-server-cu13==26.4.*' and 'cuopt-sh-client==26.04.*' and the CUDA 12
equivalents). Edit the quick-start.rst snippet to add the missing trailing
single quotes to each package string so each package specifier is wrapped in
matching single quotes (fix both the CUDA 13 and CUDA 12 pip install blocks).
- Around line 16-23: The pip install lines for cuopt-sh-client are missing their
trailing single quotes which will break the commands; update the two occurrences
of 'cuopt-sh-client==26.04.* (one under CUDA 13 and one under CUDA 12) to
include the closing single quote so they read 'cuopt-sh-client==26.04.*'
ensuring both pip install command strings are properly quoted.
🧹 Nitpick comments (2)
cpp/CMakeLists.txt (2)
501-532: Consolidate duplicated include-directory calls for solve_MIP. You now set include dirs twice for the same target (the later block repeats the private src path). This is redundant and easy to let drift; merge into a single call.
♻️ Proposed cleanup
-target_include_directories(solve_MIP
-  PRIVATE
-    "${CMAKE_CURRENT_SOURCE_DIR}/src"
-)
501-506: Remove PUBLIC from the solve_MIP include directories.
solve_MIP is an executable, and no other targets link to it. PUBLIC usage requirements on executables have no effect, since executables cannot be linked to by other targets. Use PRIVATE exclusively to keep the configuration clear and consistent with CMake best practices.
Additionally, the $<BUILD_INTERFACE:...> generator expression is unnecessary for executables; it is typically used for exported library targets. Consider simplifying to a standard path.
🔧 Suggested adjustment
 target_include_directories(solve_MIP
   PRIVATE
     "${CMAKE_CURRENT_SOURCE_DIR}/src"
-  PUBLIC
-    "$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>"
+    "${CMAKE_CURRENT_SOURCE_DIR}/include"
 )
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/sparse_matrix.cpp`:
- Around line 578-582: Add a debug-only bounds check at the start of
csr_matrix_t<i_t, f_t>::get_constraint_range to ensure cstr_idx is within [0,
row_start.size()-1] (so that accessing row_start[cstr_idx+1] is safe); use the
project's debug/assert macro (or <cassert> assert) to verify cstr_idx >= 0 and
cstr_idx + 1 < this->row_start.size(), then return the pair as before—this keeps
the fast path in release but catches invalid cstr_idx in debug.
In `@cpp/src/mip/presolve/conflict_graph/clique_table.cu`:
- Around line 268-284: The adjacency set returned by get_adj_set_of_var
currently includes the variable itself; after populating adj_set from
var_clique_map_first/first, var_clique_map_addtl/addtl_cliques/first, and
adj_list_small_cliques, remove the self-entry by erasing var_idx (e.g., call
adj_set.erase(var_idx)) before returning so the variable is excluded from its
own adjacency set.
- Around line 350-395: The extend_clique function can add existing clique
members because extension_candidates is filled from
clique_table.get_adj_set_of_var(smallest_degree_var) without filtering; update
extend_clique to skip any candidate already present in the input clique (or
already in new_clique) before sorting/attempting to add—e.g. build a quick
lookup from clique (or check membership when iterating extension_candidates) and
only push non-members into extension_candidates so new_clique never receives
duplicates from extension_candidates; refer to extend_clique,
extension_candidates, new_clique, and clique_table.get_adj_set_of_var to locate
the change.
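The candidate filter can be sketched as a standalone helper (filter_extension_candidates is a hypothetical name; the real extend_clique also sorts candidates and runs further adjacency checks):

```cpp
#include <cassert>
#include <unordered_set>
#include <vector>

// Sketch: build a membership set for the input clique and keep only
// adjacency-set entries that are not already clique members, so the
// extension step can never re-add an existing vertex.
std::vector<int> filter_extension_candidates(
  const std::vector<int>& clique, const std::unordered_set<int>& adj_set)
{
  std::unordered_set<int> members(clique.begin(), clique.end());
  std::vector<int> extension_candidates;
  for (int v : adj_set) {
    if (!members.count(v)) { extension_candidates.push_back(v); }
  }
  return extension_candidates;
}
```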
- Around line 295-318: In clique_table_t::check_adjacency, guard against
self-adjacency and stop using std::binary_search on cliques (they are sorted by
coefficient, not var index); first add an early return if var_idx1 == var_idx2
to avoid treating a variable as adjacent to itself, then replace the
std::binary_search checks over first[clique_idx] and the sliced range
(clique.begin()+addtl.start_pos_on_clique to clique.end()) with an index-based
or iterator linear search (e.g., std::find or std::any_of) so membership is
checked by equality against var_idx2; keep the existing checks of
addtl_cliques[addtl_idx].vertex_idx and the final adj_list_small_cliques lookup
unchanged.
template <typename i_t, typename f_t>
std::pair<i_t, i_t> csr_matrix_t<i_t, f_t>::get_constraint_range(i_t cstr_idx) const
{
  return std::make_pair(this->row_start[cstr_idx], this->row_start[cstr_idx + 1]);
}
Add a debug bounds assert for cstr_idx.
Without a guard, an invalid cstr_idx can read past row_start. A debug assert keeps the fast path while catching misuse early.
🔧 Suggested change (debug-only assert)
template <typename i_t, typename f_t>
std::pair<i_t, i_t> csr_matrix_t<i_t, f_t>::get_constraint_range(i_t cstr_idx) const
{
+ assert(cstr_idx >= 0);
+ assert(static_cast<size_t>(cstr_idx + 1) < this->row_start.size());
return std::make_pair(this->row_start[cstr_idx], this->row_start[cstr_idx + 1]);
}
🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/sparse_matrix.cpp` around lines 578 - 582, Add a
debug-only bounds check at the start of csr_matrix_t<i_t,
f_t>::get_constraint_range to ensure cstr_idx is within [0, row_start.size()-1]
(so that accessing row_start[cstr_idx+1] is safe); use the project's
debug/assert macro (or <cassert> assert) to verify cstr_idx >= 0 and cstr_idx +
1 < this->row_start.size(), then return the pair as before—this keeps the fast
path in release but catches invalid cstr_idx in debug.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@docs/cuopt/source/versions1.json`:
- Around line 1-11: The versions1.json entry for version "26.02.00" has a broken
external URL; either remove this "26.02.00" object from versions1.json or update
its "url" value to the correct published path (or a relative path like
"../26.02.00/" if that is the intended location) and don't mark it as present
until the documentation is actually published; locate the JSON object with
"version": "26.02.00" and either delete that object or replace the "url" string
with the verified/correct URL before committing.
🧹 Nitpick comments (2)
cpp/src/mip/presolve/conflict_graph/clique_table.cuh (2)
34-40: Type mismatch in comparison operator. The second operator< uses a hardcoded double type instead of the template parameter f_t. If f_t is float, this could cause implicit conversions and potential precision inconsistencies during binary search operations (e.g., std::lower_bound).
💡 Consider using f_t for type consistency
 template <typename i_t, typename f_t>
 struct entry_t {
   i_t col;
   f_t val;
   bool operator<(const entry_t& other) const { return val < other.val; }
-  bool operator<(double other) const { return val < other; }
+  bool operator<(f_t other) const { return val < other; }
 };
99-187: Consider moving design notes to separate documentation. This ~90-line block of commented pseudo-code describes future algorithmic extensions. While valuable for design context, embedding this much documentation in a header can clutter the API surface and complicate maintenance.
Consider:
- Moving it to a docs/ or design/ markdown file
- Referencing it via a brief comment in the header
- Keeping only a condensed summary with a link to full documentation
This is a minor stylistic suggestion; the current approach works if the team prefers collocated design notes.
 [
   {
-    "version": "26.02.00",
-    "url": "https://docs.nvidia.com/cuopt/user-guide/26.02.00/",
+    "version": "26.04.00",
+    "url": "../26.04.00/",
     "name": "latest",
     "preferred": true
   },
+  {
+    "version": "26.02.00",
+    "url": "https://docs.nvidia.com/cuopt/user-guide/26.02.00/"
+  },
🧩 Analysis chain
🏁 Script executed:
cat docs/cuopt/source/versions1.json
🏁 Script executed:
curl -s -o /dev/null -w "%{http_code}" "https://docs.nvidia.com/cuopt/user-guide/26.02.00/"
Broken documentation link — do not merge.
The URL for version 26.02.00 (https://docs.nvidia.com/cuopt/user-guide/26.02.00/) returns HTTP 404 and is not accessible. This entry cannot be added to the version file until the documentation is published at the correct URL.
Verify:
- Is the 26.02.00 documentation published to a different URL?
- Should this version be added at all, or is it premature?
- Are the other historical version URLs accessible?
🤖 Prompt for AI Agents
In `@docs/cuopt/source/versions1.json` around lines 1 - 11, The versions1.json
entry for version "26.02.00" has a broken external URL; either remove this
"26.02.00" object from versions1.json or update its "url" value to the correct
published path (or a relative path like "../26.02.00/" if that is the intended
location) and don't mark it as present until the documentation is actually
published; locate the JSON object with "version": "26.02.00" and either delete
that object or replace the "url" string with the verified/correct URL before
committing.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@cpp/src/mip/presolve/conflict_graph/clique_table.cu`:
- Around line 402-425: The code currently calls exit(0) when a new_clique
contains a variable and its complement; instead, in the function that
builds/extends cliques (the block using new_clique, clique_table and returning
bool), remove the process-terminating exit(0) and return false to signal the
validation failure to the caller; keep the CUOPT_LOG_DEBUG(...) message (or
adjust it) before returning so the caller can continue and handle the false
result accordingly.
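The control-flow change can be sketched as follows (validate_clique is a hypothetical stand-in for the clique-building block; the complement encoding as var_idx + num_cols follows the convention described in the earlier comments):

```cpp
#include <cassert>
#include <cstdio>
#include <vector>

// Sketch: report an invalid clique (a variable together with its complement)
// to the caller via a false return instead of terminating the process.
bool validate_clique(const std::vector<int>& new_clique, int num_cols)
{
  for (int v : new_clique) {
    int base_v = v >= num_cols ? v - num_cols : v;
    for (int w : new_clique) {
      int base_w = w >= num_cols ? w - num_cols : w;
      if (v != w && base_v == base_w) {
        std::fprintf(stderr, "clique contains a variable and its complement\n");
        return false;  // the fix: signal failure rather than exit(0)
      }
    }
  }
  return true;
}
```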
🧹 Nitpick comments (1)
cpp/src/mip/presolve/conflict_graph/clique_table.cu (1)
18-18: Consider wiring DEBUG_KNAPSACK_CONSTRAINTS to a build flag.
Hard-coding it to 1 keeps debug code compiled in all builds; using an existing config macro keeps release builds lean.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@cpp/src/mip/local_search/local_search.cu`:
- Around line 154-163: The current resize logic for in_fj.cstr_weights can UB
when shrinking because thrust::uninitialized_fill uses begin() + old_size which
may be past end; change the code to only call uninitialized_fill when the new
size is larger than old_size (compute new_size =
solution.problem_ptr->n_constraints and if new_size > old_size then resize and
uninitialized_fill from begin()+old_size to end()), and ensure all device
operations and stream usage (resize with solution.handle_ptr->get_stream(),
thrust::uninitialized_fill with solution.handle_ptr->get_thrust_policy()) are
followed by CUDA error checks (use CUDA_CHECK or the project's equivalent) so
kernel/memory errors are surfaced.
🧹 Nitpick comments (3)
cpp/src/mip/presolve/conflict_graph/clique_table.cuh (1)
51-56: Unused template parameter f_t in addtl_clique_t. The struct only uses i_t for its members. Consider removing the unused f_t parameter to simplify the template and avoid confusion.
-template <typename i_t, typename f_t>
+template <typename i_t>
 struct addtl_clique_t {
   i_t vertex_idx;
   i_t clique_idx;
   i_t start_pos_on_clique;
 };
cpp/src/mip/presolve/conflict_graph/clique_table.cu (2)
18-18: Debug macro should be disabled by default.
DEBUG_KNAPSACK_CONSTRAINTS is set to 1, enabling verbose console output unconditionally. This should be 0 for production builds, or controlled by a build configuration flag.
-#define DEBUG_KNAPSACK_CONSTRAINTS 1
+#define DEBUG_KNAPSACK_CONSTRAINTS 0
222-242: Consider using remove-erase idiom for O(n) complexity. The current erase-in-loop pattern at lines 238-239 is O(n²) due to element shifting on each erase. For large addtl_cliques vectors, consider using the remove-erase idiom, similar to the pattern used later for first cliques (lines 249-259).
♻️ Suggested refactor
+  std::vector<bool> addtl_to_delete(clique_table.addtl_cliques.size(), false);
   for (size_t addtl_c = 0; addtl_c < clique_table.addtl_cliques.size(); addtl_c++) {
     const auto& addtl_clique = clique_table.addtl_cliques[addtl_c];
     i_t size_of_clique =
       clique_table.first[addtl_clique.clique_idx].size() - addtl_clique.start_pos_on_clique + 1;
     if (size_of_clique < clique_table.min_clique_size) {
       // ... (adjacency list updates remain the same)
-      clique_table.addtl_cliques.erase(clique_table.addtl_cliques.begin() + addtl_c);
-      addtl_c--;
+      addtl_to_delete[addtl_c] = true;
       num_removed_addtl++;
     }
   }
+  // Remove marked addtl_cliques
+  size_t write_idx = 0;
+  for (size_t i = 0; i < clique_table.addtl_cliques.size(); i++) {
+    if (!addtl_to_delete[i]) {
+      clique_table.addtl_cliques[write_idx++] = clique_table.addtl_cliques[i];
+    }
+  }
+  clique_table.addtl_cliques.resize(write_idx);
// in case this is the first time run, resize
if (in_fj.cstr_weights.size() != (size_t)solution.problem_ptr->n_constraints) {
  i_t old_size = in_fj.cstr_weights.size();
  in_fj.cstr_weights.resize(solution.problem_ptr->n_constraints,
                            solution.handle_ptr->get_stream());
  thrust::uninitialized_fill(solution.handle_ptr->get_thrust_policy(),
                             in_fj.cstr_weights.begin() + old_size,
                             in_fj.cstr_weights.end(),
                             1.);
}
Guard resize for shrinking and add CUDA error checks.
If cstr_weights shrinks, begin() + old_size is past the new end, which is UB. Also, the new device ops should surface CUDA errors explicitly. Suggested fix:
Proposed fix
- if (in_fj.cstr_weights.size() != (size_t)solution.problem_ptr->n_constraints) {
- i_t old_size = in_fj.cstr_weights.size();
- in_fj.cstr_weights.resize(solution.problem_ptr->n_constraints,
- solution.handle_ptr->get_stream());
- thrust::uninitialized_fill(solution.handle_ptr->get_thrust_policy(),
- in_fj.cstr_weights.begin() + old_size,
- in_fj.cstr_weights.end(),
- 1.);
- }
+ auto new_size = static_cast<size_t>(solution.problem_ptr->n_constraints);
+ auto old_size = in_fj.cstr_weights.size();
+ if (old_size != new_size) {
+ in_fj.cstr_weights.resize(new_size, solution.handle_ptr->get_stream());
+ if (old_size < new_size) {
+ thrust::uninitialized_fill(solution.handle_ptr->get_thrust_policy(),
+ in_fj.cstr_weights.begin() + old_size,
+ in_fj.cstr_weights.end(),
+ 1.);
+ }
+ // Use your project’s CUDA_CHECK / equivalent macro here
+ CUOPT_CUDA_CHECK(cudaGetLastError());
+  }
As per coding guidelines: Every CUDA kernel launch and memory operation must have error checking with CUDA_CHECK or equivalent verification.
🤖 Prompt for AI Agents
In `@cpp/src/mip/local_search/local_search.cu` around lines 154 - 163, The current
resize logic for in_fj.cstr_weights can UB when shrinking because
thrust::uninitialized_fill uses begin() + old_size which may be past end; change
the code to only call uninitialized_fill when the new size is larger than
old_size (compute new_size = solution.problem_ptr->n_constraints and if new_size
> old_size then resize and uninitialized_fill from begin()+old_size to end()),
and ensure all device operations and stream usage (resize with
solution.handle_ptr->get_stream(), thrust::uninitialized_fill with
solution.handle_ptr->get_thrust_policy()) are followed by CUDA error checks (use
CUDA_CHECK or the project's equivalent) so kernel/memory errors are surfaced.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@cpp/src/mip/presolve/conflict_graph/clique_table.cu`:
- Around line 129-200: The synthetic row index assignment in
fill_knapsack_constraints (knapsack_constraint2.cstr_idx = A.m +
added_constraints) can collide with real rows added later (e.g., by
insert_clique_into_problem) and cause remove_dominated_cliques to delete the
wrong row; change the logic to mark derived/virtual constraints instead of
inventing a real index (set cstr_idx to a sentinel like -1 or add a bool flag
e.g., is_virtual on knapsack_constraint_t), ensure
knapsack_constraints.push_back behavior preserves that flag for both
knapsack_constraint and knapsack_constraint2, and update
remove_dominated_cliques and any code that removes rows to skip virtual
constraints (and validate index mappings when converting between
original/presolved/folded problems).
- Around line 22-29: The file clique_table.cu uses std::cout in its debug print
routines (the print_* functions) guarded by DEBUG_KNAPSACK_CONSTRAINTS but lacks
the <iostream> header; add the include for <iostream> at the top of
cpp/src/mip/presolve/conflict_graph/clique_table.cu so the debug prints compile,
and ensure the same include is added wherever the same print_* debug code
appears (the other block around the print_* implementations).
- Around line 206-274: remove_small_cliques can leave addtl_cliques that
reference a base clique marked for deletion (to_delete) which causes
index_mapping lookup to be -1; fix by scanning clique_table.addtl_cliques after
you mark to_delete (and before building index_mapping) and remove any
addtl_cliques whose addtl_clique.clique_idx refers to a base clique with
to_delete==true (erase those entries and increment num_removed_addtl
accordingly), so that only addtl_cliques with surviving base cliques remain when
you build index_mapping and renumber them; reference functions/fields:
remove_small_cliques, clique_table.first, clique_table.addtl_cliques,
addtl_clique.clique_idx, to_delete, index_mapping, min_clique_size.
#include <algorithm>
#include <dual_simplex/sparse_matrix.hpp>
#include <limits>
#include <mip/mip_constants.hpp>
#include <mip/utils.cuh>
#include <utilities/logger.hpp>
#include <utilities/macros.cuh>
#include <utilities/timer.hpp>
Add <iostream> for DEBUG_KNAPSACK_CONSTRAINTS prints.
print_* uses std::cout, but the header isn’t included here, which can break compilation.
🔧 Proposed fix
 #include "clique_table.cuh"
 #include <algorithm>
 #include <dual_simplex/sparse_matrix.hpp>
+#include <iostream>
 #include <limits>
 #include <mip/mip_constants.hpp>
Also applies to: 694-731
🤖 Prompt for AI Agents
In `@cpp/src/mip/presolve/conflict_graph/clique_table.cu` around lines 22 - 29,
The file clique_table.cu uses std::cout in its debug print routines (the print_*
functions) guarded by DEBUG_KNAPSACK_CONSTRAINTS but lacks the <iostream>
header; add the include for <iostream> at the top of
cpp/src/mip/presolve/conflict_graph/clique_table.cu so the debug prints compile,
and ensure the same include is added wherever the same print_* debug code
appears (the other block around the print_* implementations).
// convert all the knapsack constraints
// if a binary variable has a negative coefficient, put its negation in the constraint
template <typename i_t, typename f_t>
void fill_knapsack_constraints(const dual_simplex::user_problem_t<i_t, f_t>& problem,
                               std::vector<knapsack_constraint_t<i_t, f_t>>& knapsack_constraints,
                               dual_simplex::csr_matrix_t<i_t, f_t>& A)
{
  // we might add additional constraints for the equality constraints
  i_t added_constraints = 0;
  // in user problems, ranged constraint ids monotonically increase.
  // when a row sense is "E", check if it is a ranged constraint and treat accordingly
  i_t ranged_constraint_counter = 0;
  for (i_t i = 0; i < A.m; i++) {
    std::pair<i_t, i_t> constraint_range = A.get_constraint_range(i);
    if (constraint_range.second - constraint_range.first < 2) {
      CUOPT_LOG_DEBUG("Constraint %d has less than 2 variables, skipping", i);
      continue;
    }
    bool all_binary = true;
    // check if all variables are binary (any non-continuous with bounds [0,1])
    for (i_t j = constraint_range.first; j < constraint_range.second; j++) {
      if (problem.var_types[A.j[j]] == dual_simplex::variable_type_t::CONTINUOUS ||
          problem.lower[A.j[j]] != 0 || problem.upper[A.j[j]] != 1) {
        all_binary = false;
        break;
      }
    }
    // if all variables are binary, convert the constraint to a knapsack constraint
    if (!all_binary) { continue; }
    knapsack_constraint_t<i_t, f_t> knapsack_constraint;
    knapsack_constraint.cstr_idx = i;
    if (problem.row_sense[i] == 'L') {
      knapsack_constraint.rhs = problem.rhs[i];
      for (i_t j = constraint_range.first; j < constraint_range.second; j++) {
        knapsack_constraint.entries.push_back({A.j[j], A.x[j]});
      }
    } else if (problem.row_sense[i] == 'G') {
      knapsack_constraint.rhs = -problem.rhs[i];
      for (i_t j = constraint_range.first; j < constraint_range.second; j++) {
        knapsack_constraint.entries.push_back({A.j[j], -A.x[j]});
      }
    }
    // equality part
    else {
      bool is_set_partitioning = problem.rhs[i] == 1.;
      bool ranged_constraint = ranged_constraint_counter < problem.num_range_rows &&
                               problem.range_rows[ranged_constraint_counter] == i;
      // less than part
      knapsack_constraint.rhs = problem.rhs[i];
      if (ranged_constraint) {
        knapsack_constraint.rhs += problem.range_value[ranged_constraint_counter];
        is_set_partitioning =
          problem.range_value[ranged_constraint_counter] == 0. && problem.rhs[i] == 1.;
        ranged_constraint_counter++;
      }
      for (i_t j = constraint_range.first; j < constraint_range.second; j++) {
        knapsack_constraint.entries.push_back({A.j[j], A.x[j]});
      }
      // greater than part: convert it to less than
      knapsack_constraint_t<i_t, f_t> knapsack_constraint2;
      knapsack_constraint2.cstr_idx = A.m + added_constraints++;
      knapsack_constraint2.rhs = -problem.rhs[i];
      for (i_t j = constraint_range.first; j < constraint_range.second; j++) {
        knapsack_constraint2.entries.push_back({A.j[j], -A.x[j]});
      }
      knapsack_constraint.is_set_partitioning  = is_set_partitioning;
      knapsack_constraint2.is_set_partitioning = is_set_partitioning;
      knapsack_constraints.push_back(knapsack_constraint2);
    }
    knapsack_constraints.push_back(knapsack_constraint);
  }
Avoid virtual row indices colliding with newly inserted rows.
knapsack_constraint2.cstr_idx = A.m + added_constraints creates synthetic indices that can overlap with rows later appended by insert_clique_into_problem, so remove_dominated_cliques can delete the wrong row. Mark derived constraints as virtual (e.g., cstr_idx = -1 or a dedicated flag) and skip row removals for them.
🔧 Proposed fix (sentinel index + guard)
// equality part
...
- knapsack_constraint2.cstr_idx = A.m + added_constraints++;
+ knapsack_constraint2.cstr_idx = -1; // virtual constraint (no real row)
knapsack_constraint2.rhs = -problem.rhs[i];
for (i_t j = constraint_range.first; j < constraint_range.second; j++) {
knapsack_constraint2.entries.push_back({A.j[j], -A.x[j]});
      }

 for (size_t idx = start; idx < end; idx++) {
const auto& sp = sp_sigs[idx];
+ if (sp.row_idx < 0) { continue; } // skip virtual constraints
if (sp.row_idx >= 0 && sp.row_idx < static_cast<i_t>(removal_marker.size()) &&
removal_marker[sp.row_idx]) {
continue;
}Also applies to: 512-692
🤖 Prompt for AI Agents
In `@cpp/src/mip/presolve/conflict_graph/clique_table.cu` around lines 129 - 200,
The synthetic row index assignment in fill_knapsack_constraints
(knapsack_constraint2.cstr_idx = A.m + added_constraints) can collide with real
rows added later (e.g., by insert_clique_into_problem) and cause
remove_dominated_cliques to delete the wrong row; change the logic to mark
derived/virtual constraints instead of inventing a real index (set cstr_idx to a
sentinel like -1 or add a bool flag e.g., is_virtual on knapsack_constraint_t),
ensure knapsack_constraints.push_back behavior preserves that flag for both
knapsack_constraint and knapsack_constraint2, and update
remove_dominated_cliques and any code that removes rows to skip virtual
constraints (and validate index mappings when converting between
original/presolved/folded problems).
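The sentinel pattern suggested above can be shown in isolation. In this sketch, `constraint_t` and `removable_rows` are illustrative names, not the real types: constraints carrying `cstr_idx = -1` own no matrix row, so any row-removal pass must skip them.

```cpp
#include <cassert>
#include <vector>

// Illustrative constraint: cstr_idx == -1 marks a virtual constraint that has
// no backing row in the constraint matrix.
struct constraint_t {
  int cstr_idx;
};

// Collect real rows eligible for removal; virtual constraints are skipped, so
// a synthetic index can never alias a row inserted later.
std::vector<int> removable_rows(const std::vector<constraint_t>& cons,
                                const std::vector<bool>& already_removed)
{
  std::vector<int> rows;
  for (const auto& c : cons) {
    if (c.cstr_idx < 0) { continue; }  // virtual: nothing to delete
    if (c.cstr_idx < static_cast<int>(already_removed.size()) &&
        already_removed[c.cstr_idx]) {
      continue;  // this row was already marked for removal
    }
    rows.push_back(c.cstr_idx);
  }
  return rows;
}
```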
template <typename i_t, typename f_t>
void remove_small_cliques(clique_table_t<i_t, f_t>& clique_table)
{
  i_t num_removed_first = 0;
  i_t num_removed_addtl = 0;
  std::vector<bool> to_delete(clique_table.first.size(), false);
  // if a clique is small, we remove it from the cliques and add it to adjlist
  for (size_t clique_idx = 0; clique_idx < clique_table.first.size(); clique_idx++) {
    const auto& clique = clique_table.first[clique_idx];
    if (clique.size() <= (size_t)clique_table.min_clique_size) {
      for (size_t i = 0; i < clique.size(); i++) {
        for (size_t j = 0; j < clique.size(); j++) {
          if (i == j) { continue; }
          clique_table.adj_list_small_cliques[clique[i]].insert(clique[j]);
        }
      }
      num_removed_first++;
      to_delete[clique_idx] = true;
    }
  }
  for (size_t addtl_c = 0; addtl_c < clique_table.addtl_cliques.size(); addtl_c++) {
    const auto& addtl_clique = clique_table.addtl_cliques[addtl_c];
    i_t size_of_clique =
      clique_table.first[addtl_clique.clique_idx].size() - addtl_clique.start_pos_on_clique + 1;
    if (size_of_clique < clique_table.min_clique_size) {
      // the items from first clique are already added to the adjlist
      // only add the items that are coming from the new var in the additional clique
      for (size_t i = addtl_clique.start_pos_on_clique;
           i < clique_table.first[addtl_clique.clique_idx].size();
           i++) {
        // insert conflicts both way
        clique_table.adj_list_small_cliques[clique_table.first[addtl_clique.clique_idx][i]].insert(
          addtl_clique.vertex_idx);
        clique_table.adj_list_small_cliques[addtl_clique.vertex_idx].insert(
          clique_table.first[addtl_clique.clique_idx][i]);
      }
      clique_table.addtl_cliques.erase(clique_table.addtl_cliques.begin() + addtl_c);
      addtl_c--;
      num_removed_addtl++;
    }
  }
  CUOPT_LOG_DEBUG("Number of removed cliques from first: %d, additional: %d",
                  num_removed_first,
                  num_removed_addtl);
  size_t i       = 0;
  size_t old_idx = 0;
  std::vector<i_t> index_mapping(clique_table.first.size(), -1);
  auto it = std::remove_if(clique_table.first.begin(), clique_table.first.end(), [&](auto& clique) {
    bool res = false;
    if (to_delete[old_idx]) {
      res = true;
    } else {
      index_mapping[old_idx] = i++;
    }
    old_idx++;
    return res;
  });
  clique_table.first.erase(it, clique_table.first.end());
  // renumber the reference indices in the additional cliques, since we removed some cliques
  for (size_t addtl_c = 0; addtl_c < clique_table.addtl_cliques.size(); addtl_c++) {
    i_t new_clique_idx = index_mapping[clique_table.addtl_cliques[addtl_c].clique_idx];
    cuopt_assert(new_clique_idx != -1, "New clique index is -1");
    clique_table.addtl_cliques[addtl_c].clique_idx = new_clique_idx;
    cuopt_assert(clique_table.first[new_clique_idx].size() -
                     clique_table.addtl_cliques[addtl_c].start_pos_on_clique + 1 >=
                   (size_t)clique_table.min_clique_size,
                 "A small clique remained after removing small cliques");
  }
}
Remove addtl cliques whose base clique was deleted.
Base cliques are removed with <= min_clique_size, but addtl cliques are only removed with < min_clique_size, so an addtl clique can survive while its base clique is deleted, triggering new_clique_idx == -1.
🔧 Proposed fix

  for (size_t addtl_c = 0; addtl_c < clique_table.addtl_cliques.size(); addtl_c++) {
    const auto& addtl_clique = clique_table.addtl_cliques[addtl_c];
+   if (to_delete[addtl_clique.clique_idx]) {
+     clique_table.addtl_cliques.erase(clique_table.addtl_cliques.begin() + addtl_c);
+     addtl_c--;
+     num_removed_addtl++;
+     continue;
+   }
    i_t size_of_clique =
      clique_table.first[addtl_clique.clique_idx].size() - addtl_clique.start_pos_on_clique + 1;
    if (size_of_clique < clique_table.min_clique_size) {

🤖 Prompt for AI Agents
In `@cpp/src/mip/presolve/conflict_graph/clique_table.cu` around lines 206-274,
remove_small_cliques can leave addtl_cliques that reference a base clique marked
for deletion (to_delete) which causes index_mapping lookup to be -1; fix by
scanning clique_table.addtl_cliques after you mark to_delete (and before
building index_mapping) and remove any addtl_cliques whose
addtl_clique.clique_idx refers to a base clique with to_delete==true (erase
those entries and increment num_removed_addtl accordingly), so that only
addtl_cliques with surviving base cliques remain when you build index_mapping
and renumber them; reference functions/fields: remove_small_cliques,
clique_table.first, clique_table.addtl_cliques, addtl_clique.clique_idx,
to_delete, index_mapping, min_clique_size.
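The `std::remove_if`-plus-`index_mapping` idiom that `remove_small_cliques` relies on can be reduced to a small self-contained sketch. `compact_with_mapping` is a hypothetical name; the real code compacts `clique_table.first` the same way and then renumbers `addtl_cliques` through the returned mapping:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Erase flagged entries in place while recording, for each survivor, its new
// position (-1 for deleted entries); callers then renumber any stored
// references through the mapping. Like the original code, this assumes the
// remove_if predicate is invoked once per element, in order.
std::vector<int> compact_with_mapping(std::vector<int>& items,
                                      const std::vector<bool>& to_delete)
{
  std::vector<int> index_mapping(items.size(), -1);
  size_t new_idx = 0;
  size_t old_idx = 0;
  auto it = std::remove_if(items.begin(), items.end(), [&](const int&) {
    bool drop = to_delete[old_idx];
    if (!drop) { index_mapping[old_idx] = static_cast<int>(new_idx++); }
    old_idx++;
    return drop;
  });
  items.erase(it, items.end());
  return index_mapping;
}
```

After compaction, any structure holding an index into the old vector looks up its new index in the mapping; a `-1` result means the referenced entry was deleted and the holder must be dropped as well, which is exactly the dangling-reference case the review comment flags.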
This PR implements the first part of the paper: Preprocessing and Cutting Planes with Conflict Graphs.
This part contains only the preprocessing; clique cuts will follow in a separate PR.
The data structures and query functions are implemented and will be used as a basis for clique cuts and usage in heuristics.
Pending: benchmark results/comparison
Summary by CodeRabbit
New Features
Improvements
Chores