CWE-407: 12 O(N²) algorithmic complexity fixes — HashSet/HashMap shadow indexes#1237
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (3)
🚧 Files skipped from review as they are similar to previous changes (1)
Walkthrough: Refactors numerous membership/membership-check paths from linear Vector scans to constant-time HashSet/HashMap lookups.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
…ow indexes

Fixes 12 algorithmic complexity defects (CWE-407) across Redot Engine. All defects use `Vector::has()` or `Vector::find()` as membership/lookup tests inside hot loops, giving O(N²) cost. Each fix adds a HashSet or HashMap shadow index for O(1) lookups while preserving the original Vector.

- redot-0001 (scene/main/scene_tree.cpp): `Group::nodes.has(p_node)` — O(n) per `add_to_group()`, fires every frame. Fix: `HashSet<Node*> node_set` shadow in `struct Group`. 1,000x at n=2,000.
- redot-0002 (modules/godot_physics_2d/godot_body_2d.h): `areas.find(AreaCMP(p_area))` — O(n) per physics tick in 2D area overlap. Fix: `HashMap<RID,int> area_index` shadow. 50x at 500 bodies x 200 areas.
- redot-0003 (modules/godot_physics_3d/godot_body_3d.h): Same as redot-0002, 3D physics variant. 50x speedup.
- redot-0004 (modules/godot_physics_3d/godot_soft_body_3d.cpp): `node_links[ia].has(ib)` — O(n) per link in bending constraint generation. Fix: `LocalVector<HashSet<int>> node_link_set` shadow per node.
- redot-0005 (core/math/a_star.cpp + a_star_grid_2d.cpp): `open_list.find(e)` — O(N) per neighbor relaxation in A* decrease-key path. Fix: `open_index` field on `Point`, maintained on push/pop. O(N²) -> O(N log N). Affects AStar3D, AStar2D, AStarGrid2D.
- redot-0006 (scene/3d/skeleton_3d.cpp): `child_bones.has(i)` — O(B) per bone in `_update_process_order()`. Fix: `child_bones` changed from `Vector<int>` to `HashSet<int>`.
- redot-0007 (editor/import/3d/post_import_plugin_skeleton_rest_fixer.cpp): Two `Vector<int>`-as-sets, `bones_to_process` and `keep_bone_rest`, both call `.has()` inside track loops. Fix: both converted to `HashSet<int>`.
- redot-0008 (modules/gltf/gltf_state.h + gltf_document.cpp + gltf_state.cpp): `extensions_used` — `Vector<String>.has()`, O(E) per node/animation/track. Fix: `HashSet<String>`; `insert()` is idempotent O(1). All callers updated.
- redot-0009 (scene/resources/font.cpp): `Font::_is_cyclic()` — no visited set, O(F^D) on diamond fallback graphs. Fix: `HashSet<const Font*> visited` threaded via `_is_cyclic_internal()`.
- redot-0010 (scene/resources/font.cpp): `Font::_update_rids_fb()` — no visited set, O(N²) diamond re-traversal. Fix: `HashSet<const Font*>* r_visited` param. FontVariation + SystemFont fixed.
- redot-0011 (scene/gui/graph_edit_arranger.cpp): ORDER and PRED macros use `Vector::find()` — O(N) per connection. Fix: pre-build `HashMap<StringName,int> node_order` and `HashMap<StringName,StringName> predecessor_map`. O(C*N) -> O(N+C).
- redot-0012 (scene/3d/spring_bone_simulator_3d.cpp): `collisions.has(id)` and `collisions.find(id)` — O(N) inside S*C loops per tick. Fix: `HashSet<ObjectID> collision_set` + `HashMap<ObjectID,int> collision_index_map`. O(S*C*N) -> O(N + S*C) per tick.

All fixes use Redot's own `HashSet<T>` and `HashMap<K,V>` from core/templates/. No new dependencies. Vectors preserved where ordered iteration is needed.

Discovered by undefect.com — CWE-407 sedimentary defect campaign. Contact: security@undefect.com
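The shadow-index pattern shared by these fixes can be sketched with standard containers standing in for Redot's `HashSet` (a minimal illustration; `GroupShadow` and its members are invented names, not the engine's API):

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Minimal sketch of the shadow-index pattern: the vector keeps the
// ordered iteration the engine relies on, while the unordered_set
// answers membership queries in O(1) instead of scanning the vector.
struct GroupShadow {
    std::vector<std::string> nodes;           // preserved for ordered iteration
    std::unordered_set<std::string> node_set; // shadow index for membership

    bool add(const std::string &node) {
        if (node_set.count(node)) {
            return false; // O(1) duplicate check replaces Vector::has().
        }
        nodes.push_back(node);
        node_set.insert(node);
        return true;
    }

    bool has(const std::string &node) const {
        return node_set.count(node) != 0;
    }

    void remove(const std::string &node) {
        if (!node_set.erase(node)) {
            return; // not present
        }
        // Erase from the vector too, keeping both containers in sync.
        for (size_t i = 0; i < nodes.size(); ++i) {
            if (nodes[i] == node) {
                nodes.erase(nodes.begin() + i);
                break;
            }
        }
    }
};
```

The key design rule is that every mutator updates both containers, so the set is always a faithful index of the vector.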
1c93eb8 to 3882203
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
scene/3d/spring_bone_simulator_3d.cpp (1)
1480-1490: ⚠️ Potential issue | 🟠 Major

Deny-list filtering is order-sensitive here.
`masks` is consumed as if its indices are already sorted by child-collision order, but these indices are appended in `settings[i]->exclude_collisions` order. If the exclude list is `[C, A]` while the child order is `[A, B, C]`, `A` is never skipped. Prefer tracking excluded `ObjectID`s in a `HashSet` and filtering `collisions` against that set.

💡 Possible fix
```diff
-	LocalVector<uint32_t> masks;
+	HashSet<ObjectID> excluded_collision_set;
 	Vector<NodePath> &setting_exclude_collisions = settings[i]->exclude_collisions;
 	for (int j = 0; j < setting_exclude_collisions.size(); j++) {
 		Node *n = get_node_or_null(setting_exclude_collisions[j]);
 		if (!n) {
 			continue;
 		}
 		ObjectID id = n->get_instance_id();
-		// CWE-407 fix (redot-0012): O(1) via HashMap instead of O(N) LocalVector::find.
-		const int *find_ptr = collision_index_map.getptr(id);
-		if (!find_ptr) {
+		if (!collision_set.has(id)) {
 			setting_exclude_collisions.write[j] = NodePath(); // Clear path if not found.
 		} else {
-			masks.push_back((uint32_t)*find_ptr);
+			excluded_collision_set.insert(id);
 		}
 	}
-	uint32_t mask_index = 0;
 	for (uint32_t j = 0; j < collisions.size(); j++) {
-		if (mask_index < masks.size() && j == masks[mask_index]) {
-			mask_index++;
+		if (excluded_collision_set.has(collisions[j])) {
 			continue;
 		}
 		cache.push_back(collisions[j]);
 	}
```

If you take this route, `collision_index_map` above can be removed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scene/3d/spring_bone_simulator_3d.cpp` around lines 1480 - 1490, The current deny-list logic builds a sorted-by-exclude-list "masks" vector and then assumes it aligns with the child-order `collisions` vector (using `masks`, `mask_index`, and `collision_index_map`), which is order-sensitive and incorrect; instead, build an unordered set (HashSet) of excluded ObjectIDs from `setting_exclude_collisions` (the entries written into `setting_exclude_collisions.write[...]`), then iterate the existing `collisions` vector and skip any collision whose ObjectID is present in that set; remove use of `masks`, `mask_index`, and the `collision_index_map` lookup logic and replace the conditional that checks `j == masks[mask_index]` with a simple membership test against the HashSet when filtering `collisions`.
🧹 Nitpick comments (5)
scene/gui/graph_edit_arranger.cpp (2)
525-531: Avoid rebuilding `predecessor_map` on each recursive `_place_block()` invocation.

`predecessor_map` is recomputed per call (Line 525), including recursive calls (Line 546). In dense/layered graphs this adds repeated O(N) overhead and can keep the overall path near quadratic.

⚙️ Refactor sketch
```diff
-void GraphEditArranger::_place_block(..., HashMap<StringName, Vector2> &r_node_positions) {
+void GraphEditArranger::_place_block(..., HashMap<StringName, Vector2> &r_node_positions, const HashMap<StringName, StringName> &r_predecessor_map) {
 	...
-	HashMap<StringName, StringName> predecessor_map;
-	for (unsigned int i = 0; i < r_layers.size(); i++) {
-		for (int j = 1; j < r_layers[i].size(); j++) {
-			predecessor_map[r_layers[i][j]] = r_layers[i][j - 1];
-		}
-	}
 	...
-	PRED(w, predecessor_map);
+	PRED(w, r_predecessor_map);
 	...
-	_place_block(u, ..., r_node_positions);
+	_place_block(u, ..., r_node_positions, r_predecessor_map);
```

Build `r_predecessor_map` once in the caller and pass it through the recursion.

Also applies to: 546-546
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scene/gui/graph_edit_arranger.cpp` around lines 525 - 531, predecessor_map is being rebuilt inside _place_block() on each recursive call causing repeated O(N) work; instead create the map once in the initial caller (e.g. build r_predecessor_map from r_layers) and add it as a parameter to _place_block() (and its recursive calls) so subsequent invocations reuse the same HashMap<StringName,StringName> rather than recomputing predecessor_map from r_layers each time; update function signature and all call sites to accept the prebuilt map and remove the local reconstruction.
442-449: Build `node_order` once per arrange pass, not per `_calculate_threshold()` call.
`node_order` is reconstructed every call (Line 442), but `_calculate_threshold()` is invoked repeatedly from `_place_block()` (Line 547, Line 568). This reintroduces avoidable O(N) work in a hot path and dilutes the intended complexity fix.

⚙️ Refactor sketch
```diff
-float GraphEditArranger::_calculate_threshold(..., const HashMap<StringName, Vector2> &r_node_positions) {
+float GraphEditArranger::_calculate_threshold(..., const HashMap<StringName, Vector2> &r_node_positions, const HashMap<StringName, int> &r_node_order) {
 	...
-	HashMap<StringName, int> node_order;
-	for (unsigned int i = 0; i < r_layers.size(); i++) {
-		for (int j = 1; j < r_layers[i].size(); j++) {
-			node_order[r_layers[i][j]] = j;
-		}
-	}
 	...
-	ORDER(connection->from_node, node_order);
+	ORDER(connection->from_node, r_node_order);
 	...
-	ORDER(connection->to_node, node_order);
+	ORDER(connection->to_node, r_node_order);

 void GraphEditArranger::_place_block(...) {
+	HashMap<StringName, int> node_order;
+	for (unsigned int i = 0; i < r_layers.size(); i++) {
+		for (int j = 1; j < r_layers[i].size(); j++) {
+			node_order[r_layers[i][j]] = j;
+		}
+	}
 	...
-	threshold = _calculate_threshold(..., r_node_positions);
+	threshold = _calculate_threshold(..., r_node_positions, node_order);
```

Also applies to: 547-547, 568-568
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scene/gui/graph_edit_arranger.cpp` around lines 442 - 449, The node_order HashMap is being rebuilt inside the hot path because _calculate_threshold() is called repeatedly from _place_block(); move the construction of node_order (currently built from r_layers) out of the repeated calls so it is created once per arrange pass and reused by _calculate_threshold(). Concretely, build node_order once before the loop that calls _place_block() (or compute it in the arrange entry point), then change _calculate_threshold() to accept a const reference or access the cached node_order instead of reconstructing it; ensure the symbols involved are r_layers, node_order, _calculate_threshold(), and _place_block() so callers pass the precomputed map.

modules/godot_physics_3d/godot_body_3d.h (1)
119-119: Good RID lookup change; centralize the rebuild path.

Using `get_self()` as the shadow-index key preserves the old identity semantics, but the clear-and-repopulate loop is now duplicated in both structural mutation paths. A small private helper would keep the `areas`/`area_index` invariant in one place and make future edits safer.

Also applies to: 161-172, 176-188
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/godot_physics_3d/godot_body_3d.h` at line 119, The duplicated clear-and-repopulate logic that maintains the areas and area_index invariants should be centralized into a single private helper (e.g., rebuild_area_index or update_areas_index) that takes no args, clears area_index, iterates the areas container and repopulates area_index using get_self() as the key; replace the duplicated loops in all structural mutation paths (the blocks that modify areas and currently re-populate area_index) with calls to this new helper so the areas/area_index invariant is maintained in one place.

modules/godot_physics_2d/godot_body_2d.h (1)
125-125: Good RID lookup change; keep the invariant behind one helper.

The behavior change looks right, but the full `area_index` rebuild now lives in both `add_area()` and `remove_area()`. Pulling that into a shared helper would make the `areas`/`area_index` sync rule harder to miss as this code evolves.

Also applies to: 167-179, 183-195
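The helper the reviewer suggests could look roughly like this, with `std::unordered_map` standing in for Redot's `HashMap` (a sketch under stated assumptions: `BodyStub`, `AreaStub`, and the method names are illustrative, not the engine's types):

```cpp
#include <unordered_map>
#include <vector>

// Illustrative stand-in for an area entry; `rid` plays the role of
// the RID returned by get_self() in the real code.
struct AreaStub {
    int rid;
};

struct BodyStub {
    std::vector<AreaStub> areas;              // ordered, as the solver expects
    std::unordered_map<int, int> area_index;  // rid -> position in `areas`

    // Single owner of the areas/area_index invariant: clear and
    // repopulate the shadow index from the vector.
    void rebuild_area_index() {
        area_index.clear();
        for (int i = 0; i < (int)areas.size(); ++i) {
            area_index[areas[i].rid] = i;
        }
    }

    void add_area(const AreaStub &a) {
        if (area_index.count(a.rid)) {
            return; // O(1) duplicate check, the point of the shadow index
        }
        areas.push_back(a);
        rebuild_area_index();
    }

    void remove_area(int rid) {
        auto it = area_index.find(rid);
        if (it == area_index.end()) {
            return;
        }
        areas.erase(areas.begin() + it->second);
        rebuild_area_index(); // indices after the erased slot have shifted
    }
};
```

Both mutators stay O(n) on structural change (as in the PR's clear-and-repopulate approach), but membership and index lookups become O(1), and the invariant lives in exactly one function.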
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/godot_physics_2d/godot_body_2d.h` at line 125, The code duplicates the full area_index rebuild logic in add_area() and remove_area(), risking divergence between areas and area_index; extract that logic into a single helper (e.g., rebuild_area_index() or sync_area_index_with_areas()) and call it from add_area(), remove_area(), and any other mutators so the invariant between the areas container and the HashMap<RID,int> area_index is maintained centrally; update calls to reference the new helper and remove the duplicate rebuild code blocks in add_area() and remove_area().

modules/gltf/gltf_state.cpp (1)
149-153: Consider applying the same HashSet optimization to `extensions_required`.

For consistency, `extensions_required` still uses the O(n) `has() + push_back()` pattern. If the number of required extensions can grow large, this could become a bottleneck. However, if the count is typically small (a handful of extensions), the current implementation is acceptable.

♻️ Optional: Apply same pattern to extensions_required
In `gltf_state.h`:

```diff
-	Vector<String> extensions_required;
+	HashSet<String> extensions_required;
```

In `gltf_state.cpp`:

```diff
 	if (p_required) {
-		if (!extensions_required.has(p_extension_name)) {
-			extensions_required.push_back(p_extension_name);
-		}
+		extensions_required.insert(p_extension_name);
 	}
```

Note: This would also require updating serialization code in `gltf_document.cpp` to build a sorted `Vector<String>` from the set, similar to how `extensions_used` is handled.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/gltf/gltf_state.cpp` around lines 149 - 153, The current push/check pattern for extensions_required in gltf_state.cpp (the if (p_required) block that checks extensions_required.has(...) then push_back(p_extension_name)) should be converted to use the same HashSet/Set optimization used for extensions_used: add a Set<String> (or HashSet) member for extensions_required in gltf_state.h, update the code paths that insert into extensions_required (referencing p_required and p_extension_name) to insert into the set instead of doing has()+push_back(), and update gltf_document.cpp serialization to build a sorted Vector<String> from that set when emitting the JSON (mirroring how extensions_used is handled) so existing serialization order/format remains correct.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/math/a_star_grid_2d.cpp`:
- Line 591: AStarGrid2D::_solve is calling sorter.push_heap with e->open_index
for existing-open nodes but never maintains that field when nodes are inserted
or removed from open_list; update the code so that whenever a node is added to
open_list you set e->open_index to its heap index (after push_heap returns) and
whenever a node is removed from open_list (pop/erase path around lines ~548–549)
you reset e->open_index to -1; also ensure any code paths that call
sorter.push_heap or sorter.pop_heap update the affected entries' open_index
consistently so the heap index stored in the node (open_index) always reflects
its position in open_list.
In `@core/math/a_star.cpp`:
- Line 345: Stored heap indices (Point::open_index) become stale because
SortArray::push_heap reorders elements without updating Point::open_index;
replace uses of the stored index when reheapifying with a fresh lookup. For
every reheapify call (locations referencing e->open_index or p->open_index in
functions manipulating open_list) call open_list.find(e) (or open_list.find(p))
to get the current index and pass that to SortArray::adjust_heap / reheapify
routine instead of using the cached Point::open_index; also remove or set stale
assignments like p->open_index = -1 only when the element is truly removed from
the heap. Ensure you update all occurrences mentioned (the reheapify sites
around the open_list push/update logic).
In `@modules/gltf/gltf_document.cpp`:
- Around line 449-452: Replace the manual
p_state->extensions_used.insert("KHR_node_visibility") plus conditional
p_state->extensions_required.push_back(...) with a single call to
p_state->add_used_extension("KHR_node_visibility") so that GLTFState handles
de-duplication and keeps extensions_used/extensions_required in sync (this
affects the code paths using _visibility_mode ==
VISIBILITY_MODE_INCLUDE_REQUIRED where duplicate KHR_node_visibility entries are
currently produced).
In `@scene/3d/skeleton_3d.h`:
- Line 105: The change of Bone::child_bones to HashSet<int> breaks indexed
iteration and APIs expecting Vector<int>; restore Bone::child_bones as
Vector<int> and add a complementary HashSet<int> (e.g., child_bones_set or
child_bones_hash) inside class Bone to provide O(1) membership checks, update
all mutators that add/remove children to maintain both containers in sync,
replace any O(B).has() loops with checks against the new HashSet (use
child_bones_set.has(id)), and ensure any function that returns the child list
(the method that previously returned Vector<int>) still returns the Vector<int>
member so existing callers and indexed access (child_bones[i]) continue to work.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 67e26216-0ce7-4981-93c8-ce6fb258bfe3
📒 Files selected for processing (19)
core/math/a_star.cpp
core/math/a_star.h
core/math/a_star_grid_2d.cpp
core/math/a_star_grid_2d.h
editor/import/3d/post_import_plugin_skeleton_rest_fixer.cpp
modules/gltf/gltf_document.cpp
modules/gltf/gltf_state.cpp
modules/gltf/gltf_state.h
modules/godot_physics_2d/godot_body_2d.h
modules/godot_physics_3d/godot_body_3d.h
modules/godot_physics_3d/godot_soft_body_3d.cpp
scene/3d/skeleton_3d.cpp
scene/3d/skeleton_3d.h
scene/3d/spring_bone_simulator_3d.cpp
scene/gui/graph_edit_arranger.cpp
scene/main/scene_tree.cpp
scene/main/scene_tree.h
scene/resources/font.cpp
scene/resources/font.h
core/math/a_star_grid_2d.cpp (Outdated)

```diff
 	sorter.push_heap(0, open_list.size() - 1, 0, e, open_list.ptr());
 } else {
-	sorter.push_heap(0, open_list.find(e), 0, e, open_list.ptr());
+	sorter.push_heap(0, e->open_index, 0, e, open_list.ptr());
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Verify open_index is read but not written in AStarGrid2D::_solve.
rg -n '\bopen_index\b' core/math/a_star_grid_2d.cpp core/math/a_star_grid_2d.h
# Inspect heap helper behavior used by push_heap/pop_heap.
sed -n '80,170p' core/templates/sort_array.h
```

Repository: Redot-Engine/redot-engine
Length of output: 3083
🏁 Script executed:

```shell
#!/bin/bash
# Get full context of _solve method, especially around line 591
wc -l core/math/a_star_grid_2d.cpp
sed -n '550,620p' core/math/a_star_grid_2d.cpp
# Search for any assignment to open_index in the cpp file
rg -n 'open_index\s*=' core/math/a_star_grid_2d.cpp
```

Repository: Redot-Engine/redot-engine
Length of output: 1996
🏁 Script executed:

```shell
#!/bin/bash
# Search for pop_heap and remove operations in _solve context
sed -n '500,650p' core/math/a_star_grid_2d.cpp | grep -n -E '(pop_heap|remove_at|erase)'
```

Repository: Redot-Engine/redot-engine
Length of output: 224
open_index must be maintained when points enter and leave the open_list heap.
Line 591 calls push_heap(..., e->open_index, ...) for existing-open nodes, but AStarGrid2D::_solve never sets open_index when a point is added to open_list (line 574), nor resets it when removed (lines ~548–549). This leaves open_index as -1 or stale, causing out-of-bounds array access and heap corruption when reheapifying.
Correctness fix

```diff
 	if (e->open_pass != pass) { // The point wasn't inside the open list.
 		e->open_pass = pass;
 		open_list.push_back(e);
+		e->open_index = open_list.size() - 1;
 		new_point = true;
```

```diff
 	sorter.pop_heap(0, open_list.size(), open_list.ptr()); // Remove the current point from the open list.
 	open_list.remove_at(open_list.size() - 1);
+	p->open_index = -1;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/math/a_star_grid_2d.cpp` at line 591, AStarGrid2D::_solve is calling
sorter.push_heap with e->open_index for existing-open nodes but never maintains
that field when nodes are inserted or removed from open_list; update the code so
that whenever a node is added to open_list you set e->open_index to its heap
index (after push_heap returns) and whenever a node is removed from open_list
(pop/erase path around lines ~548–549) you reset e->open_index to -1; also
ensure any code paths that call sorter.push_heap or sorter.pop_heap update the
affected entries' open_index consistently so the heap index stored in the node
(open_index) always reflects its position in open_list.
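The invariant the prompt describes (a stored `open_index` that always mirrors the node's slot in the heap array) can be illustrated with a small hand-rolled binary min-heap. This is a sketch, not Redot's `SortArray` API: `PointStub`, `OpenList`, and the method names are invented for illustration.

```cpp
#include <utility>
#include <vector>

// Index-tracked min-heap: every time an element moves inside the heap
// array, its open_index field is updated, so a decrease-key can re-sift
// from the stored position instead of doing an O(N) linear find.
struct PointStub {
    float f_score = 0.0f;
    int open_index = -1; // slot in the heap array, -1 when not enqueued
};

struct OpenList {
    std::vector<PointStub *> heap;

    void sift_up(int i) {
        while (i > 0) {
            int parent = (i - 1) / 2;
            if (heap[parent]->f_score <= heap[i]->f_score) {
                break;
            }
            std::swap(heap[parent], heap[i]);
            heap[parent]->open_index = parent; // keep indices in sync
            heap[i]->open_index = i;
            i = parent;
        }
    }

    void sift_down(int i) {
        const int n = (int)heap.size();
        while (true) {
            int l = 2 * i + 1, r = 2 * i + 2, best = i;
            if (l < n && heap[l]->f_score < heap[best]->f_score) best = l;
            if (r < n && heap[r]->f_score < heap[best]->f_score) best = r;
            if (best == i) break;
            std::swap(heap[i], heap[best]);
            heap[i]->open_index = i;
            heap[best]->open_index = best;
            i = best;
        }
    }

    void push(PointStub *p) {
        heap.push_back(p);
        p->open_index = (int)heap.size() - 1;
        sift_up(p->open_index);
    }

    // Caller lowered p->f_score; restore heap order from the stored slot.
    void decrease_key(PointStub *p) {
        sift_up(p->open_index);
    }

    PointStub *pop_min() {
        PointStub *top = heap[0];
        PointStub *last = heap.back();
        heap.pop_back();
        top->open_index = -1; // reset on removal, as the review requires
        if (!heap.empty()) {
            heap[0] = last;
            last->open_index = 0;
            sift_down(0);
        }
        return top;
    }
};
```

The two places the review calls out correspond to `push()` (set the index when a node enters) and `pop_min()` (reset it to -1 when a node leaves); the swaps inside `sift_up`/`sift_down` are what keep the index valid in between.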
… complexity gates

New test files:
- tests/scene/test_scene_tree_cwe407.h (redot-0001: SceneTree group HashSet)
- tests/scene/test_gltf_cwe407.h (redot-0008: GLTFState extensions_used HashSet)
- tests/scene/test_font_cwe407.h (redot-0009/0010: Font cyclic + RID dedup)
- tests/scene/test_spring_bone_cwe407.h (redot-0012: SpringBone collision API)

Extended existing test files:
- tests/core/math/test_astar.h (redot-0005: open_index maintained, chain/diamond/repeat)
- tests/scene/test_skeleton_3d.h (redot-0006: child_bones HashSet, 500-bone chain)

Each set covers:
- Unit: correctness at small scale (dedup, membership, round-trip)
- Integration: multi-object interactions (diamond graphs, partial remove, independence)
- Complexity gate: N=500–2000 scale tests that would time out with an O(N²) regression
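The "complexity gate" idea can be sketched in isolation: drive the indexed insertion path at a scale where a quadratic regression would blow a wall-clock budget. This sketch uses standard containers; the function name, scale, and budget are illustrative, not the repository's actual doctest harness.

```cpp
#include <chrono>
#include <string>
#include <unordered_set>
#include <vector>

// Insert N distinct members through a path that does one membership
// check per insert. With a hash-set index this is O(N) overall; with a
// Vector::has()-style linear scan it would be O(N^2) and fail a
// generous time budget at this scale. Returns elapsed milliseconds.
static long insert_n_members(int n) {
    std::vector<std::string> nodes;
    std::unordered_set<std::string> node_set;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i) {
        std::string name = "node_" + std::to_string(i);
        if (!node_set.count(name)) { // O(1) membership check
            node_set.insert(name);
            nodes.push_back(name);
        }
    }
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(end - start)
            .count();
}
```

A gate like this deliberately uses a loose budget: it is not a benchmark, only a tripwire that a complexity regression (not ordinary machine noise) should trigger.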
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
scene/3d/spring_bone_simulator_3d.cpp (1)
1481-1493: ⚠️ Potential issue | 🟠 Major

Deny-list exclusion is order-dependent and can miss excluded colliders.
At Line 1486, indices are appended in `exclude_collisions` order, but Line 1491-Line 1493 assume ascending mask order. If the exclude list is unsorted by collision index, some excluded colliders are not filtered.

💡 Proposed fix (order-independent, still O(1)/O(N))

```diff
-	LocalVector<uint32_t> masks;
+	HashSet<uint32_t> mask_set;
 	Vector<NodePath> &setting_exclude_collisions = settings[i]->exclude_collisions;
 	for (int j = 0; j < setting_exclude_collisions.size(); j++) {
 		Node *n = get_node_or_null(setting_exclude_collisions[j]);
 		if (!n) {
 			continue;
 		}
 		ObjectID id = n->get_instance_id();
 		// CWE-407 fix (redot-0012): O(1) via HashMap instead of O(N) LocalVector::find.
 		const int *find_ptr = collision_index_map.getptr(id);
 		if (!find_ptr) {
 			setting_exclude_collisions.write[j] = NodePath(); // Clear path if not found.
 		} else {
-			masks.push_back((uint32_t)*find_ptr);
+			mask_set.insert((uint32_t)*find_ptr);
 		}
 	}
-	uint32_t mask_index = 0;
 	for (uint32_t j = 0; j < collisions.size(); j++) {
-		if (mask_index < masks.size() && j == masks[mask_index]) {
-			mask_index++;
+		if (mask_set.has(j)) {
 			continue;
 		}
 		cache.push_back(collisions[j]);
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scene/3d/spring_bone_simulator_3d.cpp` around lines 1481 - 1493, The code builds masks from collision_index_map.getptr(id) using setting_exclude_collisions.write and then iterates collisions assuming masks are sorted, which is order-dependent and can miss exclusions; fix by making the membership check order-independent: after filling masks (the vector populated from collision_index_map.getptr in the loop that references setting_exclude_collisions.write), create an O(1) lookup (e.g., a boolean vector sized collisions.size() or an unordered_set of indices) and mark each mask index, then in the collisions loop check that lookup (e.g., excluded[j]) to skip excluded colliders instead of comparing j to masks[mask_index]; this preserves O(1)/O(N) behavior and removes the sorting assumption.

scene/resources/font.cpp (2)
2914-2935: ⚠️ Potential issue | 🟡 Minor

Missing `this` insertion in visited set can cause duplicate RIDs.

In the first branch (`fallbacks.is_empty() && f.is_valid()`), `this` is never added to the visited set before processing the base font's fallbacks. If the fallback chain leads back to this `FontVariation`, its RID will be added twice:

- Line 2922: `rids.push_back(rid)` adds `this->_get_rid()`
- When `_update_rids_fb` encounters `this` again, it won't be in `visited`, so the RID is added again

This differs from the `else` branch, where `_update_rids_fb(this, ...)` properly adds `this` to visited first.

Proposed fix

```diff
 void FontVariation::_update_rids() const {
 	Ref<Font> f = _get_base_font_or_default();
 	rids.clear();
 	// CWE-407 fix (redot-0010): visited guard makes traversal O(N) instead of O(N²).
 	HashSet<const Font *> visited;
 	if (fallbacks.is_empty() && f.is_valid()) {
+		visited.insert(this);
 		RID rid = _get_rid();
 		if (rid.is_valid()) {
 			rids.push_back(rid);
 		}
 		const TypedArray<Font> &base_fallbacks = f->get_fallbacks();
 		for (int i = 0; i < base_fallbacks.size(); i++) {
 			Ref<Font> fb_font = base_fallbacks[i];
 			_update_rids_fb(fb_font.ptr(), 0, &visited);
 		}
 	} else {
 		_update_rids_fb(this, 0, &visited);
 	}
 	dirty_rids = false;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scene/resources/font.cpp` around lines 2914 - 2935, In FontVariation::_update_rids ensure 'this' is added to the visited HashSet in the branch where fallbacks.is_empty() && f.is_valid() before you push this->_get_rid() and traverse base_fallbacks; specifically, insert 'this' into the existing visited set so that subsequent calls to _update_rids_fb cannot revisit and duplicate this font's RID (keep using the existing visited HashSet, _get_rid(), and call _update_rids_fb(fb_font.ptr(), 0, &visited) as before).
3206-3227: ⚠️ Potential issue | 🟡 Minor

Same issue: missing `this` insertion in visited set.

Same pattern as `FontVariation::_update_rids`: the first branch doesn't add `this` to the visited set, which could lead to duplicate RIDs if the fallback chain references this `SystemFont`.

Proposed fix

```diff
 void SystemFont::_update_rids() const {
 	Ref<Font> f = _get_base_font_or_default();
 	rids.clear();
 	// CWE-407 fix (redot-0010): visited guard makes traversal O(N) instead of O(N²).
 	HashSet<const Font *> visited;
 	if (fallbacks.is_empty() && f.is_valid()) {
+		visited.insert(this);
 		RID rid = _get_rid();
 		if (rid.is_valid()) {
 			rids.push_back(rid);
 		}
 		const TypedArray<Font> &base_fallbacks = f->get_fallbacks();
 		for (int i = 0; i < base_fallbacks.size(); i++) {
 			Ref<Font> fb_font = base_fallbacks[i];
 			_update_rids_fb(fb_font.ptr(), 0, &visited);
 		}
 	} else {
 		_update_rids_fb(this, 0, &visited);
 	}
 	dirty_rids = false;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scene/resources/font.cpp` around lines 3206 - 3227, SystemFont::_update_rids fails to insert this into the visited HashSet in the branch where fallbacks.is_empty() and f.is_valid(), which can allow duplicate RIDs if a fallback chain refers back to this SystemFont; fix by inserting this (as a const Font*) into the visited set before iterating base_fallbacks or calling _update_rids_fb so that _update_rids_fb sees this as visited; update the visited manipulation to use the same HashSet<const Font *> visited passed into _update_rids_fb to avoid revisiting the current SystemFont.

modules/gltf/gltf_document.cpp (1)
252-265: ⚠️ Potential issue | 🟡 Minor

Keep the temporary extension lists set-backed until the end.
After the HashSet is flattened back to
Vectors, Line 260, Line 263, and Line 264 append again withpush_back(). A parse→export round-trip of a file that already declaredKHR_lights_punctualorKHR_texture_transformwill emit duplicate entries inextensionsUsed/extensionsRequired.🛠️ Proposed fix
```diff
 Error GLTFDocument::_serialize_gltf_extensions(Ref<GLTFState> p_state) const {
-	Vector<String> extensions_used;
-	for (const String &ext : p_state->extensions_used) {
-		extensions_used.push_back(ext);
-	}
-	Vector<String> extensions_required = p_state->extensions_required;
+	HashSet<String> extensions_used = p_state->extensions_used;
+	HashSet<String> extensions_required;
+	for (const String &ext : p_state->extensions_required) {
+		extensions_required.insert(ext);
+	}
 	if (!p_state->lights.is_empty()) {
-		extensions_used.push_back("KHR_lights_punctual");
+		extensions_used.insert("KHR_lights_punctual");
 	}
 	if (p_state->use_khr_texture_transform) {
-		extensions_used.push_back("KHR_texture_transform");
-		extensions_required.push_back("KHR_texture_transform");
+		extensions_used.insert("KHR_texture_transform");
+		extensions_required.insert("KHR_texture_transform");
 	}
-	if (!extensions_used.is_empty()) {
-		extensions_used.sort();
-		p_state->json["extensionsUsed"] = extensions_used;
+	Vector<String> extensions_used_vec;
+	for (const String &ext : extensions_used) {
+		extensions_used_vec.push_back(ext);
+	}
+	if (!extensions_used_vec.is_empty()) {
+		extensions_used_vec.sort();
+		p_state->json["extensionsUsed"] = extensions_used_vec;
 	}
-	if (!extensions_required.is_empty()) {
-		extensions_required.sort();
-		p_state->json["extensionsRequired"] = extensions_required;
+	Vector<String> extensions_required_vec;
+	for (const String &ext : extensions_required) {
+		extensions_required_vec.push_back(ext);
+	}
+	if (!extensions_required_vec.is_empty()) {
+		extensions_required_vec.sort();
+		p_state->json["extensionsRequired"] = extensions_required_vec;
 	}
 	return OK;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/gltf/gltf_document.cpp` around lines 252 - 265, The temporary extension lists are converted from the set (p_state->extensions_used) to Vectors early and then mutated with push_back, causing duplicate entries when an extension already exists; in GLTFDocument::_serialize_gltf_extensions keep the data set-backed until all additions are done: operate on p_state->extensions_used (or a local HashSet copy) to insert "KHR_lights_punctual" and "KHR_texture_transform" conditionally (and insert into a separate required-set for required extensions), then convert these final sets to sorted Vectors (extensions_used/extensions_required) for JSON output—this ensures duplicates are never produced.
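The set-backed pattern this finding describes (accumulate names in a set so insertion is idempotent, flatten to a sorted list only at the end) can be sketched with std containers. `finalize_extensions` and `std::set` here are illustrative stand-ins, not the glTF module's API.

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Accumulate extension names in a set, then flatten once for JSON output.
// std::set iterates in sorted order, so no extra sort step is needed.
std::vector<std::string> finalize_extensions(std::set<std::string> used, bool has_lights, bool use_texture_transform) {
	if (has_lights) {
		used.insert("KHR_lights_punctual"); // No-op if the asset already declared it.
	}
	if (use_texture_transform) {
		used.insert("KHR_texture_transform");
	}
	return std::vector<std::string>(used.begin(), used.end());
}
```

Because the additions happen on the set, a round-trip of an asset that already declared either extension cannot produce a duplicate entry.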
♻️ Duplicate comments (1)
core/math/a_star.cpp (1)
345-346: ⚠️ Potential issue | 🔴 Critical

`open_index` becomes stale after the first heap mutation.

`core/templates/sort_array.h`'s `push_heap()`/`pop_heap()` reorder `open_list` in place, but Lines 345-346, 370-371, 883-884, and 908-909 only update `open_index` for the inserted or removed node. After any sift/pop, moved parents/children keep the wrong cached slot, so Lines 386 and 924 can reheapify from the wrong hole and duplicate/drop nodes. Please either maintain `Point::open_index` inside the heap operations, or temporarily fall back to `open_list.find(e)` until the heap can own that metadata.

Safe short-term fix

```diff
-	sorter.push_heap(0, e->open_index, 0, e, open_list.ptr());
+	sorter.push_heap(0, open_list.find(e), 0, e, open_list.ptr());
…
-	sorter.push_heap(0, e->open_index, 0, e, open_list.ptr());
+	sorter.push_heap(0, open_list.find(e), 0, e, open_list.ptr());
```

Run this to confirm the current branch still has no heap-side index repair. Expected result: `open_index` only appears in the A* files, while the heap primitives in `core/templates/sort_array.h` only move pointers.

```bash
#!/bin/bash
echo "=== A* open_index sites ==="
rg -n '\bopen_index\b' core/math/a_star.h core/math/a_star.cpp
echo -e "\n=== AStar3D reheapify path ==="
sed -n '340,390p' core/math/a_star.cpp
echo -e "\n=== AStar2D reheapify path ==="
sed -n '880,928p' core/math/a_star.cpp
echo -e "\n=== Heap primitives ==="
rg -n -A20 -B5 '\b(push_heap|pop_heap)\b' core/templates/sort_array.h
```

Also applies to: 370-386, 883-884, 908-924
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/math/a_star.cpp` around lines 345 - 346, open_index cached on Point becomes stale because core/templates/sort_array.h's in-place push_heap()/pop_heap() move pointers without updating Point::open_index; update the A* code to avoid using stale cached slots. Short-term fix: wherever you reheapify or remove/insert into open_list (the AStar3D/AStar2D reheapify paths that call push_heap/pop_heap), resolve the actual slot by calling open_list.find(node) (or otherwise compute the current index) and assign node->open_index to that value before using it, and after any push_heap/pop_heap call update the moved elements’ Point::open_index by locating them via open_list.find() (or scan the affected range) so open_index stays consistent; longer-term fix is to modify the heap primitives in sort_array.h to maintain Point::open_index during swaps inside push_heap/pop_heap.
🧹 Nitpick comments (1)
modules/godot_physics_2d/godot_body_2d.h (1)
177-180: Extract shared index-rebuild logic to one helper.

The same clear/rebuild loop appears twice. Centralizing it reduces maintenance drift and makes future reorder points (like sorting paths) less likely to miss index synchronization.
Also applies to: 193-196
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/godot_physics_2d/godot_body_2d.h` around lines 177 - 180, Extract the duplicated clear-and-rebuild loop that populates area_index from areas into a single helper method (e.g., rebuild_area_index or update_area_index) and call that helper from both places where the loop currently exists; specifically move the logic that calls area_index.clear() and then iterates over areas to set area_index[areas[i].area->get_self()] = i into the helper, update both call sites to invoke the helper, and ensure the helper is declared/defined near the class that owns area_index and areas so it can access those members.
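The suggested helper could look like the following sketch. `Body`, `AreaEntry`, and the `int` id standing in for `RID` are illustrative assumptions, not the engine's types; the point is that every mutation and reorder path funnels through one rebuild function.

```cpp
#include <algorithm>
#include <cassert>
#include <unordered_map>
#include <vector>

// `int id` plays the role of RID; unordered_map plays the role of
// Godot's HashMap<RID, int>.
struct AreaEntry {
	int id;
	int priority;
};

struct Body {
	std::vector<AreaEntry> areas;
	std::unordered_map<int, size_t> area_index;

	// Single source of truth for the shadow index: every mutation or
	// reorder point calls this instead of duplicating the loop.
	void rebuild_area_index() {
		area_index.clear();
		for (size_t i = 0; i < areas.size(); i++) {
			area_index[areas[i].id] = i;
		}
	}
	void add_area(AreaEntry e) {
		areas.push_back(e);
		rebuild_area_index();
	}
	void sort_by_priority() {
		std::sort(areas.begin(), areas.end(), [](const AreaEntry &a, const AreaEntry &b) { return a.priority < b.priority; });
		rebuild_area_index(); // Index stays valid after the reorder.
	}
};
```

With the helper, a future reorder point (like the `areas.sort()` path flagged earlier) cannot forget one of two hand-copied rebuild loops.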
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@modules/godot_physics_2d/godot_body_2d.h`:
- Around line 168-199: The bug is that area_index maps RIDs to vector indices
which become invalid after areas is reordered (areas.sort in godot_body_2d.cpp),
causing add_area()/remove_area() to update the wrong entry; change the approach
so the index map is never stale: stop mapping RID -> int index and instead map
RID -> pointer/reference to the AreaEntry (e.g., areas.write[i] or &areas[i]) or
centralize a rebuild call; implement a rebuild_area_index() helper that
repopulates area_index from the current areas contents and call it from every
place that mutates or reorders areas (add_area, remove_area, after areas.sort in
godot_body_2d.cpp, and any other code that mutates the vector), or change
area_index to store AreaEntry* and update it immediately whenever areas is
reordered so add_area() and remove_area() use stable pointers instead of stale
integer indices.
---
Outside diff comments:
In `@modules/gltf/gltf_document.cpp`:
- Around line 252-265: The temporary extension lists are converted from the set
(p_state->extensions_used) to Vectors early and then mutated with push_back,
causing duplicate entries when an extension already exists; in
GLTFDocument::_serialize_gltf_extensions keep the data set-backed until all
additions are done: operate on p_state->extensions_used (or a local HashSet
copy) to insert "KHR_lights_punctual" and "KHR_texture_transform" conditionally
(and insert into a separate required-set for required extensions), then convert
these final sets to sorted Vectors (extensions_used/extensions_required) for
JSON output—this ensures duplicates are never produced.
In `@scene/3d/spring_bone_simulator_3d.cpp`:
- Around line 1481-1493: The code builds masks from
collision_index_map.getptr(id) using setting_exclude_collisions.write and then
iterates collisions assuming masks are sorted, which is order-dependent and can
miss exclusions; fix by making the membership check order-independent: after
filling masks (the vector populated from collision_index_map.getptr in the loop
that references setting_exclude_collisions.write), create an O(1) lookup (e.g.,
a boolean vector sized collisions.size() or an unordered_set of indices) and
mark each mask index, then in the collisions loop check that lookup (e.g.,
excluded[j]) to skip excluded colliders instead of comparing j to
masks[mask_index]; this preserves O(1)/O(N) behavior and removes the sorting
assumption.
In `@scene/resources/font.cpp`:
- Around line 2914-2935: In FontVariation::_update_rids ensure 'this' is added
to the visited HashSet in the branch where fallbacks.is_empty() && f.is_valid()
before you push this->_get_rid() and traverse base_fallbacks; specifically,
insert 'this' into the existing visited set so that subsequent calls to
_update_rids_fb cannot revisit and duplicate this font's RID (keep using the
existing visited HashSet, _get_rid(), and call _update_rids_fb(fb_font.ptr(), 0,
&visited) as before).
- Around line 3206-3227: SystemFont::_update_rids fails to insert this into the
visited HashSet in the branch where fallbacks.is_empty() and f.is_valid(), which
can allow duplicate RIDs if a fallback chain refers back to this SystemFont; fix
by inserting this (as a const Font*) into the visited set before iterating
base_fallbacks or calling _update_rids_fb so that _update_rids_fb sees this as
visited; update the visited manipulation to use the same HashSet<const Font *>
visited passed into _update_rids_fb to avoid revisiting the current SystemFont.
---
Duplicate comments:
In `@core/math/a_star.cpp`:
- Around line 345-346: open_index cached on Point becomes stale because
core/templates/sort_array.h's in-place push_heap()/pop_heap() move pointers
without updating Point::open_index; update the A* code to avoid using stale
cached slots. Short-term fix: wherever you reheapify or remove/insert into
open_list (the AStar3D/AStar2D reheapify paths that call push_heap/pop_heap),
resolve the actual slot by calling open_list.find(node) (or otherwise compute
the current index) and assign node->open_index to that value before using it,
and after any push_heap/pop_heap call update the moved elements’
Point::open_index by locating them via open_list.find() (or scan the affected
range) so open_index stays consistent; longer-term fix is to modify the heap
primitives in sort_array.h to maintain Point::open_index during swaps inside
push_heap/pop_heap.
---
Nitpick comments:
In `@modules/godot_physics_2d/godot_body_2d.h`:
- Around line 177-180: Extract the duplicated clear-and-rebuild loop that
populates area_index from areas into a single helper method (e.g.,
rebuild_area_index or update_area_index) and call that helper from both places
where the loop currently exists; specifically move the logic that calls
area_index.clear() and then iterates over areas to set
area_index[areas[i].area->get_self()] = i into the helper, update both call
sites to invoke the helper, and ensure the helper is declared/defined near the
class that owns area_index and areas so it can access those members.
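The order-independent exclusion lookup described for `spring_bone_simulator_3d.cpp` above can be sketched as a boolean mask. `filter_colliders` and its parameters are illustrative names only, not the simulator's API.

```cpp
#include <cassert>
#include <vector>

// Instead of comparing the collider index against a (possibly unsorted)
// mask list, mark excluded slots in a boolean vector and test it in O(1).
std::vector<int> filter_colliders(int collision_count, const std::vector<int> &excluded_indices) {
	std::vector<bool> excluded(collision_count, false);
	for (int idx : excluded_indices) {
		if (idx >= 0 && idx < collision_count) {
			excluded[idx] = true; // Correct regardless of input order.
		}
	}
	std::vector<int> active;
	for (int j = 0; j < collision_count; j++) {
		if (!excluded[j]) {
			active.push_back(j);
		}
	}
	return active;
}
```

Because membership is tested per slot, the exclusion list never needs to be sorted, which removes the order-dependence the prompt warns about.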
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c2adf588-aca2-4036-8e54-51f5e050b5de
📒 Files selected for processing (19)
- core/math/a_star.cpp
- core/math/a_star.h
- core/math/a_star_grid_2d.cpp
- core/math/a_star_grid_2d.h
- editor/import/3d/post_import_plugin_skeleton_rest_fixer.cpp
- modules/gltf/gltf_document.cpp
- modules/gltf/gltf_state.cpp
- modules/gltf/gltf_state.h
- modules/godot_physics_2d/godot_body_2d.h
- modules/godot_physics_3d/godot_body_3d.h
- modules/godot_physics_3d/godot_soft_body_3d.cpp
- scene/3d/skeleton_3d.cpp
- scene/3d/skeleton_3d.h
- scene/3d/spring_bone_simulator_3d.cpp
- scene/gui/graph_edit_arranger.cpp
- scene/main/scene_tree.cpp
- scene/main/scene_tree.h
- scene/resources/font.cpp
- scene/resources/font.h
✅ Files skipped from review due to trivial changes (3)
- modules/gltf/gltf_state.cpp
- core/math/a_star_grid_2d.h
- scene/3d/skeleton_3d.cpp
🚧 Files skipped from review as they are similar to previous changes (6)
- core/math/a_star_grid_2d.cpp
- scene/main/scene_tree.h
- scene/3d/skeleton_3d.h
- modules/godot_physics_3d/godot_body_3d.h
- scene/gui/graph_edit_arranger.cpp
- editor/import/3d/post_import_plugin_skeleton_rest_fixer.cpp
Actionable comments posted: 7
🧹 Nitpick comments (4)
tests/scene/test_scene_tree_cwe407.h (1)
51-64: Suppress expected error output for the double-add case.

Line 57 intentionally triggers the `ERR_FAIL_COND_V_MSG` path in `add_to_group()` (see `scene/main/scene_tree.cpp:176`), which prints an error message. To keep test output clean, wrap the second call with error suppression macros.

♻️ Proposed fix to suppress expected error

```diff
 	node->add_to_group("test_group");
+	ERR_PRINT_OFF;
 	node->add_to_group("test_group"); // second add must be a no-op
+	ERR_PRINT_ON;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/scene/test_scene_tree_cwe407.h` around lines 51 - 64, The test intentionally triggers the ERR_FAIL_COND_V_MSG path in Node::add_to_group() causing noisy error output; suppress this by wrapping the second call to node->add_to_group("test_group") with the test-suite error-suppression macros used elsewhere (i.e., the macros that silence ERR_* output in tests) so the behavioral check in TEST_CASE("[SceneTree][CWE-407] Group membership: no duplicate on double add") stays the same but the expected error message is not printed; locate the call to add_to_group in this test and apply the suppression macros around that single invocation.

tests/scene/test_skeleton_3d.h (1)
147-150: Stress loop should assert child identity, not just child count.

Lines 147-150 only verify `size() == 1`. This misses wiring errors where each parent has one wrong child. Assert `children[0] == i + 1` too.

Proposed strengthening

```diff
 	for (int i = 0; i < N - 1; i++) {
 		Vector<int> children = skeleton->get_bone_children(i);
 		CHECK_MESSAGE(children.size() == 1, vformat("Bone %d must have exactly 1 child.", i));
+		CHECK_MESSAGE(children[0] == i + 1, vformat("Bone %d must point to child %d.", i, i + 1));
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/scene/test_skeleton_3d.h` around lines 147 - 150, The loop currently only checks children.size() == 1 which allows wrong child indices; update the assertion in the loop that iterates i to also verify the sole child identity by asserting children[0] == i + 1 (use CHECK_MESSAGE to include a clear message like "Bone %d must have child %d" showing i and i+1). Locate the loop using skeleton->get_bone_children and CHECK_MESSAGE and add the extra equality check for children[0] == i + 1 alongside the size check.

tests/scene/test_spring_bone_cwe407.h (1)
131-154: Tighten “complexity” test semantics to avoid false confidence.

Line 131’s title reads like a complexity gate, but this case is currently structural smoke only. Consider renaming it (or adding an explicit TODO marker for a timed integration gate) so regressions are less likely to slip through on interpretation.

Proposed naming/intent clarification

```diff
-TEST_CASE("[SpringBoneSimulator3D][CWE-407] Complexity annotation: _process_bone O(N+S×C)") {
+TEST_CASE("[SpringBoneSimulator3D][CWE-407] Complexity smoke annotation: structural API flow only") {
@@
-	// The actual O(N + S×C) timing gate requires a physics-enabled scene.
+	// TODO: Add physics-enabled timed integration gate in simulator harness.
+	// This case is structural smoke only (non-timed).
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/scene/test_spring_bone_cwe407.h` around lines 131 - 154, Rename the test or add an explicit TODO marker to avoid implying it's a timed complexity gate: update the TEST_CASE name "[SpringBoneSimulator3D][CWE-407] Complexity annotation: _process_bone O(N+S×C)" to something like "[SpringBoneSimulator3D] Structural smoke test: collision slot management" (or append "TODO: timed integration required" to the name), and keep the body that exercises SpringBoneSimulator3D using set_setting_count, set_collision_count, get_collision_count, clear_collisions unchanged so the test remains a structural verification rather than a claimed complexity benchmark.

tests/core/math/test_astar.h (1)
420-423: Repeated-solve assertions are too weak for regression detection.

At Line 422, checking only `path.size() == N` can miss malformed-but-length-correct paths. Assert the ordered node sequence (or at least endpoints + monotonic adjacency) per repetition.

Suggested assertion strengthening

```diff
 	for (int rep = 0; rep < 5; rep++) {
 		Vector<int64_t> path = a.get_id_path(0, N - 1);
 		CHECK_MESSAGE(path.size() == N, vformat("Solve #%d must find full chain path.", rep + 1));
+		if (path.size() == N) {
+			CHECK_MESSAGE(path[0] == 0, vformat("Solve #%d must start at 0.", rep + 1));
+			CHECK_MESSAGE(path[N - 1] == N - 1, vformat("Solve #%d must end at N-1.", rep + 1));
+			for (int i = 0; i < N; i++) {
+				CHECK_MESSAGE(path[i] == i, vformat("Solve #%d path must remain ordered.", rep + 1));
+			}
+		}
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/math/test_astar.h` around lines 420 - 423, Replace the weak size-only assertion on the repeated solves: in the loop that calls a.get_id_path(0, N - 1) and currently uses CHECK_MESSAGE(path.size() == N, ...), add deterministic content checks using the same Vector<int64_t> path and CHECK_MESSAGE: assert path.front() == 0 and path.back() == N - 1, and assert adjacency/ordering by iterating consecutive elements (e.g., for i from 1 to path.size()-1 ensure path[i] == path[i-1] + 1 or at least abs(path[i] - path[i-1]) == 1 depending on expected behavior) so each repetition verifies the actual node sequence, not just length.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/core/math/test_astar.h`:
- Around line 361-364: Update the test header comments that claim correctness
depends on fully resetting open_index: reword them to state that open_pass is
the primary per-solve membership guard and that open_index only stores
heap-position metadata (not the authoritative per-pass membership), and clarify
that open_index need not be fully reset between solves; adjust the phrasing
around references to open_list.find() and open_index to avoid implying
open_index semantics control correctness (mention symbols open_pass, open_index,
and open_list.find() so reviewers can locate the lines to edit).
In `@tests/scene/test_font_cwe407.h`:
- Around line 88-90: Remove the unused TypedArray named c_fallbacks and its
push_back call: the local variable c_fallbacks and the line
c_fallbacks.push_back(Variant()) should be deleted since they are never passed
to Font::set_fallbacks() (no call to set_fallbacks on variable c), leaving C
with no fallbacks as intended; search for the symbol c_fallbacks in the test and
remove both its declaration and the subsequent push_back to eliminate the dead
code.
In `@tests/scene/test_gltf_cwe407.h`:
- Around line 50-136: Add explicit assertions to each test to ensure
deduplication instead of relying on "no crash": in the first test assert that
state->get_unique_names().size() == 1 after repeated add_used_extension calls;
in the multi-add/required test assert that state2 either exposes an accessor
like state2->get_extensions_required().size() == 1 (or add such an accessor on
GLTFState) and that both the used set and required vector contain the expected
single entries after repeated calls to add_used_extension("KHR_required_ext",
true); similarly, in the other tests assert that after re-adding the distinct
extensions the unique names count equals the initial extensions.size() and in
the stress test assert state->get_unique_names().size() == 2 after adding the
extra "KHR_texture_transform" to verify O(1) behavior and correctness.
- Around line 121-131: The test currently adds the same extension 1000 times so
it never exercises the worst-case O(E²) membership scans; update the TEST_CASE
"[GLTFState][CWE-407][Stress] extensions_used: 1000 repeated adds, O(1) each" to
first add 1000 distinct extension strings (e.g. generate "KHR_materials_unlit_0"
.. "_999") via state->add_used_extension after state.instantiate(), then in a
second loop re-add those same 1000 names to force duplicate-check behavior and
trigger the Vector::has() regression if present; this targets
GLTFState::add_used_extension and ensures the stress pattern actually measures
the O(E²) path.
In `@tests/scene/test_skeleton_3d.h`:
- Around line 93-97: The test currently calls set_bone_parent(1, 0) only once so
it never verifies duplicate prevention; update the test in test_skeleton_3d.h to
call skeleton->set_bone_parent(1, 0) a second time (same arguments) before
retrieving children, then keep the CHECK_MESSAGE asserting children.size() == 1
and CHECK_MESSAGE(children[0] == 1) to confirm the second call did not create a
duplicate entry; reference the existing calls to set_bone_parent and
get_bone_children to locate where to insert the duplicate call.
- Around line 120-122: The test currently only checks root_children.size() == 2
but not which bones are present; update the test around
skeleton->get_bone_children(root) to assert that the two returned child IDs
correspond to bones "a" and "c" regardless of order by checking membership
(e.g., verify both "a" and "c" are in root_children and no other IDs are
present). Use the existing variables root_children and the known bone
identifiers ("a", "c") to perform order-independent contains checks and still
assert the size is 2 to prevent extras.
---
Nitpick comments:
In `@tests/core/math/test_astar.h`:
- Around line 420-423: Replace the weak size-only assertion on the repeated
solves: in the loop that calls a.get_id_path(0, N - 1) and currently uses
CHECK_MESSAGE(path.size() == N, ...), add deterministic content checks using the
same Vector<int64_t> path and CHECK_MESSAGE: assert path.front() == 0 and
path.back() == N - 1, and assert adjacency/ordering by iterating consecutive
elements (e.g., for i from 1 to path.size()-1 ensure path[i] == path[i-1] + 1 or
at least abs(path[i] - path[i-1]) == 1 depending on expected behavior) so each
repetition verifies the actual node sequence, not just length.
In `@tests/scene/test_scene_tree_cwe407.h`:
- Around line 51-64: The test intentionally triggers the ERR_FAIL_COND_V_MSG
path in Node::add_to_group() causing noisy error output; suppress this by
wrapping the second call to node->add_to_group("test_group") with the test-suite
error-suppression macros used elsewhere (i.e., the macros that silence ERR_*
output in tests) so the behavioral check in TEST_CASE("[SceneTree][CWE-407]
Group membership: no duplicate on double add") stays the same but the expected
error message is not printed; locate the call to add_to_group in this test and
apply the suppression macros around that single invocation.
In `@tests/scene/test_skeleton_3d.h`:
- Around line 147-150: The loop currently only checks children.size() == 1 which
allows wrong child indices; update the assertion in the loop that iterates i to
also verify the sole child identity by asserting children[0] == i + 1 (use
CHECK_MESSAGE to include a clear message like "Bone %d must have child %d"
showing i and i+1). Locate the loop using skeleton->get_bone_children and
CHECK_MESSAGE and add the extra equality check for children[0] == i + 1
alongside the size check.
In `@tests/scene/test_spring_bone_cwe407.h`:
- Around line 131-154: Rename the test or add an explicit TODO marker to avoid
implying it's a timed complexity gate: update the TEST_CASE name
"[SpringBoneSimulator3D][CWE-407] Complexity annotation: _process_bone O(N+S×C)"
to something like "[SpringBoneSimulator3D] Structural smoke test: collision slot
management" (or append "TODO: timed integration required" to the name), and keep
the body that exercises SpringBoneSimulator3D using set_setting_count,
set_collision_count, get_collision_count, clear_collisions unchanged so the test
remains a structural verification rather than a claimed complexity benchmark.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: f7f6e811-0ebf-4000-b35f-3920fb375778
📒 Files selected for processing (7)
- tests/core/math/test_astar.h
- tests/scene/test_font_cwe407.h
- tests/scene/test_gltf_cwe407.h
- tests/scene/test_scene_tree_cwe407.h
- tests/scene/test_skeleton_3d.h
- tests/scene/test_spring_bone_cwe407.h
- tests/test_main.cpp
✅ Files skipped from review due to trivial changes (1)
- tests/test_main.cpp
Code fixes:
- redot-0005 (A*): revert broken open_index heap-position tracking. SortArray::push_heap reorders elements without updating moved nodes' open_index, so decrease-key used stale positions. Reverted to open_list.find(e) (correct). Removed open_index field from Point structs in a_star.h and a_star_grid_2d.h. AStarGrid2D also never set open_index.
- redot-0006 (Skeleton3D): child_bones HashSet broke indexed access (operator[]) and return type (get_bone_children returns Vector<int>). Switched to shadow index pattern: Vector<int> child_bones preserved, HashSet<int> child_bones_set added for O(1) has() guard. Both cleared/inserted in sync.
- redot-0002/0003 (physics areas): area_index stored Vector positions which become stale after areas.sort() on equal-priority ties. Replaced with area_refcount storing refCount per RID. Hot path (existing overlap) is O(1) HashMap increment. Final removal still uses O(N) areas.find() — only on overlap end, not per tick.
- redot-0008 (GLTF KHR_node_visibility): extensions_required.push_back() was called directly inside per-node loop, bypassing the has() guard and duplicating the entry for every hidden node. Replaced with add_used_extension() which guards both sets.

Test fixes:
- AStar detour test: diamond graph had equal-cost paths, making path[1]==2 assertion non-deterministic. Replaced with custom _compute_cost subclass for asymmetric costs.
- GLTF stress test: adding same extension 1000 times kept cardinality at 1 (O(1) with Vector too). Changed to 500 distinct extensions + re-add all to expose O(N²) regression.
- Skeleton duplicate test: only called set_bone_parent once; now calls twice to actually exercise the dedup guard.
- Skeleton tree test: added has(a)/has(c) membership checks on root children, not just size.
- Font test: removed unused dead code (c_fallbacks created but never applied).
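The refcount approach described for redot-0002/0003 can be sketched outside the engine. This is a hedged illustration: `BodyAreas` and the `int` ids standing in for `RID`s are assumptions, and `std::unordered_map`/`std::vector` replace Godot's `HashMap`/`Vector`.

```cpp
#include <algorithm>
#include <cassert>
#include <unordered_map>
#include <vector>

// Hot path (an overlap that already exists) is an O(1) map increment;
// the O(N) vector scan only happens once, when the last overlap ends.
struct BodyAreas {
	std::vector<int> areas;                     // Kept for ordered iteration.
	std::unordered_map<int, int> area_refcount; // id -> overlap count.

	void add_area(int rid) {
		int &rc = area_refcount[rid]; // Default-constructs to 0.
		if (rc++ == 0) {
			areas.push_back(rid); // First overlap: the vector is touched once.
		}
	}
	void remove_area(int rid) {
		auto it = area_refcount.find(rid);
		if (it == area_refcount.end()) {
			return;
		}
		if (--it->second < 1) {
			// Final removal: the only O(N) step, off the per-tick path.
			auto pos = std::find(areas.begin(), areas.end(), rid);
			if (pos != areas.end()) {
				areas.erase(pos);
			}
			area_refcount.erase(it);
		}
	}
};
```

Because the map stores counts rather than vector positions, a later sort of `areas` cannot leave the map stale, which is the failure mode the earlier `area_index` design had.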
♻️ Duplicate comments (1)
modules/gltf/gltf_document.cpp (1)
253-264: ⚠️ Potential issue | 🟡 Minor

Avoid reintroducing duplicate extension names after copying from the set.

Because Line 255 seeds `extensions_used` from `p_state->extensions_used`, Lines 260-264 can still emit duplicates on round-trip. An imported asset that already advertises `KHR_lights_punctual` or `KHR_texture_transform` will get a second copy here, and `extensionsRequired` can duplicate `KHR_texture_transform` for the same reason.

Possible fix
```diff
 Error GLTFDocument::_serialize_gltf_extensions(Ref<GLTFState> p_state) const {
-	// CWE-407 fix (redot-0008): extensions_used is now HashSet — convert to sorted Vector for JSON.
-	Vector<String> extensions_used;
-	for (const String &ext : p_state->extensions_used) {
-		extensions_used.push_back(ext);
-	}
-	Vector<String> extensions_required = p_state->extensions_required;
+	HashSet<String> extensions_used_set = p_state->extensions_used;
+	HashSet<String> extensions_required_set;
+	for (const String &ext : p_state->extensions_required) {
+		extensions_required_set.insert(ext);
+	}
 	if (!p_state->lights.is_empty()) {
-		extensions_used.push_back("KHR_lights_punctual");
+		extensions_used_set.insert("KHR_lights_punctual");
 	}
 	if (p_state->use_khr_texture_transform) {
-		extensions_used.push_back("KHR_texture_transform");
-		extensions_required.push_back("KHR_texture_transform");
+		extensions_used_set.insert("KHR_texture_transform");
+		extensions_required_set.insert("KHR_texture_transform");
 	}
+	Vector<String> extensions_used;
+	for (const String &ext : extensions_used_set) {
+		extensions_used.push_back(ext);
+	}
+	Vector<String> extensions_required;
+	for (const String &ext : extensions_required_set) {
+		extensions_required.push_back(ext);
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/gltf/gltf_document.cpp` around lines 253 - 264, The code copies p_state->extensions_used (a HashSet) into a Vector<String> extensions_used then unconditionally push_back's "KHR_lights_punctual" and "KHR_texture_transform", which can reintroduce duplicates; update the logic in the block that builds extensions_used and extensions_required (references: extensions_used, p_state->extensions_used, extensions_required, p_state->extensions_required, p_state->lights, p_state->use_khr_texture_transform) to avoid duplicates—either by inserting into a temporary HashSet and then converting to a sorted Vector, or by checking contains() before push_back, and likewise ensure extensions_required is deduplicated before final use.
🧹 Nitpick comments (2)
modules/godot_physics_3d/godot_body_3d.h (1)
178-189: Consider a defensive check for the `areas.find()` result.

The refcount logic is correct, and the O(N) `find()` only occurs on final removal (when the refcount drops below 1), which preserves the performance improvement for the common case.

However, line 185 directly passes `areas.find(AreaCMP(p_area))` to `remove_at()` without checking the return value. If the invariant is ever broken (e.g., by a bug elsewhere or during future refactoring), `find()` returning `-1` would cause `remove_at(-1)` to hit an assertion or cause undefined behavior.

🛡️ Optional defensive check

```diff
 _FORCE_INLINE_ void remove_area(GodotArea3D *p_area) {
 	// CWE-407 fix (redot-0003): O(1) refCount check — O(N) Vector find only on final removal.
 	RID rid = p_area->get_self();
 	HashMap<RID, int>::Iterator it = area_refcount.find(rid);
 	if (it != area_refcount.end()) {
 		it->value -= 1;
 		if (it->value < 1) {
-			areas.remove_at(areas.find(AreaCMP(p_area)));
+			int idx = areas.find(AreaCMP(p_area));
+			ERR_FAIL_COND(idx == -1); // Invariant violated: refcount exists but area not in Vector.
+			areas.remove_at(idx);
 			area_refcount.erase(it);
 		}
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/godot_physics_3d/godot_body_3d.h` around lines 178 - 189, In remove_area, guard the final O(N) removal by storing the result of areas.find(AreaCMP(p_area)) in a local int (e.g. idx) and check idx != -1 before calling areas.remove_at(idx); if idx == -1, skip remove_at and emit a warning/error (e.g. ERR_PRINT/WARN_PRINT) so you don't call remove_at(-1) and still proceed to erase the HashMap entry via area_refcount.erase(it) to keep refcount state consistent; this change touches remove_area(), areas.find(...) and areas.remove_at(...) usage.

tests/scene/test_gltf_cwe407.h (1)
50-71: Consider adding assertions via JSON inspection.

The test relies on "no crash" as the primary invariant. Since `get_json()` is already called at line 66, you could inspect the `"extensionsRequired"` array within the JSON dictionary to explicitly verify deduplication.

💡 Optional: Add explicit assertion via JSON

```diff
 	// Serialize to JSON to observe extensions_required count.
 	Dictionary json = state->get_json();
+	// Verify extensions_required has exactly 2 unique entries.
+	if (json.has("extensionsRequired")) {
+		Array required = json["extensionsRequired"];
+		CHECK_MESSAGE(required.size() == 2, "extensions_required must have exactly 2 unique entries.");
+	}
 	// The primary invariant: no crash, state stays usable.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/scene/test_gltf_cwe407.h` around lines 50 - 71: Add explicit assertions after calling state->get_json() to verify deduplication: use the returned Dictionary from GLTFState::get_json() to check for the "extensionsRequired" key, get its Array/VariantArray value and assert its size equals 2 and that it contains "KHR_draco_mesh_compression" and "KHR_mesh_quantization" exactly once; reference GLTFState, add_used_extension, get_json and the "extensionsRequired" JSON field when locating where to add these checks.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@modules/gltf/gltf_document.cpp`:
- Around line 253-264: The code copies p_state->extensions_used (a HashSet) into
a Vector<String> extensions_used then unconditionally push_back's
"KHR_lights_punctual" and "KHR_texture_transform", which can reintroduce
duplicates; update the logic in the block that builds extensions_used and
extensions_required (references: extensions_used, p_state->extensions_used,
extensions_required, p_state->extensions_required, p_state->lights,
p_state->use_khr_texture_transform) to avoid duplicates—either by inserting into
a temporary HashSet and then converting to a sorted Vector, or by checking
contains() before push_back, and likewise ensure extensions_required is
deduplicated before final use.
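As a rough illustration of the temporary-set approach suggested above, the sketch below uses `std::unordered_set` in place of Godot's `HashSet`; `build_extensions_used` is a hypothetical helper for illustration, not the actual `gltf_document.cpp` code (the extension names are real glTF identifiers):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <unordered_set>
#include <vector>

// Build the serialized extension list in a temporary set first, so the
// unconditional additions cannot reintroduce duplicates, then emit a
// sorted vector for deterministic output.
std::vector<std::string> build_extensions_used(
		const std::unordered_set<std::string> &state_extensions,
		bool has_lights, bool use_khr_texture_transform) {
	std::unordered_set<std::string> merged = state_extensions;
	if (has_lights) {
		merged.insert("KHR_lights_punctual"); // insert() is idempotent: no dupes
	}
	if (use_khr_texture_transform) {
		merged.insert("KHR_texture_transform");
	}
	std::vector<std::string> out(merged.begin(), merged.end());
	std::sort(out.begin(), out.end()); // stable serialization order
	return out;
}
```

The same shape works for `extensions_required`; the alternative (`contains()` before each `push_back`) avoids the copy but scatters the dedup logic across call sites.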
---
Nitpick comments:
In `@modules/godot_physics_3d/godot_body_3d.h`:
- Around line 178-189: In remove_area, guard the final O(N) removal by storing
the result of areas.find(AreaCMP(p_area)) in a local int (e.g. idx) and check
idx != -1 before calling areas.remove_at(idx); if idx == -1, skip remove_at and
emit a warning/error (e.g. ERR_PRINT/WARN_PRINT) so you don't call remove_at(-1)
and still proceed to erase the HashMap entry via area_refcount.erase(it) to keep
refcount state consistent; this change touches remove_area(), areas.find(...)
and areas.remove_at(...) usage.
In `@tests/scene/test_gltf_cwe407.h`:
- Around line 50-71: Add explicit assertions after calling state->get_json() to verify deduplication: use the returned Dictionary from GLTFState::get_json() to check for the "extensionsRequired" key, get its Array/VariantArray value and assert its size equals 2 and that it contains "KHR_draco_mesh_compression" and "KHR_mesh_quantization" exactly once; reference GLTFState, add_used_extension, get_json and the "extensionsRequired" JSON field when locating where to add these checks.
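The guarded-removal pattern from the `godot_body_3d.h` nitpick can be sketched outside the engine with standard containers. `Body`, `Area`, and the refcount map below are illustrative stand-ins, not Godot's actual types; the point is that the O(N) vector scan happens only on the final removal, and its result is checked before erasing:

```cpp
#include <algorithm>
#include <cassert>
#include <unordered_map>
#include <vector>

struct Area { int id; };

struct Body {
	std::vector<Area> areas;               // ordered storage, kept for iteration
	std::unordered_map<int, int> refcount; // O(1) membership / refcount shadow

	void add_area(const Area &a) {
		if (++refcount[a.id] == 1) {
			areas.push_back(a); // first reference: append to vector
		}
	}

	// Returns false if the shadow map and vector ever disagree.
	bool remove_area(const Area &a) {
		auto it = refcount.find(a.id);
		if (it == refcount.end()) {
			return true; // not tracked; nothing to do
		}
		if (--it->second < 1) {
			// Final reference: the one O(N) vector scan, guarded.
			auto pos = std::find_if(areas.begin(), areas.end(),
					[&](const Area &x) { return x.id == a.id; });
			if (pos == areas.end()) {
				return false; // invariant broken: report instead of erasing at -1
			}
			areas.erase(pos);
			refcount.erase(it);
		}
		return true;
	}
};
```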
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: ceaac7a1-7ce5-4dec-8f22-aa5f7ba09fec
📒 Files selected for processing (9)
- modules/gltf/gltf_document.cpp
- modules/godot_physics_2d/godot_body_2d.h
- modules/godot_physics_3d/godot_body_3d.h
- scene/3d/skeleton_3d.cpp
- scene/3d/skeleton_3d.h
- tests/core/math/test_astar.h
- tests/scene/test_font_cwe407.h
- tests/scene/test_gltf_cwe407.h
- tests/scene/test_skeleton_3d.h
✅ Files skipped from review due to trivial changes (1)
- tests/scene/test_skeleton_3d.h
🚧 Files skipped from review as they are similar to previous changes (3)
- scene/3d/skeleton_3d.h
- tests/core/math/test_astar.h
- modules/godot_physics_2d/godot_body_2d.h
…rengthen GLTF test assertions
Fixes 12 algorithmic complexity defects (CWE-407) across Redot Engine. All defects use Vector::has() or Vector::find() as membership/lookup tests inside hot loops, giving O(N²) cost. Each fix adds a HashSet or HashMap shadow index for O(1) lookups while preserving the original Vector.
redot-0001 scene/main/scene_tree.cpp
Group::nodes.has(p_node) — O(n) per add_to_group(), fires every frame.
Fix: HashSet<Node*> node_set shadow in struct Group. 1,000x at n=2,000.
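A minimal sketch of the shadow-index idea behind redot-0001, with `std::` containers standing in for Godot's `Vector`/`HashSet` (the `Group`/`Node` names here are illustrative only): the ordered vector is preserved for iteration, while the set answers "already in group?" in O(1).

```cpp
#include <cassert>
#include <unordered_set>
#include <vector>

struct Node {};

struct Group {
	std::vector<Node *> nodes;           // preserved: callers rely on order
	std::unordered_set<Node *> node_set; // shadow index for O(1) membership

	bool add(Node *n) {
		if (!node_set.insert(n).second) {
			return false; // already present: the old code paid an O(N) scan here
		}
		nodes.push_back(n);
		return true;
	}
};
```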
redot-0002 modules/godot_physics_2d/godot_body_2d.h
areas.find(AreaCMP(p_area)) — O(n) per physics tick in 2D area overlap.
Fix: HashMap<RID,int> area_index shadow. 50x at 500 bodies x 200 areas.
redot-0003 modules/godot_physics_3d/godot_body_3d.h
Same as redot-0002, 3D physics variant. 50x speedup.
redot-0004 modules/godot_physics_3d/godot_soft_body_3d.cpp
node_links[ia].has(ib) — O(n) per link in bending constraint generation.
Fix: LocalVector<HashSet<int>> node_link_set shadow per node.
redot-0005 core/math/a_star.cpp + a_star_grid_2d.cpp
open_list.find(e) — O(N) per neighbor relaxation in A* decrease-key path.
Fix: open_index field on Point; maintained on push/pop. O(N²) -> O(N log N).
Affects AStar3D, AStar2D, AStarGrid2D.
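The decrease-key fix can be modeled with a binary heap whose elements carry their own heap position. This is a simplified sketch, not `a_star.cpp`'s actual code: `Point` stores `open_index`, and every heap swap keeps it in sync, so relaxing a neighbor becomes an O(log N) `sift_up` instead of an O(N) linear `find()`.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

struct Point {
	double f = 0.0;
	int open_index = -1; // -1 means "not on the open list"
};

struct OpenList {
	std::vector<Point *> heap;

	void swap_nodes(size_t a, size_t b) {
		std::swap(heap[a], heap[b]);
		heap[a]->open_index = (int)a; // keep shadow indexes in sync
		heap[b]->open_index = (int)b;
	}
	void sift_up(size_t i) {
		while (i > 0 && heap[i]->f < heap[(i - 1) / 2]->f) {
			swap_nodes(i, (i - 1) / 2);
			i = (i - 1) / 2;
		}
	}
	void sift_down(size_t i) {
		for (;;) {
			size_t best = i, l = 2 * i + 1, r = 2 * i + 2;
			if (l < heap.size() && heap[l]->f < heap[best]->f) best = l;
			if (r < heap.size() && heap[r]->f < heap[best]->f) best = r;
			if (best == i) return;
			swap_nodes(i, best);
			i = best;
		}
	}
	void push(Point *p) {
		p->open_index = (int)heap.size();
		heap.push_back(p);
		sift_up(heap.size() - 1);
	}
	void decrease_key(Point *p, double new_f) {
		p->f = new_f;
		sift_up((size_t)p->open_index); // O(log N): no linear scan needed
	}
	Point *pop() {
		Point *top = heap.front();
		swap_nodes(0, heap.size() - 1);
		heap.pop_back();
		top->open_index = -1;
		if (!heap.empty()) sift_down(0);
		return top;
	}
};
```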
redot-0006 scene/3d/skeleton_3d.cpp
child_bones.has(i) — O(B) per bone in _update_process_order().
Fix: child_bones changed from Vector<int> to HashSet<int>.
redot-0007 editor/import/3d/post_import_plugin_skeleton_rest_fixer.cpp
Two Vector<int>-as-sets: bones_to_process and keep_bone_rest.
Both call .has() inside track loops. Fix: both to HashSet<int>.
redot-0008 modules/gltf/gltf_state.h + gltf_document.cpp + gltf_state.cpp
extensions_used — Vector<String>.has() O(E) per node/animation/track.
Fix: HashSet<String>. insert() is idempotent O(1). All callers updated.
redot-0009 scene/resources/font.cpp
Font::_is_cyclic() — no visited set, O(F^D) on diamond fallback graphs.
Fix: HashSet<const Font*> visited threaded via _is_cyclic_internal().
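The redot-0009 fix amounts to classic DFS cycle detection with memoization: a path set catches back-edges (cycles), while the visited set lets diamond-shaped fallback graphs be explored once instead of O(F^D) times. A sketch under those assumptions, with `FakeFont` standing in for Godot's `Font`:

```cpp
#include <cassert>
#include <unordered_set>
#include <vector>

struct FakeFont {
	std::vector<const FakeFont *> fallbacks;
};

static bool is_cyclic_internal(const FakeFont *f,
		std::unordered_set<const FakeFont *> &stack,
		std::unordered_set<const FakeFont *> &visited) {
	if (stack.count(f)) {
		return true; // back-edge: f is already on the current DFS path
	}
	if (!visited.insert(f).second) {
		return false; // diamond: subtree already fully explored, skip it
	}
	stack.insert(f);
	for (const FakeFont *fb : f->fallbacks) {
		if (is_cyclic_internal(fb, stack, visited)) {
			return true;
		}
	}
	stack.erase(f);
	return false;
}

static bool is_cyclic(const FakeFont *f) {
	std::unordered_set<const FakeFont *> stack, visited;
	return is_cyclic_internal(f, stack, visited);
}
```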
redot-0010 scene/resources/font.cpp
Font::_update_rids_fb() — no visited set, O(N²) diamond re-traversal.
Fix: HashSet<const Font*>* r_visited param. FontVariation + SystemFont fixed.
redot-0011 scene/gui/graph_edit_arranger.cpp
ORDER and PRED macros use Vector::find() — O(N) per connection.
Fix: pre-build HashMap<StringName,int> node_order and
HashMap<StringName,StringName> predecessor_map. O(C*N) -> O(N+C).
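The pre-built index pattern in redot-0011 is a one-pass trade: pay O(N) once to build a name-to-position map, then every connection lookup is O(1) instead of an O(N) `Vector::find()`. A sketch with `std::string` in place of `StringName` (`build_order_index` is a hypothetical helper, not the arranger's code):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

std::unordered_map<std::string, int> build_order_index(
		const std::vector<std::string> &order) {
	std::unordered_map<std::string, int> index;
	index.reserve(order.size());
	for (int i = 0; i < (int)order.size(); i++) {
		index.emplace(order[i], i); // one O(N) pass replaces an O(N) scan per lookup
	}
	return index;
}
```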
redot-0012 scene/3d/spring_bone_simulator_3d.cpp
collisions.has(id) and collisions.find(id) — O(N) inside SC loops per tick.
Fix: HashSet<ObjectID> collision_set + HashMap<ObjectID,int> collision_index_map.
O(SCN) -> O(N + SC) per tick.
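Keeping an id-to-index map valid across removals is the subtle part of this pattern. One common approach, sketched below with plain `uint64_t` ids standing in for `ObjectID` (and usable only where iteration order doesn't matter), is swap-remove plus an index fixup for the moved element:

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct CollisionList {
	std::vector<uint64_t> collisions;
	std::unordered_map<uint64_t, int> index_map;

	bool add(uint64_t id) {
		if (index_map.count(id)) return false; // O(1) "has"
		index_map[id] = (int)collisions.size();
		collisions.push_back(id);
		return true;
	}
	bool remove(uint64_t id) {
		auto it = index_map.find(id); // O(1) "find"
		if (it == index_map.end()) return false;
		int idx = it->second;
		int last = (int)collisions.size() - 1;
		if (idx != last) {
			collisions[idx] = collisions[last]; // swap-remove: O(1), breaks order
			index_map[collisions[idx]] = idx;   // re-point the moved entry's index
		}
		collisions.pop_back();
		index_map.erase(it);
		return true;
	}
};
```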
All fixes use Redot's own HashSet and HashMap<K,V> from core/templates/. No new dependencies. Vectors preserved where ordered iteration is needed.
Discovered by undefect.com — CWE-407 sedimentary defect campaign. Contact: security@undefect.com
Summary by CodeRabbit
New Features
Bug Fixes
Performance
Tests