
feat(warp-core): Phase 4 provenance closeout and Phase 5 observation contract #302

Open
flyingrobots wants to merge 10 commits into main from feat/adr-0008-0009-phase-4

Conversation


@flyingrobots flyingrobots commented Mar 15, 2026

Summary

This branch closes the Phase 4 provenance/BTR substrate work and immediately pivots Echo’s read side to an explicit observation contract for Phase 5.

The main effect is that reads stop pretending to be timeless snapshots. The canonical internal read path is now observe(...), which always makes the worldline, coordinate, frame, and projection explicit.
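The contract described above can be sketched as a request type that forces callers to name all four parameters. Everything here (the `ObservationRequest` struct and its `Frame`/`Projection` fields) is an illustrative stand-in, not the actual warp-core definitions:

```rust
// Hypothetical shapes, simplified for illustration.
#[derive(Debug, Clone, PartialEq)]
enum Frame {
    Local,
}

#[derive(Debug, Clone, PartialEq)]
enum Projection {
    Head,
    Snapshot,
}

#[derive(Debug, Clone, PartialEq)]
struct ObservationRequest {
    worldline_id: u64,      // which worldline is being read
    tick: u64,              // the coordinate being observed
    frame: Frame,           // the frame the read is interpreted in
    projection: Projection, // what view of the state to produce
}

fn main() {
    // Every read names all four parameters explicitly; there is no
    // implicit "current" snapshot to fall back on.
    let req = ObservationRequest {
        worldline_id: 1,
        tick: 42,
        frame: Frame::Local,
        projection: Projection::Head,
    };
    assert_eq!(req.tick, 42);
    assert_eq!(req.projection, Projection::Head);
}
```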

What Changed

  • closes Phase 4 docs/status around entry-based provenance and BTR-ready packaging
  • adds ADR-0011 for the explicit observation contract
  • adds warp_core::ObservationService and the new observation types in warp-core
  • centralizes frame/projection validation and deterministic observation errors
  • adds canonical observation artifact hashing from canonical CBOR input
  • wires warp-wasm through observe(...) as the canonical read boundary
  • keeps get_head, snapshot_at, execute_query, and drain_view_ops as one-phase adapters only
  • keeps RecordedTruth strictly provenance-backed; no hidden engine reruns during observation
  • migrates ttd-browser to the entry-based provenance API after the Phase 4 hard cut removed the old convenience methods

Notes

  • execute_query(...) now lowers through observation semantics and returns deterministic UNSUPPORTED_QUERY until real query support lands.
  • drain_view_ops() is legacy adapter/debug territory only in this phase. It reads recorded truth through observation and uses adapter-local bookkeeping instead of mutating runtime/provenance state.
  • The compatibility layer is intentionally short-lived: the plan and ADR now pin these legacy read exports for deletion at Phase 6 / ABI v2.
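The `execute_query(...)` behavior in the first note can be sketched as a one-phase adapter that always fails the same way. The error code value matches the ABI constants quoted later in this page; the function shape itself is illustrative:

```rust
// ABI error code for "query observation is not implemented yet"
// (value taken from the constants quoted later in this review).
const UNSUPPORTED_QUERY: u32 = 11;

// Hypothetical adapter shape: lower through observation semantics,
// then fail deterministically. The same input always yields the same
// error code, never a panic or a partial result.
fn execute_query(_query_id: u32, _vars: &[u8]) -> Result<Vec<u8>, u32> {
    Err(UNSUPPORTED_QUERY)
}

fn main() {
    assert_eq!(execute_query(7, &[]), Err(UNSUPPORTED_QUERY));
    assert_eq!(execute_query(7, &[1, 2, 3]), Err(UNSUPPORTED_QUERY));
}
```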

Validation

  • cargo test -p warp-core observation::tests --lib
  • cargo test -p warp-wasm --features engine --lib
  • cargo test -p echo-wasm-abi
  • cargo test -p ttd-browser --lib --tests
  • cargo clippy -p echo-wasm-abi --all-targets -- -D warnings -D missing_docs
  • cargo clippy -p warp-core --lib -- -D warnings -D missing_docs
  • cargo clippy -p warp-wasm --lib --features engine -- -D warnings -D missing_docs
  • push-time full verification lane, including critical-package tests, rustdoc warning gates, banned-pattern scan, SPDX scan, and markdown formatting checks

Summary by CodeRabbit

  • New Features

    • Adds a canonical, read-only observation API (observe) exposed in WASM, new observation request/artifact types, and new observation error codes.
  • Refactor

    • Provenance reworked to an entry-centric model; observation becomes the canonical read path while legacy read helpers are now adapters.
  • Documentation

    • ADRs and spec updated to describe the explicit observation contract, artifact hashing, and migration plan.
  • Chores / Tools

    • Local verification split into parallel lanes with a sequential fallback target; tests updated for new provenance/observation flows.


coderabbitai bot commented Mar 15, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds an explicit, read-only observation contract and ABI observe export; converts provenance to an entry-centric model with ProvenanceService and BTRs; threads provenance into runtime commit/stepping; adapts playback and tests to entry-based access; and introduces parallel local verification lanes and related tooling changes.

Changes

Cohort / File(s) Summary
ABI & WASM bindings
crates/echo-wasm-abi/src/kernel_port.rs, crates/warp-wasm/src/lib.rs, docs/spec/SPEC-0009-wasm-abi-v1.md
Adds observe export, new observation DTOs and error codes, extends KernelPort trait with observe(...), and adds WASM shim observe(request_bytes) with ABI error mappings.
Core observation & service
crates/warp-core/src/observation.rs, crates/warp-core/src/lib.rs
New ObservationService, observation types, deterministic artifact hashing, error model, and public exports for observation primitives and artifacts.
Provenance engine & store
crates/warp-core/src/provenance_store.rs, crates/warp-core/src/coordinator.rs, crates/warp-core/src/playback.rs
Replaces tick-aligned model with entry-centric ProvenanceEntry/BtrPayload/BTR, adds ProvenanceService API, appends local commits, validates BTRs, and updates coordinator/playback to use entries and on-demand parents.
Runtime / Kernel integration
crates/warp-wasm/src/warp_kernel.rs, crates/warp-core/src/worldline.rs, crates/warp-core/src/snapshot.rs
WarpKernel now holds ProvenanceService and last_drained_commit_tick; initialization registers provenance; get_head/snapshot/execute_query/drain_view_ops are adapted to observation path; small derive/helper additions.
Tests & test utilities
crates/warp-core/tests/*, crates/warp-core/tests/common/mod.rs, crates/ttd-browser/src/lib.rs
Massive test updates to use entry-based provenance: append_fixture_entry, mirrored_provenance, import changes, and migration from append_with_writesappend_local_commit and entry-based reads.
Playback / materialization tweaks
crates/warp-core/src/materialization/reduce_op.rs, crates/warp-core/src/playback.rs
Playback seek/validation now reads ProvenanceEntry fields (entry.patch, entry.expected, entry.outputs); tests expect explicit empty Vec<u8> for non-Sum reduce ops.
Tooling & local verification
scripts/verify-local.sh, Makefile, scripts/hooks/README.md, tests/hooks/test_verify_local.sh
Introduces parallel local verification lanes with isolated CARGO_TARGET_DIRs, many lane helpers, scope-aware selection, and verify-full-sequential fallback; updates hooks/readme and related tests.
Docs & ADRs
docs/adr/ADR-0011-explicit-observation-contract.md, docs/plans/adr-0008-and-0009.md, docs/spec/*
Adds ADR and plan/spec updates documenting explicit observation contract, artifact hashing, migration phases, provenance semantics, and ABI notes.
Dependency & manifest
crates/warp-core/Cargo.toml
Moves echo-wasm-abi from dev-dependencies to normal dependency (workspace = true).
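The entry-centric provenance model mentioned above can be sketched as a single self-describing record per commit. The field and method names mirror identifiers used in this PR (ProvenanceEntry, ProvenanceService, append_local_commit, entry), but the definitions are illustrative, not the real warp-core code:

```rust
// Sketch: each entry carries its own worldline id and tick, replacing
// the old tick-aligned parallel vectors. Field types are simplified.
#[derive(Clone, Debug)]
struct ProvenanceEntry {
    worldline_id: u64,
    tick: u64,
    patch: Option<Vec<u8>>,          // committed patch bytes, if any
    expected_digest: [u8; 32],       // digest the commit was checked against
    outputs: Vec<(String, Vec<u8>)>, // (channel, payload) pairs
}

#[derive(Default)]
struct ProvenanceService {
    entries: Vec<ProvenanceEntry>,
}

impl ProvenanceService {
    fn append_local_commit(&mut self, entry: ProvenanceEntry) {
        self.entries.push(entry);
    }

    fn entry(&self, worldline_id: u64, tick: u64) -> Option<&ProvenanceEntry> {
        self.entries
            .iter()
            .find(|e| e.worldline_id == worldline_id && e.tick == tick)
    }
}

fn main() {
    let mut svc = ProvenanceService::default();
    svc.append_local_commit(ProvenanceEntry {
        worldline_id: 1,
        tick: 0,
        patch: Some(vec![0xAB]),
        expected_digest: [0; 32],
        outputs: vec![],
    });
    assert!(svc.entry(1, 0).is_some());
    assert!(svc.entry(1, 9).is_none());
}
```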

Sequence Diagram(s)

sequenceDiagram
    participant Client as "Host / Client"
    participant Kernel as "WarpKernel (WASM ABI)"
    participant Observation as "ObservationService"
    participant Provenance as "ProvenanceService"
    participant Runtime as "WorldlineRuntime / Engine"

    Client->>Kernel: observe(request_bytes)
    Kernel->>Observation: to_core_request(decoded)
    Observation->>Provenance: validate worldline, resolve coordinate
    Provenance-->>Observation: entry metadata / parents
    Observation->>Runtime: read state/head for resolved tick
    Runtime-->>Observation: head/snapshot/truth payload
    Observation->>Observation: compute artifact_hash (CBOR + blake3)
    Observation-->>Kernel: ObservationArtifact (encoded)
    Kernel-->>Client: return encoded result
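The artifact-hashing step in the diagram (canonical CBOR + blake3) rests on one property: identical logical requests hash identically. This dependency-free sketch substitutes a length-prefixed byte encoding for CBOR and the std hasher for blake3, so the algorithm differs from the real implementation while preserving the determinism property:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for canonical CBOR: a fixed-width length prefix before each
// field makes the encoding unambiguous and order-stable.
fn canonical_encode(fields: &[&[u8]]) -> Vec<u8> {
    let mut out = Vec::new();
    for f in fields {
        out.extend_from_slice(&(f.len() as u64).to_be_bytes());
        out.extend_from_slice(f);
    }
    out
}

// Stand-in for blake3 over the canonical bytes.
fn artifact_hash(fields: &[&[u8]]) -> u64 {
    let mut h = DefaultHasher::new();
    canonical_encode(fields).hash(&mut h);
    h.finish()
}

fn main() {
    let a = artifact_hash(&[b"worldline-1", b"tick-42"]);
    let b = artifact_hash(&[b"worldline-1", b"tick-42"]);
    assert_eq!(a, b); // same logical request, same artifact hash
    let c = artifact_hash(&[b"worldline-1", b"tick-43"]);
    assert_ne!(a, c); // different coordinate, different hash
}
```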

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes


Suggested labels

tooling

Poem

observe and prove, a careful art,
entries stitch history, hashes part.
Lanes run parallel, tests reframe,
artifacts signed — the read's new name.
Provenance hums; the runtime keeps its heart.

🚥 Pre-merge checks: ✅ 3 passed

  • Description check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title check (✅ Passed): the title directly and accurately summarizes the main change: closing the Phase 4 provenance work and introducing the Phase 5 observation contract as the canonical read path.
  • Docstring coverage (✅ Passed): docstring coverage is 100.00%, above the required 80.00% threshold.





@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: dcb5f65678


outputs: source_history.outputs[..end_idx].to_vec(),
atom_writes: source_history.atom_writes[..end_idx].to_vec(),
initial_boundary_hash: source_history.initial_boundary_hash,
entries: source_history.entries[..end_idx].to_vec(),


P1 Badge Rebind forked provenance entries to the target worldline

fork(...) copies the source entries slice verbatim, but each ProvenanceEntry now embeds worldline_id (and head_key) so the forked prefix still points at the source worldline. This makes forked history internally inconsistent: tip_ref(target) can return parent refs for the source worldline, and build_btr(target, ...) will fail BtrPayload::validate() with MixedWorldline once you try to package/export the fork. The copied entries (and any embedded head worldline ids) need to be rewritten to new_id during fork creation.
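A minimal sketch of the rebind this comment calls for, using an illustrative ProvenanceEntry with only the fields needed here (the real entry also embeds head_key, which would need the same treatment):

```rust
#[derive(Clone, Debug, PartialEq)]
struct ProvenanceEntry {
    worldline_id: u64,
    tick: u64,
}

// Copy the source prefix, but rewrite each entry's embedded worldline
// id to the fork target so the forked history is self-consistent.
fn fork_entries(source: &[ProvenanceEntry], end_idx: usize, new_id: u64) -> Vec<ProvenanceEntry> {
    source[..end_idx]
        .iter()
        .map(|e| {
            let mut e = e.clone();
            e.worldline_id = new_id; // rebind to the target worldline
            e
        })
        .collect()
}

fn main() {
    let src = vec![
        ProvenanceEntry { worldline_id: 1, tick: 0 },
        ProvenanceEntry { worldline_id: 1, tick: 1 },
    ];
    let forked = fork_entries(&src, 2, 2);
    // No entry still points at the source worldline, so a later
    // BtrPayload::validate() would not see MixedWorldline.
    assert!(forked.iter().all(|e| e.worldline_id == 2));
}
```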


message: e.to_string(),
})
fn snapshot_at(&mut self, tick: u64) -> Result<Vec<u8>, AbiError> {
Self::snapshot_bytes_from_observation(self.observe_core(ObservationRequest {


P1 Badge Preserve legacy tick error code in snapshot_at adapter

snapshot_at now forwards observation errors directly, so an out-of-range legacy snapshot request returns observation code INVALID_TICK (9) instead of the v1 legacy snapshot code LEGACY_INVALID_TICK (4). This changes the established snapshot_at error contract for existing host adapters without an ABI version bump (SPEC-0009 explicitly reserves code 4 for legacy snapshot/history tick bounds). The adapter should remap invalid-tick failures back to the legacy code before returning.

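The remap Codex suggests can be sketched as a thin wrapper in the legacy adapter. The constant values follow the ABI table quoted later on this page, but snapshot_at_adapter and its observe parameter are hypothetical names:

```rust
// ABI error codes (values from the constants quoted later in this page).
const LEGACY_INVALID_TICK: u32 = 4;
const INVALID_TICK: u32 = 9;

// Legacy snapshot_at adapter: forward to the observation path, but
// translate the new observation code back to the v1 legacy code so the
// established snapshot_at error contract is preserved.
fn snapshot_at_adapter(
    observe: impl Fn(u64) -> Result<Vec<u8>, u32>,
    tick: u64,
) -> Result<Vec<u8>, u32> {
    observe(tick).map_err(|code| {
        if code == INVALID_TICK {
            LEGACY_INVALID_TICK
        } else {
            code
        }
    })
}

fn main() {
    // Fake observation path: ticks below 10 exist, others are invalid.
    let observe = |t: u64| {
        if t < 10 {
            Ok(vec![0u8])
        } else {
            Err(INVALID_TICK)
        }
    };
    assert_eq!(snapshot_at_adapter(observe, 99), Err(LEGACY_INVALID_TICK));
    assert!(snapshot_at_adapter(observe, 1).is_ok());
}
```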


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
crates/ttd-browser/src/lib.rs (1)

482-490: ⚠️ Potential issue | 🟠 Major

Compute the real emissions digest from recorded outputs here.

This change now has access to entry.outputs, but it immediately discards them and keeps returning patch_digest as a proxy. That makes get_emissions_digest() disagree with the digest produced in commit_inner() and breaks any caller treating this as the actual emissions hash.

Suggested fix
-        let expected = self
-            .provenance
-            .entry(cursor.worldline_id, cursor.tick - 1)
-            .map(|entry| entry.expected)
-            .map_err(|e| JsError::new(&e.to_string()))?;
-
-        // Return patch_digest as a proxy; actual emissions_digest would need
-        // to be computed from the MBUS outputs stored in provenance.
-        Ok(hash_to_uint8array(&expected.patch_digest))
+        let entry = self
+            .provenance
+            .entry(cursor.worldline_id, cursor.tick - 1)
+            .map_err(|e| JsError::new(&e.to_string()))?;
+
+        let finalized_channels: Vec<FinalizedChannel> = entry
+            .outputs
+            .into_iter()
+            .map(|(channel, data)| FinalizedChannel { channel, data })
+            .collect();
+        let emissions_digest = compute_emissions_digest(&finalized_channels);
+        Ok(hash_to_uint8array(&emissions_digest))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/ttd-browser/src/lib.rs` around lines 482 - 490, The current code
fetches provenance.entry(...) and then returns expected.patch_digest as a proxy;
instead compute the real emissions digest from the recorded MBUS outputs on that
provenance entry (use the same serialization + hashing logic used in
commit_inner) instead of using expected.patch_digest. Locate the call that maps
the entry (provenance.entry(cursor.worldline_id, cursor.tick - 1)) and, using
the entry.outputs field, reproduce the emissions hash computation from
commit_inner (serialize the outputs deterministically and run the same hash
function), then return hash_to_uint8array(&emissions_digest) so
get_emissions_digest() matches commit_inner’s digest.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/echo-wasm-abi/src/kernel_port.rs`:
- Around line 51-68: The numeric value for INVALID_TICK changed (was 4, now 9)
and LEGACY_INVALID_TICK now uses 4 which can break ABI consumers; either restore
INVALID_TICK to its original value or treat this as an intentional ABI change by
incrementing the ABI version and adding migration notes. Concretely, update the
constants in kernel_port.rs so INVALID_TICK has the previous code (4) and move
legacy naming elsewhere, or if you intend the renumbering, bump the ABI_VERSION
constant and add a clear migration note in the release/migration docs and notify
JS callers to handle LEGACY_INVALID_TICK vs INVALID_TICK mappings; reference the
symbols INVALID_TICK, LEGACY_INVALID_TICK and ABI_VERSION when making the
change.

In `@crates/ttd-browser/src/lib.rs`:
- Around line 1375-1391: The test currently constructs a ProvenanceEntry via
ProvenanceEntry::local_commit and appends it with
engine.provenance.append_local_commit(entry) but never asserts the
emissions_digest; decode the resulting receipt bytes into a TTDR frame (parse
the receipt/TTDR header produced by commit_inner), compute the expected digest
with compute_emissions_digest(outputs) and add an assert that
header.emissions_digest == compute_emissions_digest(&outputs); ensure you locate
this check immediately after the append_local_commit call so it fails if
commit_inner writes an all-zero or stale digest.

In `@crates/warp-core/src/coordinator.rs`:
- Around line 353-355: The current SuperTick hot path clones the entire
provenance store (provenance.clone()) into provenance_before (and similarly
clones runtime into runtime_before), causing heavy allocations; change rollback
to take a cheap checkpoint (e.g., record a provenance version/length or call a
lightweight provenance.checkpoint() API) before work and on rollback truncate or
restore to that checkpoint (e.g., provenance.truncate_to(version)) instead of
cloning the whole store, and avoid cloning runtime by snapshotting only the
minimal mutable fields needed for rollback or exposing a lightweight
runtime.checkpoint()/restore() operation; update the SuperTick rollback logic
that currently references provenance_before and runtime_before to use these
checkpoint/truncate APIs.

In `@crates/warp-core/src/observation.rs`:
- Around line 92-121: The match arm with unreachable!() in
ObservationProjectionKind::to_abi assumes the caller always provides a matching
ObservationProjection for the kind; make that invariant explicit and provide a
clearer failure mode: add a short doc comment on the to_abi method stating it is
a programmer-only helper that requires matching variants, then replace
unreachable!() with a debug_assert!(matches!(projection, <expected pattern>))
(or a panic! with a descriptive message) so failures produce a clear diagnostic;
reference the ObservationProjectionKind::to_abi function and the unreachable!()
branch when making the change.

In `@crates/warp-core/src/playback.rs`:
- Around line 457-469: The code currently calls
provenance.parents(self.worldline_id, patch_tick) after already fetching entry
via provenance.entry(...), causing a redundant lookup; replace that second call
by using the parents field on the retrieved entry (the local variable entry)
instead of calling provenance.parents again, e.g. extract commit hashes from
entry.parents (or entry.parents.clone() if ownership is needed) with
.into_iter().map(|parent| parent.commit_hash).collect() so you avoid the
duplicate provenance access while preserving the same error handling and result
shape.

In `@crates/warp-core/src/provenance_store.rs`:
- Around line 975-988: The code currently returns
BtrError::OutputBoundaryHashMismatch when a stored entry does not equal the
incoming entry in the loop over record.payload.entries; change this to a
semantically correct error by adding a new enum variant (e.g.
BtrError::PayloadEntryMismatch { tick: u64 }) and return that from the mismatch
branch in the function that iterates record.payload.entries and calls
self.store.entry; update the BtrError enum (and any match sites) to include the
new PayloadEntryMismatch variant and populate it with the offending
entry/worldline_tick (or include expected/got fields if desired) instead of
using BtrError::OutputBoundaryHashMismatch.

In `@crates/warp-core/tests/inbox.rs`:
- Around line 65-73: The helper mirrored_provenance() only re-registers
worldlines via ProvenanceService::register_worldline and therefore misleads by
its name; either rename it to something like register_worldlines_only or extend
it to truly mirror committed provenance by iterating the runtime's provenance
(e.g., WorldlineRuntime::provenance or its entries API) and copying committed
entries, parent relationships, and any recorded outputs into the new
ProvenanceService (using the appropriate ProvenanceService methods for
registering entries, parents, and outputs) so callers receive a faithful mirror
of provenance state.

In `@docs/plans/adr-0008-and-0009.md`:
- Around line 10-12: The "Companion ADRs in this change set" header lists
ADR-0011 but the "Recommended File Layout" section only references ADR-0010 and
adr-exceptions.md, creating an internal inconsistency; update the plan so the
two places match by either adding
`docs/adr/ADR-0011-explicit-observation-contract.md` to the Recommended File
Layout or removing ADR-0011 from the companion ADR list, and ensure the
"Recommended File Layout" and the header (and any mentions of
`ADR-0010-observational-seek-and-administrative-rewind.md`,
`ADR-0011-explicit-observation-contract.md`, and `adr-exceptions.md`) are
consistent and deterministic in ordering.
- Around line 614-615: The docs currently document a non-existent free function
signature `pub fn observe(request: ObservationRequest) ->
Result<ObservationArtifact, ObservationError>`; update the documentation to
document the actual entrypoint `ObservationService::observe(&runtime,
&provenance, &engine, request)` (i.e., the method on ObservationService that
accepts runtime, provenance, engine, and the ObservationRequest and returns
Result<ObservationArtifact, ObservationError>), and adjust examples and any
cross-references to use `ObservationService::observe` and its correct parameter
order and types so readers are pointed at the real public API.

In `@docs/spec/SPEC-0009-wasm-abi-v1.md`:
- Around line 235-238: The “Not Yet Implemented” section currently lumps
endpoints under error code 5 (NOT_SUPPORTED) but `execute_query` is specified
elsewhere to return 11 (UNSUPPORTED_QUERY); update this section to be
endpoint-specific: change the `execute_query` entry to document error code 11
(UNSUPPORTED_QUERY) and keep `render_snapshot` (and any other endpoints here) as
error code 5 (NOT_SUPPORTED), ensuring the docs reference the exact error symbol
names (`execute_query`, `render_snapshot`, error code 5/NOT_SUPPORTED, error
code 11/UNSUPPORTED_QUERY) so the contract is unambiguous.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 20b1730b-4658-43a8-b51c-5d032fd86197

📥 Commits

Reviewing files that changed from the base of the PR and between 7e55ac0 and dcb5f65.

📒 Files selected for processing (28)
  • CHANGELOG.md
  • crates/echo-wasm-abi/src/kernel_port.rs
  • crates/ttd-browser/src/lib.rs
  • crates/warp-core/Cargo.toml
  • crates/warp-core/README.md
  • crates/warp-core/src/coordinator.rs
  • crates/warp-core/src/lib.rs
  • crates/warp-core/src/materialization/reduce_op.rs
  • crates/warp-core/src/observation.rs
  • crates/warp-core/src/playback.rs
  • crates/warp-core/src/provenance_store.rs
  • crates/warp-core/src/snapshot.rs
  • crates/warp-core/src/worldline.rs
  • crates/warp-core/tests/checkpoint_fork_tests.rs
  • crates/warp-core/tests/common/mod.rs
  • crates/warp-core/tests/golden_vectors_phase0.rs
  • crates/warp-core/tests/inbox.rs
  • crates/warp-core/tests/invariant_property_tests.rs
  • crates/warp-core/tests/outputs_playback_tests.rs
  • crates/warp-core/tests/playback_cursor_tests.rs
  • crates/warp-core/tests/slice_theorem_proof.rs
  • crates/warp-wasm/src/lib.rs
  • crates/warp-wasm/src/warp_kernel.rs
  • docs/adr/ADR-0011-explicit-observation-contract.md
  • docs/plans/adr-0008-and-0009.md
  • docs/spec/SPEC-0004-worldlines-playback-truthbus.md
  • docs/spec/SPEC-0005-provenance-payload.md
  • docs/spec/SPEC-0009-wasm-abi-v1.md

Comment on lines +51 to +68
/// Legacy snapshot/history tick index is out of bounds.
pub const LEGACY_INVALID_TICK: u32 = 4;
/// The requested operation is not yet supported by this kernel.
pub const NOT_SUPPORTED: u32 = 5;
/// CBOR encoding or decoding failed.
pub const CODEC_ERROR: u32 = 6;
/// The provided payload bytes were invalid or corrupted.
pub const INVALID_PAYLOAD: u32 = 7;
/// The requested worldline is not registered.
pub const INVALID_WORLDLINE: u32 = 8;
/// The requested observation tick is not available.
pub const INVALID_TICK: u32 = 9;
/// The requested frame/projection pairing is invalid.
pub const UNSUPPORTED_FRAME_PROJECTION: u32 = 10;
/// Query observation is not implemented yet.
pub const UNSUPPORTED_QUERY: u32 = 11;
/// The requested observation cannot be produced at this coordinate.
pub const OBSERVATION_UNAVAILABLE: u32 = 12;

⚠️ Potential issue | 🟡 Minor

Error code renumbering may break ABI consumers.

INVALID_TICK moved from 4 to 9. Code 4 is now LEGACY_INVALID_TICK. JS callers matching on error code 4 for tick validation will silently get different behavior when the new INVALID_TICK (9) is returned. Document this in migration notes or bump ABI_VERSION if this is intentional.


Comment on lines +1375 to +1391
let head_key = WriterHeadKey {
    worldline_id: wl_id,
    head_id: make_head_id("ttd-browser-test"),
};
let entry = ProvenanceEntry::local_commit(
    wl_id,
    0,
    0,
    head_key,
    Vec::new(),
    expected,
    patch,
    outputs,
    Vec::new(),
);

engine
    .provenance
    .append_with_writes(wl_id, patch, expected, outputs, vec![])
    .unwrap();
engine.provenance.append_local_commit(entry).unwrap();

⚠️ Potential issue | 🟡 Minor

This fixture migration still doesn’t assert the emissions_digest field.

After switching the test to ProvenanceEntry::local_commit(...), the regression still passes if commit_inner() writes an all-zero or stale digest, because the only postcondition downstream is “receipt bytes are non-empty.” Decode the TTDR frame and compare header.emissions_digest against compute_emissions_digest(...) for outputs so this actually guards the regression.


Comment on lines 353 to 355
let runtime_before = runtime.clone();
let provenance_before = provenance.clone();


⚠️ Potential issue | 🟠 Major

Avoid cloning the entire provenance store on every SuperTick.

provenance.clone() runs even on successful ticks, so the steady-state scheduler now copies all retained provenance entries before doing any work. That makes SuperTick cost grow with total history length instead of just the work admitted this pass. Please switch rollback to a cheap checkpoint/truncate strategy instead of cloning the full store in the hot path.

As per coding guidelines, crates/warp-core/**: warp-core is the deterministic kernel. Performance matters — flag unnecessary allocations in hot paths.
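The checkpoint/truncate strategy this comment recommends could look like the following. The Vec-backed store and the method names (checkpoint, truncate_to) are illustrative stand-ins for the real ProvenanceService:

```rust
struct ProvenanceStore {
    entries: Vec<u64>, // stand-in for ProvenanceEntry records
}

impl ProvenanceStore {
    // O(1): remembering the current length is enough, because the
    // store is append-only within a SuperTick.
    fn checkpoint(&self) -> usize {
        self.entries.len()
    }

    // O(entries dropped), not O(total history): only the speculative
    // suffix appended after the checkpoint is discarded.
    fn truncate_to(&mut self, mark: usize) {
        self.entries.truncate(mark);
    }
}

fn main() {
    let mut store = ProvenanceStore { entries: vec![1, 2, 3] };
    let mark = store.checkpoint(); // cheap, taken at the top of every SuperTick
    store.entries.push(4); // speculative work admitted this pass
    store.entries.push(5);
    // On failure, roll back without ever cloning the full store.
    store.truncate_to(mark);
    assert_eq!(store.entries, vec![1, 2, 3]);
}
```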


Comment on lines +92 to +121
impl ObservationProjectionKind {
    fn to_abi(self, projection: &ObservationProjection) -> abi::ObservationProjection {
        match (self, projection) {
            (Self::Head, ObservationProjection::Head) => abi::ObservationProjection::Head,
            (Self::Snapshot, ObservationProjection::Snapshot) => {
                abi::ObservationProjection::Snapshot
            }
            (Self::TruthChannels, ObservationProjection::TruthChannels { channels }) => {
                abi::ObservationProjection::TruthChannels {
                    channels: channels.as_ref().map(|ids| {
                        ids.iter()
                            .map(|channel| channel.0.to_vec())
                            .collect::<Vec<_>>()
                    }),
                }
            }
            (
                Self::Query,
                ObservationProjection::Query {
                    query_id,
                    vars_bytes,
                },
            ) => abi::ObservationProjection::Query {
                query_id: *query_id,
                vars_bytes: vars_bytes.clone(),
            },
            _ => unreachable!("projection kind and projection must agree"),
        }
    }
}
🧹 Nitpick | 🔵 Trivial

Consider removing the unreachable branch or documenting the invariant.

Line 118 uses unreachable!() assuming the caller always passes matching kind and projection. This is an internal helper, so the panic is acceptable, but the comment could be stronger to indicate this is a programmer error if hit.
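One way to make the invariant loud, sketched with simplified stand-in enums rather than the real observation types: bind the mismatched pair and panic with a message that names both sides, so a violation produces an immediately readable diagnostic instead of a bare unreachable!():

```rust
// Stand-ins for the real kind/projection pairing.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Kind {
    Head,
    Snapshot,
}

#[derive(Debug, PartialEq)]
enum Projection {
    Head,
    Snapshot,
}

/// Internal helper. Callers MUST pass a projection matching `kind`;
/// a mismatch is a programmer error and panics with both values.
fn to_abi(kind: Kind, projection: &Projection) -> &'static str {
    match (kind, projection) {
        (Kind::Head, Projection::Head) => "head",
        (Kind::Snapshot, Projection::Snapshot) => "snapshot",
        (kind, projection) => panic!(
            "projection kind {kind:?} does not match projection {projection:?}"
        ),
    }
}

fn main() {
    assert_eq!(to_abi(Kind::Head, &Projection::Head), "head");
    assert_eq!(to_abi(Kind::Snapshot, &Projection::Snapshot), "snapshot");
}
```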


Comment on lines +457 to +469
let entry = provenance
    .entry(self.worldline_id, patch_tick)
    .map_err(|_| SeekError::HistoryUnavailable { tick: patch_tick })?;
let patch = entry
    .patch
    .ok_or(SeekError::HistoryUnavailable { tick: patch_tick })?;
let expected = entry.expected;
let parents = provenance
    .parents(self.worldline_id, patch_tick)
    .map_err(|_| SeekError::HistoryUnavailable { tick: patch_tick })?
    .into_iter()
    .map(|parent| parent.commit_hash)
    .collect::<Vec<_>>();

⚠️ Potential issue | 🟡 Minor

Redundant provenance lookup in seek hot path.

provenance.parents() (line 464) internally calls entry() again, doubling the provenance access per tick. The entry variable already contains the parents field.

Proposed fix
             let entry = provenance
                 .entry(self.worldline_id, patch_tick)
                 .map_err(|_| SeekError::HistoryUnavailable { tick: patch_tick })?;
             let patch = entry
                 .patch
                 .ok_or(SeekError::HistoryUnavailable { tick: patch_tick })?;
             let expected = entry.expected;
-            let parents = provenance
-                .parents(self.worldline_id, patch_tick)
-                .map_err(|_| SeekError::HistoryUnavailable { tick: patch_tick })?
-                .into_iter()
+            let parents = entry
+                .parents
+                .iter()
                 .map(|parent| parent.commit_hash)
                 .collect::<Vec<_>>();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/warp-core/src/playback.rs` around lines 457 - 469, The code currently
calls provenance.parents(self.worldline_id, patch_tick) after already fetching
entry via provenance.entry(...), causing a redundant lookup; replace that second
call by using the parents field on the retrieved entry (the local variable
entry) instead of calling provenance.parents again, e.g. extract commit hashes
from entry.parents (or entry.parents.clone() if ownership is needed) with
.into_iter().map(|parent| parent.commit_hash).collect() so you avoid the
duplicate provenance access while preserving the same error handling and result
shape.

Comment on lines +65 to +73
fn mirrored_provenance(runtime: &WorldlineRuntime) -> ProvenanceService {
let mut provenance = ProvenanceService::new();
for (worldline_id, frontier) in runtime.worldlines().iter() {
provenance
.register_worldline(*worldline_id, frontier.state())
.unwrap();
}
provenance
}
🧹 Nitpick | 🔵 Trivial

Rename this helper or make it copy committed entries too.

mirrored_provenance() only re-registers worldlines. It does not mirror existing provenance entries, parents, or recorded outputs, so the name overstates what callers receive. The current tests are safe because they call it before the first commit, but this is easy to misuse later.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/warp-core/tests/inbox.rs` around lines 65 - 73, The helper
mirrored_provenance() only re-registers worldlines via
ProvenanceService::register_worldline and therefore misleads by its name; either
rename it to something like register_worldlines_only or extend it to truly
mirror committed provenance by iterating the runtime's provenance (e.g.,
WorldlineRuntime::provenance or its entries API) and copying committed entries,
parent relationships, and any recorded outputs into the new ProvenanceService
(using the appropriate ProvenanceService methods for registering entries,
parents, and outputs) so callers receive a faithful mirror of provenance state.
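As a sketch of the rename option, here is a self-contained miniature; `Runtime`, `Provenance`, and `register_worldlines_only` are illustrative stand-ins for `WorldlineRuntime` / `ProvenanceService`, not the actual warp-core API:

```rust
use std::collections::BTreeMap;

// Illustrative stand-ins for WorldlineRuntime / ProvenanceService.
struct Runtime {
    worldlines: BTreeMap<u64, &'static str>,
}

struct Provenance {
    registered: BTreeMap<u64, &'static str>,
}

impl Provenance {
    fn new() -> Self {
        Self { registered: BTreeMap::new() }
    }

    fn register_worldline(&mut self, id: u64, state: &'static str) {
        self.registered.insert(id, state);
    }
}

/// Renamed from `mirrored_provenance`: the name now states that this only
/// registers worldlines and does not copy committed entries, parents, or
/// recorded outputs.
fn register_worldlines_only(runtime: &Runtime) -> Provenance {
    let mut provenance = Provenance::new();
    for (worldline_id, state) in &runtime.worldlines {
        provenance.register_worldline(*worldline_id, state);
    }
    provenance
}
```

The rename is the cheap fix; a faithful mirror would additionally have to walk the runtime's committed entries and parent links, which is why the current name overstates the contract.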

Comment on lines +10 to +12
- **Companion ADRs in this change set:**
- `docs/adr/ADR-0010-observational-seek-and-administrative-rewind.md`
- `docs/adr/ADR-0011-explicit-observation-contract.md`
⚠️ Potential issue | 🟡 Minor

Keep the plan’s file-layout guidance in sync with the new ADR list.

This header now says ADR-0011 is part of the change set, but the Recommended File Layout section below still lists only ADR-0010 and adr-exceptions.md. That leaves the plan internally inconsistent about which companion ADRs belong in this phase.

As per coding guidelines, docs/**/*.md: Echo is a deterministic simulation engine. Documentation accuracy matters — especially anything touching determinism guarantees, hash stability, or canonical ordering. Flag factual errors and stale cross-references.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/plans/adr-0008-and-0009.md` around lines 10 - 12, The "Companion ADRs in
this change set" header lists ADR-0011 but the "Recommended File Layout" section
only references ADR-0010 and adr-exceptions.md, creating an internal
inconsistency; update the plan so the two places match by either adding
`docs/adr/ADR-0011-explicit-observation-contract.md` to the Recommended File
Layout or removing ADR-0011 from the companion ADR list, and ensure the
"Recommended File Layout" and the header (and any mentions of
`ADR-0010-observational-seek-and-administrative-rewind.md`,
`ADR-0011-explicit-observation-contract.md`, and `adr-exceptions.md`) are
consistent and deterministic in ordering.

Comment on lines +614 to 615
pub fn observe(request: ObservationRequest) -> Result<ObservationArtifact, ObservationError>;
```
⚠️ Potential issue | 🟡 Minor

Document the shipped ObservationService::observe(...) entrypoint, not a free function.

The public surface in crates/warp-core/src/lib.rs re-exports ObservationService, and the adapter code in crates/warp-wasm/src/warp_kernel.rs now calls ObservationService::observe(&self.runtime, &self.provenance, &self.engine, request). Keeping Phase 5’s contract as pub fn observe(request) points readers at an API that does not exist.

Suggested doc fix
-pub fn observe(request: ObservationRequest) -> Result<ObservationArtifact, ObservationError>;
+pub struct ObservationService;
+
+impl ObservationService {
+    pub fn observe(
+        runtime: &WorldlineRuntime,
+        provenance: &ProvenanceService,
+        engine: &Engine,
+        request: ObservationRequest,
+    ) -> Result<ObservationArtifact, ObservationError>;
+}
As per coding guidelines, `docs/**/*.md`: Echo is a deterministic simulation engine. Documentation accuracy matters — especially anything touching determinism guarantees, hash stability, or canonical ordering. Flag factual errors and stale cross-references.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/plans/adr-0008-and-0009.md` around lines 614 - 615, The docs currently
document a non-existent free function signature `pub fn observe(request:
ObservationRequest) -> Result<ObservationArtifact, ObservationError>`; update
the documentation to document the actual entrypoint
`ObservationService::observe(&runtime, &provenance, &engine, request)` (i.e.,
the method on ObservationService that accepts runtime, provenance, engine, and
the ObservationRequest and returns Result<ObservationArtifact,
ObservationError>), and adjust examples and any cross-references to use
`ObservationService::observe` and its correct parameter order and types so
readers are pointed at the real public API.

Comment on lines 235 to 238
These are honestly reported as `NOT_SUPPORTED` (error code 5):

- `execute_query`: Engine query dispatcher not yet built.
- `execute_query`: Lowered through `observe(...)`, but real query evaluation is not yet built.
- `render_snapshot`: Snapshot-to-ViewOps projection not yet built.
⚠️ Potential issue | 🟠 Major

Fix contradictory error semantics in “Not Yet Implemented.”

This section declares error code 5 for all listed items, but execute_query is documented elsewhere in this file as returning 11 (UNSUPPORTED_QUERY). Keep this section endpoint-specific to avoid contract ambiguity.

🛠️ Suggested doc fix
-These are honestly reported as `NOT_SUPPORTED` (error code 5):
+These are honestly reported as transitional, endpoint-specific errors:
 
-- `execute_query`: Lowered through `observe(...)`, but real query evaluation is not yet built.
+- `execute_query`: Lowered through `observe(...)`; returns `UNSUPPORTED_QUERY` (error code 11) until real query evaluation lands.
 - `render_snapshot`: Snapshot-to-ViewOps projection not yet built.
+  Returns `NOT_SUPPORTED` (error code 5).

As per coding guidelines "docs/**/*.md: Documentation accuracy matters — especially anything touching determinism guarantees, hash stability, or canonical ordering. Flag factual errors and stale cross-references."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/spec/SPEC-0009-wasm-abi-v1.md` around lines 235 - 238, The “Not Yet
Implemented” section currently lumps endpoints under error code 5
(NOT_SUPPORTED) but `execute_query` is specified elsewhere to return 11
(UNSUPPORTED_QUERY); update this section to be endpoint-specific: change the
`execute_query` entry to document error code 11 (UNSUPPORTED_QUERY) and keep
`render_snapshot` (and any other endpoints here) as error code 5
(NOT_SUPPORTED), ensuring the docs reference the exact error symbol names
(`execute_query`, `render_snapshot`, error code 5/NOT_SUPPORTED, error code
11/UNSUPPORTED_QUERY) so the contract is unambiguous.
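To make the endpoint-specific contract concrete, here is a minimal sketch of the split; the enum and `error_for_endpoint` helper are illustrative stand-ins, with only the numeric values (5 and 11) taken from the spec text above.

```rust
/// Illustrative mirror of the two ABI error codes named in this section.
#[derive(Debug, Clone, Copy, PartialEq)]
#[repr(u32)]
enum AbiError {
    NotSupported = 5,
    UnsupportedQuery = 11,
}

/// Hypothetical helper showing the endpoint-specific mapping the review
/// asks the doc to state unambiguously.
fn error_for_endpoint(endpoint: &str) -> AbiError {
    match endpoint {
        // execute_query lowers through observe(...) and reports code 11.
        "execute_query" => AbiError::UnsupportedQuery,
        // render_snapshot (and other unbuilt surfaces) report code 5.
        _ => AbiError::NotSupported,
    }
}
```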

@flyingrobots

@coderabbitai review update: fixed the Phase 4/5 provenance integrity issues in d30d6d5.

Included in this commit:

  • append_local_commit(...) now rejects non-LocalCommit event kinds
  • parent refs must resolve to stored history and match the referenced commit hash exactly
  • BTR validation now reports payload/store divergence as EntryMismatch instead of overloading boundary-hash errors
  • ADR-0011 status now matches the implemented observation contract

Local verification passed for the touched surface:

  • cargo test -p warp-core provenance_store::tests --lib
  • cargo clippy -p warp-core --lib -- -D warnings -D missing_docs
  • cargo fmt --all -- --check
  • git diff --check

I am following up with a separate local-dev throughput pass next, focused on faster multi-lane verification for iteration speed.

@coderabbitai

coderabbitai bot commented Mar 15, 2026

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
git show d30d6d5 -- crates/warp-core/src/provenance_store.rs docs/adr/ADR-0011-explicit-observation-contract.md

Length of output: 29


🏁 Script executed:

#!/bin/bash
git fetch origin feat/adr-0008-0009-phase-4 2>/dev/null
git log --oneline -10 origin/feat/adr-0008-0009-phase-4

Length of output: 213


🏁 Script executed:

#!/bin/bash
git remote -v
git branch -a
git log --oneline -15

Length of output: 1025


🏁 Script executed:

#!/bin/bash
git fetch origin pull/302/head:pr302
git log --oneline -10 pr302
git show pr302 --stat

Length of output: 977


🏁 Script executed:

#!/bin/bash
git fetch origin pull/302/head:pr302_latest --force
git log --oneline -5 pr302_latest
git show pr302_latest --stat

Length of output: 732


@flyingrobots — commit d30d6d5 isn't on the PR branch yet (current tip is dcb5f65). Once it's pushed I'll re-verify, but based on the described changes:

append_local_commit event-kind guard — rejecting non-LocalCommit variants at the boundary is the right fix. This closes the invariant hole where a caller could accidentally append a Checkpoint or Fork event kind and corrupt the history stream silently.

Parent ref resolution + commit hash validation — requiring parent refs to resolve against stored history and match the referenced commit hash exactly is the correct level of strictness here. A ref that resolves to a different hash is a silent lie in the provenance chain; good catch.

EntryMismatch vs. boundary-hash overloading in BTR validation — overloading boundary-hash errors for payload/store divergence was genuinely misleading. A distinct EntryMismatch variant makes the error actionable without requiring callers to reverse-engineer which condition actually fired.

ADR-0011 status alignment — should be a no-brainer, but mismatched ADR statuses are a real maintenance hazard. Good to close it out.

Push d30d6d5 when ready and I'll do a pass over the actual diff to confirm the implementation matches the stated contract. Specifically I'll want to verify:

  • The event-kind guard fires before any mutation (not after a partial write).
  • Parent ref resolution uses == on the stored commit_hash, not a prefix or hash-prefix comparison.
  • EntryMismatch carries enough context (worldline, tick, expected vs. actual hashes) to be useful in a BTR failure log.
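The second verification point (exact `==` on the stored commit hash, never a prefix comparison) can be sketched as below; the function name and fixed-width hash type are assumptions for illustration, not the actual provenance_store API.

```rust
// Illustrative check: a parent ref must match the stored commit hash exactly.
fn parent_ref_matches(stored_commit_hash: &[u8; 32], referenced_hash: &[u8; 32]) -> bool {
    // Full-width equality. A prefix comparison such as
    // stored_commit_hash[..8] == referenced_hash[..8] would accept a
    // silent lie in the provenance chain.
    stored_commit_hash == referenced_hash
}
```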

🧠 Learnings used
Learnt from: flyingrobots
Repo: flyingrobots/echo PR: 298
File: crates/warp-core/src/provenance_store.rs:387-398
Timestamp: 2026-03-09T07:57:39.706Z
Learning: In flyingrobots/echo, `LocalProvenanceStore::atom_history()` in `crates/warp-core/src/provenance_store.rs` intentionally uses an O(history_length) backward scan with `out_slots` filtering and early termination at creation. It is a diagnostic "Show Me Why" TTD query (not a per-tick simulation hot path), so the overhead is acceptable. A per-slot reverse index was explicitly deferred to profiling. Do not flag this scan as a performance issue in future reviews.

Learnt from: flyingrobots
Repo: flyingrobots/echo PR: 0
File: :0-0
Timestamp: 2026-03-12T20:33:08.438Z
Learning: In flyingrobots/echo `crates/warp-core/src/worldline_state.rs`, `committed_ingress` is an explicit Phase 3 process-lifetime dedupe ledger. Bounding or persisting it would change duplicate-suppression semantics and the storage model. The contract is documented inline in `worldline_state.rs`. Do not flag unbounded growth of `committed_ingress` as a bug in Phase 3 reviews; it is deferred follow-up design work.

Learnt from: flyingrobots
Repo: flyingrobots/echo PR: 0
File: :0-0
Timestamp: 2026-03-12T09:30:36.476Z
Learning: In flyingrobots/echo `crates/warp-core/src/head_inbox.rs`, `HeadInbox::ingest()` enforces the canonical content hash (`ingress_id`) in **release builds** (not just `debug_assert`). `IngressEnvelope` fields are private; the hash invariant is upheld at the boundary in all build configurations. Do not flag this as insufficient in future reviews.

Learnt from: flyingrobots
Repo: flyingrobots/echo PR: 0
File: :0-0
Timestamp: 2026-03-12T09:30:36.476Z
Learning: In flyingrobots/echo `crates/warp-core/src/head.rs`, `HeadId` is opaque (no public inner field); `heads_for_worldline()` uses BTreeMap range bounds internally via the opaque key. `WriterHead` derives pause state from `PlaybackMode`; `unpause(PlaybackMode::Paused)` fails deterministically in all build configurations (not just debug). Do not flag these as design gaps in future reviews.

Learnt from: flyingrobots
Repo: flyingrobots/echo PR: 0
File: :0-0
Timestamp: 2026-03-12T20:33:08.438Z
Learning: In flyingrobots/echo `crates/warp-wasm/src/warp_kernel.rs`, `WarpKernel` is single-writer in the Phase 3 WASM ABI: the kernel only exposes the default worldline and default writer head, so one call to `SchedulerCoordinator::super_tick()` produces at most one committed `StepRecord`. This assumption is documented with an inline note in `warp_kernel.rs`. Do not flag the lack of a `budget` parameter or multi-writer scheduling as a gap in Phase 3 reviews.

Learnt from: flyingrobots
Repo: flyingrobots/echo PR: 0
File: :0-0
Timestamp: 2026-03-12T09:30:36.476Z
Learning: In flyingrobots/echo `crates/warp-core/src/coordinator.rs`, `SchedulerCoordinator::peek_order()` derives runnable order from the head registry directly. Tick counters use checked arithmetic and fail deterministically on overflow. Do not flag these as stale-cache or silent-overflow risks in future reviews.

Learnt from: flyingrobots
Repo: flyingrobots/echo PR: 288
File: CHANGELOG.md:10-17
Timestamp: 2026-03-03T22:59:26.903Z
Learning: When verifying CHANGELOG claims about "fixing stale references" in the flyingrobots/echo repo, always inspect the relevant git commit diff (e.g., `git show <sha>`) rather than grepping the current working tree. The references will already be gone post-fix, which can produce false-positive "never existed" conclusions if only the current state is checked.

Learnt from: flyingrobots
Repo: flyingrobots/echo PR: 300
File: crates/warp-core/src/coordinator.rs:36-42
Timestamp: 2026-03-11T04:29:43.838Z
Learning: In flyingrobots/echo, `super_tick()` in `crates/warp-core/src/coordinator.rs` intentionally uses `expect()` to panic on invariant violations (runnable head referencing a worldline not in the registry). This is a programmer error, not a recoverable runtime condition. The panic is documented in the `Panics` section (lines ~103-106). Do not flag this as a violation of the no-unwrap/expect rule in future reviews.

Learnt from: flyingrobots
Repo: flyingrobots/echo PR: 300
File: crates/warp-core/src/head.rs:0-0
Timestamp: 2026-03-11T04:29:30.253Z
Learning: In flyingrobots/echo (crates/warp-core and sibling crates), `#[allow(clippy::unwrap_used)]` on `#[cfg(test)]` modules is the established, codebase-wide convention. Test code uses `.unwrap()` extensively for brevity and crash-on-failure semantics. Do not flag `clippy::unwrap_used` suppressions on test modules in this codebase in future reviews.

Learnt from: flyingrobots
Repo: flyingrobots/echo PR: 0
File: :0-0
Timestamp: 2026-01-24T23:22:15.388Z
Learning: The user (flyingrobots) prefers extremely detailed, pedantic code reviews with no mercy — "GOD-TIER NIGHTMARE CODE REVIEWER" style. They want critical scrutiny, exhaustive analysis, identification of landmines, design questions, and actionable demands on every review.
✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@flyingrobots

@coderabbitai local-dev throughput update in 7869932.

This change makes local verification explicitly lane-based instead of one giant serialized cargo pass:

  • scripts/verify-local.sh now fans the local full gate out into isolated CARGO_TARGET_DIR lanes for clippy, tests, rustdoc, and guard checks
  • staged/reduced local Rust checks now use a narrower fast-path clippy surface for heavy crates like warp-core
  • make verify-full-sequential is available as a fallback/debug path
  • hook regressions now include fake-cargo coverage proving lane fan-out and the narrowed fast warp-core clippy scope

The first real push after the change completed the local full gate in 1m11s on this branch, versus the earlier multi-minute monolithic path that serialized on one target dir and dragged warp-core --all-targets through local hooks.

Branch head: 7869932.

@coderabbitai

coderabbitai bot commented Mar 15, 2026


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
git fetch origin pull/302/head:pr302_v2 --force 2>/dev/null
git log --oneline -8 pr302_v2
git show 7869932 --stat 2>/dev/null || git log --oneline pr302_v2 | head -5

Length of output: 999


🏁 Script executed:

#!/bin/bash
git show 7869932 -- scripts/verify-local.sh

Length of output: 12093


🏁 Script executed:

#!/bin/bash
git show 7869932 -- Makefile tests/hooks/test_verify_local.sh scripts/hooks/README.md

Length of output: 8489


@flyingrobots — diff reviewed. The architecture is clean and the fake-cargo regression coverage is solid. A few issues worth flagging:


1. run_full_lane_clippy_core skips --tests on determinism-critical crates

run_full_lane_clippy_core() {
  lane_cargo "full-clippy-core" clippy "${args[@]}" --lib -- -D warnings -D missing_docs
}

warp-core, warp-wasm, echo-wasm-abi, and warp-geom get --lib only in the full local gate. A lint warning introduced in a test file in any of these crates will pass the full local gate and fail CI (--all-targets + -D warnings). The README explicitly calls this out as intentional ("heavy all-target clippy coverage stays in CI"), but the gap is real and will generate CI noise that can't be reproduced locally without dropping to make verify-full-sequential and manually patching the scope. Whether that's acceptable is a workflow call, but it's the sharpest edge in this change.

2. No SIGINT/SIGTERM trap — logdir leaks on Ctrl+C

logdir is created with mktemp -d and cleaned up at the end of run_parallel_lanes, but there's no trap:

logdir="$(mktemp -d "${TMPDIR:-/tmp}/verify-local-${suite}.XXXXXX")"
# ... background pids, wait loop ...
rm -rf "$logdir"  # only reached on clean exit

If the developer hits Ctrl+C during a parallel run, the temp dir survives under /tmp/verify-local-full.*. Add a trap before the background launches:

trap 'rm -rf "$logdir"' EXIT

That fires on EXIT regardless of signal. The explicit rm -rf "$logdir" calls can stay or be removed; they'll be no-ops after EXIT fires.
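The trap pattern above, in a runnable miniature (the lane runner itself is elided; the directory name is illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

logdir="$(mktemp -d "${TMPDIR:-/tmp}/verify-local-demo.XXXXXX")"
# EXIT fires on normal exit and after SIGINT/SIGTERM terminate the script,
# so the temp dir no longer leaks on Ctrl+C.
trap 'rm -rf "$logdir"' EXIT

echo "lane output" > "$logdir/lane.log"
cat "$logdir/lane.log"
```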

3. Failed lane with no output produces a silent tombstone

if [[ ! -s "$logfile" ]]; then
  continue
fi

A lane that exits non-zero without printing anything (e.g., cargo fmt finds nothing to report and exits non-zero on a bad internal state, or run_pattern_guards succeeds silently) will only produce [verify-local] lane failed: <name> with no further context. Probably fine in practice since cargo is verbose, but worth noting for future lane additions that might be deliberately quiet.

4. VERIFY_LANE_MODE undocumented behavior for arbitrary values

should_run_parallel_lanes() {
  [[ "$VERIFY_LANE_MODE" == "parallel" ]]
}

Any value other than "parallel" silently falls through to sequential. VERIFY_LANE_MODE=wat ./scripts/verify-local.sh full will behave identically to verify-full-sequential with no warning. Low risk since the Makefile exposes only two named paths, but documenting the two valid values in the variable block at the top of the script would make the contract explicit.
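A minimal sketch of that explicit contract, with the two valid values spelled out; the exit code and wording are illustrative, not what verify-local.sh does today:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Valid values: "parallel" | "sequential" (hypothetical contract for the
# variable block at the top of verify-local.sh).
VERIFY_LANE_MODE="${VERIFY_LANE_MODE:-parallel}"

case "$VERIFY_LANE_MODE" in
  parallel|sequential) ;;
  *)
    # Fail loudly instead of silently falling through to sequential.
    echo "verify-local: unknown VERIFY_LANE_MODE='$VERIFY_LANE_MODE'" \
         "(expected 'parallel' or 'sequential')" >&2
    exit 64
    ;;
esac

echo "lane mode: $VERIFY_LANE_MODE"
```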


What's solid:

  • Lane isolation via CARGO_TARGET_DIR is the right fix — target-dir lock contention was the actual bottleneck.
  • run_full_lane_fmt and run_full_lane_guards correctly don't use lane_cargo (no artifacts, no contention).
  • Fake-cargo regression tests are genuinely useful — verifying lane count (9), target dir isolation, and the narrowed warp-core clippy scope in a single test is the right level of coverage for tooling infrastructure.
  • Sequential fallback path via make verify-full-sequential and VERIFY_LANE_MODE=sequential covers the "debug the lane runner" case cleanly.
  • run_parallel_lanes wait-loop with set +e + failure accumulation correctly collects all lane results before reporting.

Items 2 and 4 are cheap fixes. Item 1 is the intentional trade-off worth being explicit about in a comment near the FULL_LOCAL_CLIPPY_CORE_PACKAGES definition.
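The lane-isolation point in the first "What's solid" bullet can be sketched as below; lane names and directory layout are illustrative, not verify-local.sh's actual ones:

```shell
#!/usr/bin/env bash
set -euo pipefail

base="$(mktemp -d "${TMPDIR:-/tmp}/lanes.XXXXXX")"
trap 'rm -rf "$base"' EXIT

# Each lane gets its own CARGO_TARGET_DIR, so concurrent cargo invocations
# never contend on a single target-dir lock.
lane_target_dir() {
  printf '%s/target-%s\n' "$base" "$1"
}

for lane in full-clippy-core full-tests full-rustdoc; do
  dir="$(lane_target_dir "$lane")"
  mkdir -p "$dir"
  echo "lane $lane -> CARGO_TARGET_DIR=$dir"
done
```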


🧠 Learnings used
Learnt from: CR
Repo: flyingrobots/echo PR: 0
File: docs/archive/AGENTS.md:0-0
Timestamp: 2026-03-14T07:42:53.357Z
Learning: Applies to docs/archive/crates/{warp-core,echo-wasm-abi,echo-scene-port}/**/*.rs : Determinism-critical crates (warp-core, echo-wasm-abi, echo-scene-port) are compiled with RUSTFLAGS="-Dwarnings"; unused imports, dead code, or silenced lints are treated as build failures

Learnt from: flyingrobots
Repo: flyingrobots/echo PR: 283
File: .github/workflows/det-gates.yml:28-34
Timestamp: 2026-02-15T20:20:31.833Z
Learning: In the det-gates.yml workflow for the flyingrobots/echo repository, the push-to-main path uses `HEAD~1..HEAD` for change detection. The primary gate enforcement happens on PRs; main pushes serve as a safety net only. The workflow assumes squash-merge to main, where `HEAD~1` correctly captures all changes in the squash commit.

Learnt from: CR
Repo: flyingrobots/echo PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-07T19:39:32.763Z
Learning: Run cargo clippy --all-targets -- -D missing_docs and cargo test before every PR; CI will expect zero-warning, fully documented surface

Learnt from: CR
Repo: flyingrobots/echo PR: 0
File: docs/archive/AGENTS.md:0-0
Timestamp: 2026-03-14T07:42:53.357Z
Learning: Applies to docs/archive/crates/**/*.rs : Tests and benchmarks are mandatory for runtime changes once the harness exists

Learnt from: CR
Repo: flyingrobots/echo PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-07T19:39:32.763Z
Learning: Applies to crates/{warp-core,echo-wasm-abi,echo-scene-port}/**/*.rs : All determinism-critical crates (warp-core, echo-wasm-abi, echo-scene-port) must be compiled with RUSTFLAGS="-Dwarnings"; unused imports, dead code, or silenced lints are treated as build failures



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/warp-core/src/provenance_store.rs`:
- Around line 758-769: The fork implementation currently copies
source_history.entries[..end_idx] as-is into the new WorldlineHistory which
preserves source-owned metadata (worldline_id, head_key/tip refs, parent refs)
causing entry(new_id, 0), tip_ref(new_id) and BtrPayload::validate() to fail;
fix by materializing a proper target-owned history in fork(): iterate the
entries being copied, clone each ProvenanceEntry but remap any worldline-local
metadata to the new worldline (set worldline_id -> new_id, rewrite head_key/tip
refs to point at the corresponding entry IDs in the new sequence, and remap
parent refs to the new entry IDs), update checkpoints if they reference old
ticks/refs, then insert that remapped Vec into WorldlineHistory.entries before
calling self.worldlines.insert(new_id, new_history) so build_btr(new_id, ..) and
validate succeed.
- Around line 347-349: The tick arithmetic uses unchecked additions and must use
checked_add to avoid release/debug divergence: change end_tick_exclusive() to
return a Result<u64, BtrError> (or otherwise propagate) and compute
self.start_tick.checked_add(self.entries.len() as
u64).ok_or(BtrError::TickOverflow) so overflow is reported; likewise replace all
unchecked increments of expected_tick (e.g., expected_tick += 1) with
expected_tick = expected_tick.checked_add(1).ok_or(BtrError::TickOverflow) and
propagate the error through the surrounding function(s) so overflow yields
BtrError::TickOverflow rather than wrapping or panicking.

In `@docs/adr/ADR-0011-explicit-observation-contract.md`:
- Around line 41-45: The ADR's canonical signature is stale: replace the free
function form "observe(request: ObservationRequest) ->
Result<ObservationArtifact, ObservationError>" with the actual implementation
signature used in the codebase — e.g. document the method form
ObservationService::observe(runtime, provenance, engine, request) and list the
required parameters (runtime, provenance, engine) alongside the request and
return types (ObservationArtifact, ObservationError) so the ADR reflects the
immutable dependency boundary implied by ObservationService::observe and the
types ObservationRequest/ObservationArtifact/ObservationError.

In `@scripts/verify-local.sh`:
- Around line 495-497: The helper should_run_parallel_lanes currently treats any
non-"parallel" value as sequential; change it to validate VERIFY_LANE_MODE
explicitly and fail on unknown values: in should_run_parallel_lanes check for
the allowed values (explicitly "parallel" -> return true, "sequential" or
"serial" as your canonical sequential string -> return false), and for any other
value emit an error message about the invalid VERIFY_LANE_MODE and exit non‑zero
instead of silently falling through; update any callers/tests that expect silent
fallback if needed.
- Around line 22-23: The current stamping uses a single "full" stamp under
VERIFY_LANE_ROOT regardless of VERIFY_LANE_MODE, so a sequential fallback can
create the same stamp and short-circuit later runs; change stamp paths and any
checks that reference the "full" stamp to include VERIFY_LANE_MODE (e.g.,
incorporate "${VERIFY_LANE_MODE}" into the stamp filename or directory used by
the functions that create/check the full stamp), and update all other
occurrences of the same stamp logic (the block around the shown variables and
the duplicated logic referenced at the other location) to use the mode-keyed
stamp or a separate sequential-specific suite name.
- Around line 430-437: The nextest branch hardcodes "--lib --tests" which breaks
bin-only crates; instead detect actual package targets via the manifest or cargo
metadata and build test/clippy flags accordingly. Update the use_nextest branch
in the script to replace the hardcoded "--lib --tests" with the same resolved
args returned by targeted_test_args_for_crate("$crate") (or refactor
targeted_test_args_for_crate to query cargo metadata and return correct flags),
and similarly update clippy_target_args_for_scope() and
crate_supports_lib_target() so they inspect package target types (via cargo
metadata or parsing Cargo.toml) rather than relying on crate name heuristics.
Ensure cargo nextest run and cargo test/clippy invocations use the derived args
array so bin-only crates do not get "--lib".

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: d0e8a682-0168-440b-abd4-3c5f319da8c1

📥 Commits

Reviewing files that changed from the base of the PR and between dcb5f65 and 7869932.

📒 Files selected for processing (7)
  • CHANGELOG.md
  • Makefile
  • crates/warp-core/src/provenance_store.rs
  • docs/adr/ADR-0011-explicit-observation-contract.md
  • scripts/hooks/README.md
  • scripts/verify-local.sh
  • tests/hooks/test_verify_local.sh

Comment on lines +347 to +349
pub fn end_tick_exclusive(&self) -> u64 {
self.start_tick + self.entries.len() as u64
}


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

In Rust, what happens when u64 addition overflows in debug builds versus release builds, and is checked_add the correct way to avoid build-dependent behavior?

💡 Result:

  • u64 overflow in a debug build (cargo build / cargo run): Rust enables runtime integer-overflow checks when debug assertions are on, so a + b panics at runtime if it overflows. This behavior is intentionally tied to debug-assertions/overflow-checking being enabled. [1][2]

  • u64 overflow in a release build (cargo build --release / cargo run --release): those overflow checks are typically not emitted for performance, so the same a + b will usually wrap modulo 2⁶⁴ (“two’s complement wrapping”) instead of panicking. [3][1]

  • Avoiding build-dependent behavior: Yes—using checked_add (and friends) is the standard, correct way to make overflow handling explicit and consistent across build profiles: it computes the sum and returns Some(sum) or None on overflow. [4]
    Alternatives depending on desired semantics: wrapping_add (always wrap), saturating_add (clamp), overflowing_add (get (result, overflowed)), or enabling overflow checks in release via Cargo’s overflow-checks profile setting. [5][1]

Sources
[1] Rust RFC 0560 (integer overflow semantics; checks tied to debug assertions) (rust-lang.github.io)
[2] rustc book: exploit mitigations (overflow checks enabled with debug assertions) (doc.rust-lang.org)
[3] TRPL sample (debug panics; release wraps) (nostarch.com)
[4] u64 primitive docs (checked_add returns Option) (doc.rust-lang.org)
[5] Cargo profiles (overflow-checks setting) (rustwiki.org)


🏁 Script executed:

cat -n crates/warp-core/src/provenance_store.rs | sed -n '345,352p; 372,390p'

Repository: flyingrobots/echo

Length of output: 1101


🏁 Script executed:

cat -n crates/warp-core/src/provenance_store.rs | sed -n '340,395p'

Repository: flyingrobots/echo

Length of output: 2243


🏁 Script executed:

# Check how coordinator.rs handles tick arithmetic to confirm the learning
rg -A 3 -B 1 "peek_order|tick.*checked" crates/warp-core/src/coordinator.rs | head -40

Repository: flyingrobots/echo

Length of output: 744


🏁 Script executed:

# Search for tick arithmetic patterns in coordinator.rs specifically
rg "tick.*\+|expected.*\+" crates/warp-core/src/coordinator.rs -B 2 -A 2

Repository: flyingrobots/echo

Length of output: 43


🏁 Script executed:

# Search for checked_add usage in warp-core to understand the pattern
rg "checked_add|wrapping_add|saturating_add" crates/warp-core/src/

Repository: flyingrobots/echo

Length of output: 2364


🏁 Script executed:

# Check the context of checked_add in coordinator.rs
rg -B 5 -A 5 "checked_add" crates/warp-core/src/coordinator.rs

Repository: flyingrobots/echo

Length of output: 487


🏁 Script executed:

# Check if there's any documentation or comment about tick bounds in provenance_store.rs
rg -B 10 "start_tick|end_tick" crates/warp-core/src/provenance_store.rs | head -60

Repository: flyingrobots/echo

Length of output: 1996


🏁 Script executed:

# Check if there are any guards or constraints on start_tick or entries.len() that prevent overflow
rg -B 3 -A 3 "BtrPayload {|start_tick.*=" crates/warp-core/src/provenance_store.rs | head -50

Repository: flyingrobots/echo

Length of output: 181


🏁 Script executed:

# Check how BtrPayload instances are constructed
rg "BtrPayload\s*\{" crates/warp-core/src/provenance_store.rs -A 5

Repository: flyingrobots/echo

Length of output: 1022


🏁 Script executed:

# Check if there's a Cargo profile setting that ensures overflow-checks in release
cat Cargo.toml | grep -A 10 "\[profile"

Repository: flyingrobots/echo

Length of output: 293


Use checked arithmetic for payload tick math.

start_tick + self.entries.len() as u64 (line 348) and expected_tick += 1 (line 388) both use unchecked arithmetic. In debug builds this panics on overflow; in release builds it wraps silently. The release profile has no overflow-checks setting, so the same code produces build-dependent behavior. Since warp-core is the deterministic kernel, use checked_add and propagate the overflow error (e.g., BtrError::TickOverflow).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/warp-core/src/provenance_store.rs` around lines 347 - 349, The tick
arithmetic uses unchecked additions and must use checked_add to avoid
release/debug divergence: change end_tick_exclusive() to return a Result<u64,
BtrError> (or otherwise propagate) and compute
self.start_tick.checked_add(self.entries.len() as
u64).ok_or(BtrError::TickOverflow) so overflow is reported; likewise replace all
unchecked increments of expected_tick (e.g., expected_tick += 1) with
expected_tick = expected_tick.checked_add(1).ok_or(BtrError::TickOverflow) and
propagate the error through the surrounding function(s) so overflow yields
BtrError::TickOverflow rather than wrapping or panicking.
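
The checked pattern the review asks for can be sketched as follows. This is a minimal illustration, not the real warp-core code: the `BtrPayload` fields and the `BtrError::TickOverflow` variant are simplified stand-ins matching the names the review suggests.

```rust
// Hypothetical error variant mirroring the review's suggested BtrError::TickOverflow.
#[derive(Debug, PartialEq)]
enum BtrError {
    TickOverflow,
}

// Simplified stand-in for the payload; only the fields the tick math touches.
struct BtrPayload {
    start_tick: u64,
    entries: Vec<()>,
}

impl BtrPayload {
    /// Checked variant: overflow becomes a deterministic error in every build
    /// profile instead of panicking in debug and wrapping in release.
    fn end_tick_exclusive(&self) -> Result<u64, BtrError> {
        self.start_tick
            .checked_add(self.entries.len() as u64)
            .ok_or(BtrError::TickOverflow)
    }
}

fn main() {
    let ok = BtrPayload { start_tick: 10, entries: vec![(), ()] };
    assert_eq!(ok.end_tick_exclusive(), Ok(12));

    let overflow = BtrPayload { start_tick: u64::MAX, entries: vec![()] };
    assert_eq!(overflow.end_tick_exclusive(), Err(BtrError::TickOverflow));
}
```

The same `checked_add(1).ok_or(BtrError::TickOverflow)?` shape applies to the `expected_tick` increment inside the validation loop.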

Comment on lines 758 to 769
let new_history = WorldlineHistory {
u0_ref: source_history.u0_ref,
patches: source_history.patches[..end_idx].to_vec(),
expected: source_history.expected[..end_idx].to_vec(),
outputs: source_history.outputs[..end_idx].to_vec(),
atom_writes: source_history.atom_writes[..end_idx].to_vec(),
initial_boundary_hash: source_history.initial_boundary_hash,
entries: source_history.entries[..end_idx].to_vec(),
checkpoints: source_history
.checkpoints
.iter()
.filter(|c| c.tick <= fork_tick)
.copied()
.collect(),
};

self.worldlines.insert(new_id, new_history);


⚠️ Potential issue | 🔴 Critical

Forking raw `ProvenanceEntry` records stores source-owned commits under the new worldline.

source_history.entries[..end_idx].to_vec() preserves the source worldline_id, head_key, event_kind, and parent refs, then inserts those records under new_id. After fork(), entry(new_id, 0) still reports the source worldline, tip_ref(new_id) points back at the source history, and build_btr(new_id, ..) fails BtrPayload::validate() because the payload entries do not belong to new_id. Either remap all worldline-local metadata consistently when materializing the copy, or model fork ancestry without storing source entries as target-owned history.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/warp-core/src/provenance_store.rs` around lines 758 - 769, The fork
implementation currently copies source_history.entries[..end_idx] as-is into the
new WorldlineHistory which preserves source-owned metadata (worldline_id,
head_key/tip refs, parent refs) causing entry(new_id, 0), tip_ref(new_id) and
BtrPayload::validate() to fail; fix by materializing a proper target-owned
history in fork(): iterate the entries being copied, clone each ProvenanceEntry
but remap any worldline-local metadata to the new worldline (set worldline_id ->
new_id, rewrite head_key/tip refs to point at the corresponding entry IDs in the
new sequence, and remap parent refs to the new entry IDs), update checkpoints if
they reference old ticks/refs, then insert that remapped Vec into
WorldlineHistory.entries before calling self.worldlines.insert(new_id,
new_history) so build_btr(new_id, ..) and validate succeed.
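
The remap step the fix describes can be sketched like this. The types here are illustrative stand-ins, not the real warp-core definitions: the actual `ProvenanceEntry` carries more worldline-local metadata (head_key/tip refs, event kinds) that would need the same treatment.

```rust
// Simplified stand-in for the provenance entry; field names are illustrative.
#[derive(Clone, Debug, PartialEq)]
struct ProvenanceEntry {
    worldline_id: u64,
    tick: u64,
    parent: Option<usize>, // index of the parent entry within the same history
}

/// Materialize a target-owned copy of the forked prefix: every entry is
/// restamped with the new worldline id, while history-local parent indices
/// remain valid because the prefix is copied in order.
fn remap_for_fork(
    source: &[ProvenanceEntry],
    end_idx: usize,
    new_id: u64,
) -> Vec<ProvenanceEntry> {
    source[..end_idx]
        .iter()
        .map(|e| {
            let mut owned = e.clone();
            owned.worldline_id = new_id; // target-owned metadata
            owned
        })
        .collect()
}

fn main() {
    let src = vec![
        ProvenanceEntry { worldline_id: 1, tick: 0, parent: None },
        ProvenanceEntry { worldline_id: 1, tick: 1, parent: Some(0) },
    ];
    let forked = remap_for_fork(&src, 2, 7);
    assert!(forked.iter().all(|e| e.worldline_id == 7));
    assert_eq!(forked[1].parent, Some(0)); // parent indices stay history-local
}
```

In the real `fork()`, the remapped `Vec` would replace the raw `source_history.entries[..end_idx].to_vec()` before `self.worldlines.insert(new_id, new_history)`, so that `entry(new_id, 0)`, `tip_ref(new_id)`, and `BtrPayload::validate()` all see target-owned records.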

Comment on lines +41 to +45
The canonical internal entrypoint is:

```rust
observe(request: ObservationRequest) -> Result<ObservationArtifact, ObservationError>
```


⚠️ Potential issue | 🟡 Minor

The normative observe(...) signature is already stale.

The implementation is ObservationService::observe(runtime, provenance, engine, request), not a free observe(request) function. Documenting the shortened form here hides the immutable dependency boundary that section 3 makes normative, so the ADR no longer matches the shipped contract.

📝 Suggested doc fix
-observe(request: ObservationRequest) -> Result<ObservationArtifact, ObservationError>
+ObservationService::observe(
+    runtime: &WorldlineRuntime,
+    provenance: &ProvenanceService,
+    engine: &Engine,
+    request: ObservationRequest,
+) -> Result<ObservationArtifact, ObservationError>

As per coding guidelines, docs/**/*.md: "Documentation accuracy matters — especially anything touching determinism guarantees, hash stability, or canonical ordering. Flag factual errors and stale cross-references."

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
The canonical internal entrypoint is:
```rust
observe(request: ObservationRequest) -> Result<ObservationArtifact, ObservationError>
```
The canonical internal entrypoint is:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/adr/ADR-0011-explicit-observation-contract.md` around lines 41 - 45, The
ADR's canonical signature is stale: replace the free function form
"observe(request: ObservationRequest) -> Result<ObservationArtifact,
ObservationError>" with the actual implementation signature used in the codebase
— e.g. document the method form ObservationService::observe(runtime, provenance,
engine, request) and list the required parameters (runtime, provenance, engine)
alongside the request and return types (ObservationArtifact, ObservationError)
so the ADR reflects the immutable dependency boundary implied by
ObservationService::observe and the types
ObservationRequest/ObservationArtifact/ObservationError.

Comment on lines +495 to +497
should_run_parallel_lanes() {
[[ "$VERIFY_LANE_MODE" == "parallel" ]]
}


⚠️ Potential issue | 🟡 Minor

Reject unknown VERIFY_LANE_MODE values instead of silently downgrading.

Anything except the exact string `parallel` falls through to sequential execution. A typo in this new knob changes which verifier runs yet still produces a normal full result. Fail closed on unknown values.

Possible fix
 should_run_parallel_lanes() {
-  [[ "$VERIFY_LANE_MODE" == "parallel" ]]
+  case "$VERIFY_LANE_MODE" in
+    parallel) return 0 ;;
+    sequential) return 1 ;;
+    *)
+      echo "[verify-local] invalid VERIFY_LANE_MODE: $VERIFY_LANE_MODE" >&2
+      exit 1
+      ;;
+  esac
 }

Suggested change
should_run_parallel_lanes() {
[[ "$VERIFY_LANE_MODE" == "parallel" ]]
}
should_run_parallel_lanes() {
case "$VERIFY_LANE_MODE" in
parallel) return 0 ;;
sequential) return 1 ;;
*)
echo "[verify-local] invalid VERIFY_LANE_MODE: $VERIFY_LANE_MODE" >&2
exit 1
;;
esac
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/verify-local.sh` around lines 495 - 497, The helper
should_run_parallel_lanes currently treats any non-"parallel" value as
sequential; change it to validate VERIFY_LANE_MODE explicitly and fail on
unknown values: in should_run_parallel_lanes check for the allowed values
(explicitly "parallel" -> return true, "sequential" or "serial" as your
canonical sequential string -> return false), and for any other value emit an
error message about the invalid VERIFY_LANE_MODE and exit non‑zero instead of
silently falling through; update any callers/tests that expect silent fallback
if needed.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tests/hooks/test_verify_local.sh (1)

320-324: ⚠️ Potential issue | 🟠 Major

Coverage accounting drops xtask, so this test cannot catch xtask-scope regressions.

Line 320-Line 324 only harvests FULL_CRITICAL_PREFIXES entries starting with crates/, which excludes xtask/. That can keep missing_build/missing_clippy empty even when xtask lane selection is broken.

Proposed fix
 critical_crates = {
     prefix[len("crates/"):-1]
     for prefix in parse_array("FULL_CRITICAL_PREFIXES")
     if prefix.startswith("crates/")
 }
+if "xtask/" in parse_array("FULL_CRITICAL_PREFIXES"):
+    critical_crates.add("xtask")

Also applies to: 333-335

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/hooks/test_verify_local.sh` around lines 320 - 324, critical_crates
currently filters FULL_CRITICAL_PREFIXES to only prefixes starting with
"crates/" which excludes "xtask/" and lets xtask regressions slip; update the
filtering logic in the critical_crates comprehension (and the similar logic at
the second occurrence around lines 333-335) to also include entries that start
with "xtask/" (or otherwise normalize/remove the strict "crates/" requirement),
e.g. accept both "crates/" and "xtask/" prefixes when computing critical_crates
from parse_array("FULL_CRITICAL_PREFIXES") so xtask-scoped entries are harvested
for missing_build/missing_clippy checks.
scripts/verify-local.sh (1)

382-384: ⚠️ Potential issue | 🟠 Major

xtask/ changes are currently downgraded to tooling-only full scope.

Line 382 only extracts crates from crates/<name>/..., so xtask/... never enters FULL_SCOPE_SELECTED_CRATES and Lines 725-730 can route an xtask change to tooling-only, skipping rust clippy/test lanes.

Proposed fix
 list_changed_crates() {
-  printf '%s\n' "$CHANGED_FILES" | sed -n 's#^crates/\([^/]*\)/.*#\1#p' | sort -u
+  {
+    printf '%s\n' "$CHANGED_FILES" | sed -n 's#^crates/\([^/]*\)/.*#\1#p'
+    printf '%s\n' "$CHANGED_FILES" | sed -n 's#^xtask/.*#xtask#p'
+  } | sort -u
 }

Also applies to: 386-394, 721-731

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/verify-local.sh` around lines 382 - 384, The current
list_changed_crates function only extracts names under crates/<name>/..., so
changes under xtask/ never appear in FULL_SCOPE_SELECTED_CRATES and get
misrouted to tooling-only; update list_changed_crates to also detect xtask
changes (e.g., return "xtask" for paths starting with xtask/ or capture either
crates/<name> or xtask as the changed module) so FULL_SCOPE_SELECTED_CRATES
contains xtask when relevant and the downstream routing (the logic referenced by
FULL_SCOPE_SELECTED_CRATES used around the routing lines) will include rust
clippy/test lanes; adjust the same pattern usage in the other occurrences noted
(the similar blocks around where the crate-listing and routing are duplicated).
♻️ Duplicate comments (2)
scripts/verify-local.sh (2)

579-581: ⚠️ Potential issue | 🟡 Minor

Fail closed on invalid VERIFY_LANE_MODE values.

Line 579 treats any non-parallel value as sequential. A typo silently changes execution semantics and still reports a normal full verification path.

Proposed fix
 should_run_parallel_lanes() {
-  [[ "$VERIFY_LANE_MODE" == "parallel" ]]
+  case "$VERIFY_LANE_MODE" in
+    parallel) return 0 ;;
+    sequential) return 1 ;;
+    *)
+      echo "[verify-local] invalid VERIFY_LANE_MODE: $VERIFY_LANE_MODE (expected: parallel|sequential)" >&2
+      exit 1
+      ;;
+  esac
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/verify-local.sh` around lines 579 - 581, The function
should_run_parallel_lanes currently treats any value other than "parallel" as
sequential, which hides typos; add explicit validation of VERIFY_LANE_MODE and
fail closed on invalid values by exiting with a non-zero status and an error
message; update or augment should_run_parallel_lanes (and any init/validation
logic that reads VERIFY_LANE_MODE) to accept only "parallel" or "sequential" (or
empty/default), print a clear error including the invalid value, and exit
non-zero so the script does not silently proceed with the wrong lane mode.

514-517: ⚠️ Potential issue | 🔴 Critical

Resolve target kinds from manifests; current --lib assumptions will break bin-only crates.

Line 515 hardcodes cargo nextest ... --lib --tests, and Lines 640-688 infer lib support with crate != xtask plus default --lib clippy/test args. That still misflags bin-only crates and causes deterministic verifier failures.

Proposed fix
@@
-    if use_nextest; then
-      echo "[verify-local] cargo nextest run -p ${crate} --lib --tests"
-      cargo +"$PINNED" nextest run -p "$crate" --lib --tests
+    if use_nextest; then
+      local -a test_args=()
+      mapfile -t test_args < <(targeted_test_args_for_crate "$crate")
+      echo "[verify-local] cargo nextest run -p ${crate} ${test_args[*]}"
+      cargo +"$PINNED" nextest run -p "$crate" "${test_args[@]}"
@@
 crate_supports_lib_target() {
   local crate="$1"
-  [[ "$crate" != "xtask" ]]
+  local manifest="crates/${crate}/Cargo.toml"
+  [[ -f "crates/${crate}/src/lib.rs" ]] && return 0
+  rg -q '^\[lib\]' "$manifest"
+}
+
+crate_supports_bin_target() {
+  local crate="$1"
+  local manifest="crates/${crate}/Cargo.toml"
+  [[ -f "crates/${crate}/src/main.rs" ]] && return 0
+  rg -q '^\[\[bin\]\]' "$manifest"
 }
@@
-  if [[ "$crate" == "xtask" ]]; then
-    printf '%s\n' "--bins"
-    return
-  fi
-
   if [[ "$scope" == "full" ]]; then
     printf '%s\n' "--all-targets"
     return
   fi
 
-  printf '%s\n' "--lib"
+  if crate_supports_lib_target "$crate"; then
+    printf '%s\n' "--lib"
+  fi
+  if crate_supports_bin_target "$crate"; then
+    printf '%s\n' "--bins"
+  fi
   if ! crate_is_fast_clippy_lib_only "$crate"; then
     printf '%s\n' "--tests"
   fi
 }
@@
-  if [[ "$crate" == "xtask" ]]; then
-    printf '%s\n' "--bins"
-    return
-  fi
-
   if crate_supports_lib_target "$crate"; then
     printf '%s\n' "--lib"
   fi
+  if crate_supports_bin_target "$crate"; then
+    printf '%s\n' "--bins"
+  fi
   printf '%s\n' "--tests"
 }
#!/usr/bin/env bash
set -euo pipefail

echo "[check] crates with no lib target (src/lib.rs or [lib])"
while IFS= read -r manifest; do
  dir="$(dirname "$manifest")"
  crate="$(basename "$dir")"
  has_lib=0
  [[ -f "$dir/src/lib.rs" ]] && has_lib=1
  rg -q '^\[lib\]' "$manifest" && has_lib=1 || true
  if [[ $has_lib -eq 0 ]]; then
    printf '  %s\n' "$crate"
  fi
done < <(fd -a Cargo.toml crates | sort)

echo
echo "[check] current forced --lib call sites"
rg -n 'nextest run -p .*--lib --tests|crate_supports_lib_target|clippy_target_args_for_scope|targeted_test_args_for_crate' scripts/verify-local.sh

Also applies to: 640-688

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/verify-local.sh` around lines 514 - 517, The script currently forces
`--lib` in the `use_nextest` branch (`cargo nextest run -p "$crate" --lib
--tests`) and elsewhere assumes lib support via helpers like
`crate_supports_lib_target`, `clippy_target_args_for_scope`, and
`targeted_test_args_for_crate`; update these call sites to detect whether a
crate has a lib target (inspect the crate's Cargo.toml for a [lib] table or
check for src/lib.rs) and only append `--lib` when a lib target exists; change
the `use_nextest` branch to call `cargo nextest run -p "$crate" ${lib_arg}
--tests` (or equivalent) where `lib_arg` is set to `--lib` only if the detection
indicates a lib target, and update the mentioned helper functions to perform the
same detection so bin-only crates are not passed `--lib`.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@CHANGELOG.md`:
- Around line 34-35: Update the CHANGELOG entry so it accurately reflects that
execute_query is not yet a full one-phase adapter: instead of grouping
`execute_query` with `get_head`, `snapshot_at`, and `drain_view_ops` as "thin
one-phase adapters", clarify that `execute_query(...)` currently lowers through
observation semantics and returns a deterministic UNSUPPORTED_QUERY until full
query support is implemented; edit the sentence mentioning `execute_query` to
explicitly state its temporary behavior and keep `get_head`, `snapshot_at`, and
`drain_view_ops` described as thin one-phase adapters.
- Around line 23-39: Add explicit changelog entries for ADR-0011, the
ttd-browser migration, and Phase 4 hard cut: mention ADR-0011 (the explicit
observation contract) as a documented architectural decision; add a
breaking-change note that ttd-browser was migrated to the entry-based provenance
API; and enumerate removed/deprecated convenience methods (e.g., get_head,
snapshot_at, execute_query, drain_view_ops) and that these are now thin adapters
over ObservationService::observe/WarpKernel observe. Update the top-level
changelog paragraph to call out the breaking API changes and provide migration
guidance references to ADR-0011 and the ttd-browser update.

In `@scripts/verify-local.sh`:
- Around line 583-638: The run_parallel_lanes function is not signal-safe: add a
signal trap inside run_parallel_lanes that catches INT and TERM (and ERR/EXIT if
desired) to kill all background lane PIDs stored in lane_pids, remove the
temporary logdir, and then re-raise or exit; implement a cleanup handler (e.g.,
cleanup_parallel_lanes) referenced by the trap that iterates over lane_pids and
runs kill (ignoring errors) and waits for them, removes "${logdir}" and exits
with non-zero, and ensure the trap is removed/restored (trap - ...) before
normal exit so the handler only runs for interruptions; update references to
lane_pids and logdir in the handler so the temporary logs are always cleaned on
interrupts.

In `@tests/hooks/test_verify_local.sh`:
- Around line 82-176: The test currently only asserts the Makefile target exists
but not that sequential fallback actually avoids parallel fanout; update the
test harness around run_fake_verify to add a case that sets
VERIFY_LANE_MODE=sequential (and KEEP VERIFY_FAKE_CARGO_LOG), runs
./scripts/verify-local.sh "full" (or invokes the verify-full-sequential flow),
and then assert the VERIFY_FAKE_CARGO_LOG shows only a single cargo invocation
(or lacks any parallel lane markers) to prove fanout was bypassed; use the
existing run_fake_verify, VERIFY_FAKE_CARGO_LOG, VERIFY_CHANGED_FILES_FILE and
VERIFY_FORCE variables and check the cargo_log contents rather than merely
testing the Makefile target name.

---

Outside diff comments:
In `@scripts/verify-local.sh`:
- Around line 382-384: The current list_changed_crates function only extracts
names under crates/<name>/..., so changes under xtask/ never appear in
FULL_SCOPE_SELECTED_CRATES and get misrouted to tooling-only; update
list_changed_crates to also detect xtask changes (e.g., return "xtask" for paths
starting with xtask/ or capture either crates/<name> or xtask as the changed
module) so FULL_SCOPE_SELECTED_CRATES contains xtask when relevant and the
downstream routing (the logic referenced by FULL_SCOPE_SELECTED_CRATES used
around the routing lines) will include rust clippy/test lanes; adjust the same
pattern usage in the other occurrences noted (the similar blocks around where
the crate-listing and routing are duplicated).

In `@tests/hooks/test_verify_local.sh`:
- Around line 320-324: critical_crates currently filters FULL_CRITICAL_PREFIXES
to only prefixes starting with "crates/" which excludes "xtask/" and lets xtask
regressions slip; update the filtering logic in the critical_crates
comprehension (and the similar logic at the second occurrence around lines
333-335) to also include entries that start with "xtask/" (or otherwise
normalize/remove the strict "crates/" requirement), e.g. accept both "crates/"
and "xtask/" prefixes when computing critical_crates from
parse_array("FULL_CRITICAL_PREFIXES") so xtask-scoped entries are harvested for
missing_build/missing_clippy checks.

---

Duplicate comments:
In `@scripts/verify-local.sh`:
- Around line 579-581: The function should_run_parallel_lanes currently treats
any value other than "parallel" as sequential, which hides typos; add explicit
validation of VERIFY_LANE_MODE and fail closed on invalid values by exiting with
a non-zero status and an error message; update or augment
should_run_parallel_lanes (and any init/validation logic that reads
VERIFY_LANE_MODE) to accept only "parallel" or "sequential" (or empty/default),
print a clear error including the invalid value, and exit non-zero so the script
does not silently proceed with the wrong lane mode.
- Around line 514-517: The script currently forces `--lib` in the `use_nextest`
branch (`cargo nextest run -p "$crate" --lib --tests`) and elsewhere assumes lib
support via helpers like `crate_supports_lib_target`,
`clippy_target_args_for_scope`, and `targeted_test_args_for_crate`; update these
call sites to detect whether a crate has a lib target (inspect the crate's
Cargo.toml for a [lib] table or check for src/lib.rs) and only append `--lib`
when a lib target exists; change the `use_nextest` branch to call `cargo nextest
run -p "$crate" ${lib_arg} --tests` (or equivalent) where `lib_arg` is set to
`--lib` only if the detection indicates a lib target, and update the mentioned
helper functions to perform the same detection so bin-only crates are not passed
`--lib`.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: e0faf51c-50eb-4871-af5c-d686c2836763

📥 Commits

Reviewing files that changed from the base of the PR and between 7869932 and 755c2bf.

📒 Files selected for processing (4)
  • CHANGELOG.md
  • scripts/hooks/README.md
  • scripts/verify-local.sh
  • tests/hooks/test_verify_local.sh

Comment on lines +23 to +39
### feat(warp-core): close Phase 4 and pivot reads to observe

- **Changed** Phase 4 provenance/BTR work is now the documented substrate
baseline: provenance is entry-based, parent refs are stored explicitly, and
the standalone `ProvenanceService` owns authoritative worldline history.
- **Added** `ObservationService::observe(...)` as the canonical internal read
path with explicit worldline, coordinate, frame, and projection semantics.
- **Added** deterministic observation artifacts and error mapping:
`INVALID_WORLDLINE`, `INVALID_TICK`, `UNSUPPORTED_FRAME_PROJECTION`,
`UNSUPPORTED_QUERY`, and `OBSERVATION_UNAVAILABLE`.
- **Changed** `WarpKernel` and the WASM ABI now expose `observe(...)`, while
`get_head`, `snapshot_at`, `execute_query`, and `drain_view_ops` are thin
one-phase adapters over the observation contract.
- **Changed** `drain_view_ops()` is now legacy adapter/debug behavior only: it
reads recorded truth through `observe(...)` and tracks only adapter-local
drain state instead of mutating runtime-owned materialization state.


⚠️ Potential issue | 🟠 Major

Missing key deliverables from the changelog entry.

The PR objectives explicitly mention several items that are absent from this changelog entry:

  1. ADR-0011: The PR summary states "Added ADR-0011 for the explicit observation contract" — this is a significant architectural decision that should be documented in the changelog.

  2. ttd-browser migration: The PR summary notes "Migrated ttd-browser to the entry-based provenance API after Phase 4 hard cut removed old convenience methods" — this is a breaking change for that crate and should be listed.

  3. Phase 4 hard cut: The phrase "removed old convenience methods" implies breaking API removals, but the changelog doesn't explicitly list what was removed or deprecated.

Proposed additions:

📝 Suggested changelog additions
 ### feat(warp-core): close Phase 4 and pivot reads to observe
 
+- **Added** ADR-0011 documenting the explicit observation contract with
+  worldline, coordinate, frame, and projection semantics.
 - **Changed** Phase 4 provenance/BTR work is now the documented substrate
   baseline: provenance is entry-based, parent refs are stored explicitly, and
   the standalone `ProvenanceService` owns authoritative worldline history.
 - **Added** `ObservationService::observe(...)` as the canonical internal read
   path with explicit worldline, coordinate, frame, and projection semantics.
 - **Added** deterministic observation artifacts and error mapping:
   `INVALID_WORLDLINE`, `INVALID_TICK`, `UNSUPPORTED_FRAME_PROJECTION`,
   `UNSUPPORTED_QUERY`, and `OBSERVATION_UNAVAILABLE`.
 - **Changed** `WarpKernel` and the WASM ABI now expose `observe(...)`, while
   `get_head`, `snapshot_at`, `execute_query`, and `drain_view_ops` are thin
   one-phase adapters over the observation contract.
 - **Changed** `drain_view_ops()` is now legacy adapter/debug behavior only: it
   reads recorded truth through `observe(...)` and tracks only adapter-local
   drain state instead of mutating runtime-owned materialization state.
+- **Changed** `ttd-browser` migrated to the entry-based provenance API after
+  Phase 4 hard cut removed old convenience methods.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@CHANGELOG.md` around lines 23-39, add explicit changelog entries for
ADR-0011, the ttd-browser migration, and Phase 4 hard cut: mention ADR-0011 (the
explicit observation contract) as a documented architectural decision; add a
breaking-change note that ttd-browser was migrated to the entry-based provenance
API; and enumerate removed/deprecated convenience methods (e.g., get_head,
snapshot_at, execute_query, drain_view_ops) and that these are now thin adapters
over ObservationService::observe/WarpKernel observe. Update the top-level
changelog paragraph to call out the breaking API changes and provide migration
guidance references to ADR-0011 and the ttd-browser update.

Comment on lines +34 to +35
`get_head`, `snapshot_at`, `execute_query`, and `drain_view_ops` are thin
one-phase adapters over the observation contract.

⚠️ Potential issue | 🟡 Minor

Verify execute_query behavior claim.

Line 34 claims that execute_query is a "thin one-phase adapter" over the observation contract, but the PR objectives state "execute_query(...) currently lowers through observation semantics and returns deterministic UNSUPPORTED_QUERY until full query support is implemented."

This suggests execute_query isn't a true adapter yet but rather returns an error. The changelog should clarify this temporary state.

📝 Suggested clarification
 - **Changed** `WarpKernel` and the WASM ABI now expose `observe(...)`, while
-  `get_head`, `snapshot_at`, `execute_query`, and `drain_view_ops` are thin
-  one-phase adapters over the observation contract.
+  `get_head`, `snapshot_at`, and `drain_view_ops` are thin one-phase adapters
+  over the observation contract. `execute_query` currently returns deterministic
+  `UNSUPPORTED_QUERY` until full query support is implemented.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@CHANGELOG.md` around lines 34-35, update the CHANGELOG entry so it
accurately reflects that execute_query is not yet a full one-phase adapter:
instead of grouping `execute_query` with `get_head`, `snapshot_at`, and
`drain_view_ops` as "thin one-phase adapters", clarify that `execute_query(...)`
currently lowers through observation semantics and returns a deterministic
UNSUPPORTED_QUERY until full query support is implemented; edit the sentence
mentioning `execute_query` to explicitly state its temporary behavior and keep
`get_head`, `snapshot_at`, and `drain_view_ops` described as thin one-phase
adapters.

Comment on lines +583 to +638
run_parallel_lanes() {
  local suite="$1"
  shift

  local logdir
  logdir="$(mktemp -d "${TMPDIR:-/tmp}/verify-local-${suite}.XXXXXX")"
  local -a lane_names=()
  local -a lane_funcs=()
  local -a lane_pids=()
  local i

  while [[ $# -gt 0 ]]; do
    lane_names+=("$1")
    lane_funcs+=("$2")
    shift 2
  done

  echo "[verify-local] ${suite}: launching ${#lane_names[@]} local lanes"
  echo "[verify-local] ${suite}: lanes=${lane_names[*]}"
  for i in "${!lane_names[@]}"; do
    (
      set -euo pipefail
      "${lane_funcs[$i]}"
    ) >"${logdir}/${lane_names[$i]}.log" 2>&1 &
    lane_pids+=("$!")
  done

  local failed=0
  local rc
  set +e
  for i in "${!lane_names[@]}"; do
    wait "${lane_pids[$i]}"
    rc=$?
    if [[ $rc -ne 0 ]]; then
      failed=1
      echo "[verify-local] lane failed: ${lane_names[$i]}" >&2
    fi
  done
  set -e

  if [[ $failed -ne 0 ]]; then
    for i in "${!lane_names[@]}"; do
      local logfile="${logdir}/${lane_names[$i]}.log"
      if [[ ! -s "$logfile" ]]; then
        continue
      fi
      echo
      echo "--- ${lane_names[$i]} ---" >&2
      cat "$logfile" >&2
    done
    rm -rf "$logdir"
    exit 1
  fi

  rm -rf "$logdir"
}

⚠️ Potential issue | 🟠 Major

Parallel lane cleanup is not signal-safe.

Lines 603-608 spawn background jobs, but there is no INT/TERM trap to kill lane children. Interrupting the run can leave cargo processes alive and leak temporary log directories.

Proposed fix
 run_parallel_lanes() {
@@
   local -a lane_names=()
   local -a lane_funcs=()
   local -a lane_pids=()
+  cleanup_parallel_lanes() {
+    local pid
+    for pid in "${lane_pids[@]:-}"; do
+      kill "$pid" 2>/dev/null || true
+    done
+    rm -rf "$logdir"
+  }
+  trap 'cleanup_parallel_lanes; exit 130' INT TERM
@@
   if [[ $failed -ne 0 ]]; then
@@
-    rm -rf "$logdir"
-    exit 1
+    trap - INT TERM
+    rm -rf "$logdir"
+    exit 1
   fi
 
+  trap - INT TERM
   rm -rf "$logdir"
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/verify-local.sh` around lines 583-638, the run_parallel_lanes
function is not signal-safe: add a signal trap inside run_parallel_lanes that
catches INT and TERM (and ERR/EXIT if desired) to kill all background lane PIDs
stored in lane_pids, remove the temporary logdir, and then re-raise or exit;
implement a cleanup handler (e.g., cleanup_parallel_lanes) referenced by the
trap that iterates over lane_pids and runs kill (ignoring errors) and waits for
them, removes "${logdir}" and exits with non-zero, and ensure the trap is
removed/restored (trap - ...) before normal exit so the handler only runs for
interruptions; update references to lane_pids and logdir in the handler so the
temporary logs are always cleaned on interrupts.
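The trap-based cleanup the review asks for can be sketched in isolation; the lane names, sleep durations, and variable names below are illustrative rather than taken from the script:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Signal-safe background-job pattern: kill children and remove the temp dir
# on INT/TERM, then clear the trap once the normal path has cleaned up.
logdir="$(mktemp -d)"
pids=()

cleanup() {
  local pid
  for pid in ${pids[@]+"${pids[@]}"}; do
    kill "$pid" 2>/dev/null || true
  done
  rm -rf "$logdir"
}
trap 'cleanup; exit 130' INT TERM

for lane in alpha beta; do
  ( sleep 0.1; echo "$lane done" ) >"$logdir/$lane.log" 2>&1 &
  pids+=("$!")
done

for pid in "${pids[@]}"; do
  wait "$pid"
done

trap - INT TERM
cat "$logdir"/*.log
rm -rf "$logdir"
```

Clearing the trap with `trap - INT TERM` before the normal-path `rm -rf` keeps the handler from double-deleting or masking the real exit status.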

Comment on lines +82 to +176
run_fake_verify() {
  local mode="$1"
  local changed_file="$2"
  local tmp
  tmp="$(mktemp -d)"

  mkdir -p "$tmp/scripts" "$tmp/bin" "$tmp/.git" "$tmp/tests/hooks"
  mkdir -p "$tmp/crates/warp-core/src"
  cp scripts/verify-local.sh "$tmp/scripts/verify-local.sh"
  chmod +x "$tmp/scripts/verify-local.sh"

  cat >"$tmp/rust-toolchain.toml" <<'EOF'
[toolchain]
channel = "1.90.0"
EOF

  cat >"$tmp/crates/warp-core/Cargo.toml" <<'EOF'
[package]
name = "warp-core"
version = "0.0.0"
edition = "2021"
EOF
  printf '%s\n' 'pub fn anchor() {}' >"$tmp/crates/warp-core/src/lib.rs"

  cat >"$tmp/bin/cargo" <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
printf '%s|%s\n' "${CARGO_TARGET_DIR:-}" "$*" >>"${VERIFY_FAKE_CARGO_LOG}"
exit 0
EOF
  cat >"$tmp/bin/rustup" <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
if [[ "${1:-}" == "toolchain" && "${2:-}" == "list" ]]; then
  printf '1.90.0-aarch64-apple-darwin (default)\n'
  exit 0
fi
exit 0
EOF
  cat >"$tmp/bin/rg" <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
exit 1
EOF
  cat >"$tmp/bin/npx" <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
exit 0
EOF
  cat >"$tmp/bin/git" <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
if [[ "${1:-}" == "rev-parse" && "${2:-}" == "HEAD" ]]; then
  printf 'test-head\n'
  exit 0
fi
if [[ "${1:-}" == "rev-parse" && "${2:-}" == "--short" && "${3:-}" == "HEAD" ]]; then
  printf 'test-head\n'
  exit 0
fi
exit 0
EOF
  chmod +x "$tmp/bin/cargo" "$tmp/bin/rustup" "$tmp/bin/rg" "$tmp/bin/npx" "$tmp/bin/git"

  cat >"$tmp/tests/hooks/test_verify_local.sh" <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
echo "fake hook coverage"
EOF
  chmod +x "$tmp/tests/hooks/test_verify_local.sh"

  local changed
  changed="$(mktemp)"
  printf '%s\n' "$changed_file" >"$changed"
  local cargo_log
  cargo_log="$(mktemp)"

  local output
  output="$(
    cd "$tmp" && \
    PATH="$tmp/bin:$PATH" \
    VERIFY_FORCE=1 \
    VERIFY_STAMP_SUBJECT="test-head" \
    VERIFY_CHANGED_FILES_FILE="$changed" \
    VERIFY_FAKE_CARGO_LOG="$cargo_log" \
    ./scripts/verify-local.sh "$mode"
  )"

  printf '%s\n' "$output"
  echo "--- cargo-log ---"
  cat "$cargo_log"

  rm -f "$changed" "$cargo_log"
  rm -rf "$tmp"
}

🧹 Nitpick | 🔵 Trivial

Sequential fallback is existence-tested, not behavior-tested.

Lines 365-369 only check that verify-full-sequential exists in the Makefile; they do not validate that VERIFY_LANE_MODE=sequential actually bypasses the parallel lane fanout.

Proposed fix
 run_fake_verify() {
   local mode="$1"
   local changed_file="$2"
+  local lane_mode="${3:-parallel}"
@@
     PATH="$tmp/bin:$PATH" \
     VERIFY_FORCE=1 \
+    VERIFY_LANE_MODE="$lane_mode" \
     VERIFY_STAMP_SUBJECT="test-head" \
     VERIFY_CHANGED_FILES_FILE="$changed" \
     VERIFY_FAKE_CARGO_LOG="$cargo_log" \
     ./scripts/verify-local.sh "$mode"
@@
 if grep -q '^verify-full-sequential:' Makefile; then
   pass "Makefile exposes a sequential fallback for the parallel full verifier"
 else
   fail "Makefile should expose verify-full-sequential as a fallback path"
 fi
+
+fake_full_seq_output="$(run_fake_verify full crates/warp-core/src/lib.rs sequential)"
+if printf '%s\n' "$fake_full_seq_output" | grep -q '\[verify-local\] full: launching '; then
+  fail "sequential fallback should not launch parallel lanes"
+  printf '%s\n' "$fake_full_seq_output"
+else
+  pass "sequential fallback dispatches through the non-parallel runner"
+fi

Also applies to: 365-369

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/hooks/test_verify_local.sh` around lines 82-176, the test currently
only asserts the Makefile target exists but not that sequential fallback
actually avoids parallel fanout; update the test harness around run_fake_verify
to add a case that sets VERIFY_LANE_MODE=sequential (and KEEP
VERIFY_FAKE_CARGO_LOG), runs ./scripts/verify-local.sh "full" (or invokes the
verify-full-sequential flow), and then assert the VERIFY_FAKE_CARGO_LOG shows
only a single cargo invocation (or lacks any parallel lane markers) to prove
fanout was bypassed; use the existing run_fake_verify, VERIFY_FAKE_CARGO_LOG,
VERIFY_CHANGED_FILES_FILE and VERIFY_FORCE variables and check the cargo_log
contents rather than merely testing the Makefile target name.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (3)
CHANGELOG.md (2)

37-39: ⚠️ Potential issue | 🟡 Minor

execute_query behavior is documented incorrectly.

Current text says execute_query is a thin adapter; PR objectives state it currently returns deterministic UNSUPPORTED_QUERY. Document that temporary behavior explicitly.

Proposed wording fix
 - **Changed** `WarpKernel` and the WASM ABI now expose `observe(...)`, while
-  `get_head`, `snapshot_at`, `execute_query`, and `drain_view_ops` are thin
-  one-phase adapters over the observation contract.
+  `get_head`, `snapshot_at`, and `drain_view_ops` are thin one-phase adapters
+  over the observation contract. `execute_query` currently returns
+  deterministic `UNSUPPORTED_QUERY` until full query support lands.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@CHANGELOG.md` around lines 37-39, update the CHANGELOG entry to correctly
document execute_query's current behavior: instead of saying it is a "thin
one-phase adapter," explicitly state that execute_query currently returns a
deterministic UNSUPPORTED_QUERY placeholder value (temporary behavior), while
observe(...) and the other adapters (get_head, snapshot_at, drain_view_ops)
remain as described; reference WarpKernel and the WASM ABI in the same sentence
so readers know which interfaces exhibit this temporary execute_query behavior.

27-43: ⚠️ Potential issue | 🟠 Major

Missing release-note items for shipped contract artifacts and migration impact.

This entry omits three material changes called out in the PR objectives: ADR-0011 adoption, ttd-browser migration, and the Phase 4 hard-cut/removal impact. Add them explicitly so consumers can map behavior changes to migration docs.

Proposed changelog patch
 ### feat(warp-core): close Phase 4 and pivot reads to observe

+- **Added** ADR-0011 documenting the explicit Phase 5 observation contract.
 - **Changed** Phase 4 provenance/BTR work is now the documented substrate
   baseline: provenance is entry-based, parent refs are stored explicitly, and
   the standalone `ProvenanceService` owns authoritative worldline history.
@@
 - **Changed** `drain_view_ops()` is now legacy adapter/debug behavior only: it
   reads recorded truth through `observe(...)` and tracks only adapter-local
   drain state instead of mutating runtime-owned materialization state.
+- **Changed** `ttd-browser` migrated to the entry-based provenance API after
+  the Phase 4 hard cut removed old convenience methods.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@CHANGELOG.md` around lines 27-43, the changelog entry omits three required
release-note items; update the "feat(warp-core): close Phase 4 and pivot reads
to observe" section to explicitly list (1) ADR-0011 adoption (note the
contract/patterns now required by ADR-0011), (2) ttd-browser migration guidance
(what consumers must change and any compatibility shims), and (3) Phase 4
hard-cut/removal impact (which APIs/behaviors are removed and links to migration
docs); mention shipped contract artifacts (ObservationService::observe, error
codes like INVALID_WORLDLINE/INVALID_TICK) and succinctly state migration impact
and where to find the migration docs/ADR for each item so downstream consumers
can map behavior changes to migration steps.
scripts/verify-local.sh (1)

651-654: ⚠️ Potential issue | 🟡 Minor

Manifest-based detection needed for crate_supports_lib_target.

The check currently relies on name matching (!= "xtask"). Bin-only crates—echo-session-service, echo-session-ws-gateway, echo-wesley-gen, warp-benches, warp-viewer—should return false, but only do so incidentally by exclusion from FULL_LOCAL_PACKAGES. If a bin-only crate is ever added to FULL_LOCAL_PACKAGES, cargo test --lib will fail silently or with a confusing error.

Inspect Cargo.toml for [lib] section instead of name pattern to make the check deterministic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/verify-local.sh` around lines 651-654, the current
crate_supports_lib_target function wrongly infers lib-support by name; change it
to deterministically inspect the crate's Cargo.toml for a [lib] table (or an
explicit package.lib entry) instead of the name check. Update
crate_supports_lib_target to locate the crate directory from the given $crate,
read its Cargo.toml, and return success only if it contains a [lib] section (or
a package.lib key); otherwise return false so bin-only crates like
echo-session-service are correctly identified.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 57710df1-eb4f-4f37-bd25-f581a46af511

📥 Commits

Reviewing files that changed from the base of the PR and between 755c2bf and c52ad30.

📒 Files selected for processing (4)
  • CHANGELOG.md
  • scripts/hooks/README.md
  • scripts/verify-local.sh
  • tests/hooks/test_verify_local.sh

Comment on lines +320 to +339
array_contains() {
  local needle="$1"
  shift
  local item
  for item in "$@"; do
    if [[ "$item" == "$needle" ]]; then
      return 0
    fi
  done
  return 1
}

append_unique() {
  local value="$1"
  local array_name="$2"
  local -n array_ref="$array_name"
  if ! array_contains "$value" "${array_ref[@]}"; then
    array_ref+=("$value")
  fi
}

⚠️ Potential issue | 🟡 Minor

Empty array expansion can fail under set -u on older bash.

Line 336 expands "${array_ref[@]}" — if array_ref is empty, bash versions before 4.4 will raise an unbound variable error under set -u. Same issue in filter_package_set_by_selection (line 709) and multiple array_contains call sites.

Portable fix
 append_unique() {
   local value="$1"
   local array_name="$2"
   local -n array_ref="$array_name"
-  if ! array_contains "$value" "${array_ref[@]}"; then
+  if ! array_contains "$value" ${array_ref[@]+"${array_ref[@]}"}; then
     array_ref+=("$value")
   fi
 }

Apply the same ${arr[@]+"${arr[@]}"} pattern wherever empty arrays are passed to functions.
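A runnable illustration of the pattern; the function mirrors the script's `array_contains` but is re-declared here so the example is self-contained:

```shell
#!/usr/bin/env bash
set -euo pipefail

array_contains() {
  local needle="$1"
  shift
  local item
  for item in "$@"; do
    if [[ "$item" == "$needle" ]]; then
      return 0
    fi
  done
  return 1
}

empty=()
full=(alpha beta)

# The + alternate-value expansion yields zero words for an empty array, so
# this stays safe under `set -u` even on bash releases before 4.4.
if array_contains "alpha" ${full[@]+"${full[@]}"}; then
  echo "found"
fi
if ! array_contains "alpha" ${empty[@]+"${empty[@]}"}; then
  echo "absent"
fi
```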

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/verify-local.sh` around lines 320-339, the array expansion in
array_contains and append_unique (and call sites such as
filter_package_set_by_selection) can trigger unbound variable errors under set
-u on older bash when arrays are empty; fix by using the portable expansion
pattern when passing or expanding arrays (e.g. replace uses of "${array_ref[@]}"
with the safe form ${array_ref[@]+"${array_ref[@]}"} wherever array_ref (or
other arrays) may be empty) and update all call sites of array_contains and
append_unique to pass arrays with the same pattern so empty-array calls do not
fail.

Comment on lines +667 to +685
clippy_target_args_for_scope() {
  local crate="$1"
  local scope="$2"

  if [[ "$crate" == "xtask" ]]; then
    printf '%s\n' "--bins"
    return
  fi

  if [[ "$scope" == "full" ]]; then
    printf '%s\n' "--all-targets"
    return
  fi

  printf '%s\n' "--lib"
  if ! crate_is_fast_clippy_lib_only "$crate"; then
    printf '%s\n' "--tests"
  fi
}

⚠️ Potential issue | 🟡 Minor

clippy_target_args_for_scope assumes lib target exists for non-xtask crates.

Line 681 unconditionally emits --lib for any crate not named xtask. If a bin-only crate ever enters the targeted or full clippy path, clippy will error. This shares the root cause with crate_supports_lib_target.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/verify-local.sh` around lines 667-685, the function
clippy_target_args_for_scope unconditionally emits "--lib" for non-xtask crates
which breaks on bin-only crates; update it to call crate_supports_lib_target
"$crate" before printing "--lib" and, if that returns false, print "--bins"
instead; also ensure the conditional that prints "--tests" only runs when the
crate supports a lib/targets (use crate_supports_lib_target and
crate_is_fast_clippy_lib_only together so you don't emit "--tests" for crates
that are bin-only or fast-clippy-lib-only).
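Under the same assumptions as above (a `crates/<name>` layout and a manifest-based `crate_supports_lib_target`), the repaired branching might look like the sketch below; the `crate_is_fast_clippy_lib_only` gating is omitted for brevity:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Assumed helper: true when the crate declares a lib target via a [lib]
# table in Cargo.toml or a conventional src/lib.rs.
crate_supports_lib_target() {
  local dir="crates/$1"
  [[ -f "$dir/src/lib.rs" ]] || grep -q '^\[lib\]' "$dir/Cargo.toml" 2>/dev/null
}

clippy_target_args_for_scope() {
  local crate="$1" scope="$2"
  if [[ "$scope" == "full" ]]; then
    printf '%s\n' "--all-targets"
    return
  fi
  if crate_supports_lib_target "$crate"; then
    printf '%s\n' "--lib"
    printf '%s\n' "--tests"
  else
    # Bin-only crates (xtask included) get their binaries linted instead.
    printf '%s\n' "--bins"
  fi
}
```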

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (5)
scripts/verify-local.sh (4)

333-340: ⚠️ Potential issue | 🟡 Minor

Empty array expansion fails under set -u on bash < 4.4.

Line 337 expands "${array_ref[@]}" directly. If the array is empty, bash versions before 4.4 raise an unbound variable error under set -u. Same issue in filter_package_set_by_selection at line 716.

Portable fix
 append_unique() {
   local value="$1"
   local array_name="$2"
   local -n array_ref="$array_name"
-  if ! array_contains "$value" "${array_ref[@]}"; then
+  if ! array_contains "$value" ${array_ref[@]+"${array_ref[@]}"}; then
     array_ref+=("$value")
   fi
 }

Apply the same ${arr[@]+"${arr[@]}"} pattern at line 716 in filter_package_set_by_selection.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/verify-local.sh` around lines 333-340, the append_unique function
and filter_package_set_by_selection use "${array_ref[@]}" which under set -u on
bash <4.4 can raise an unbound variable error when the array is empty; change
the expansions to the portable form ${array_ref[@]+"${array_ref[@]}"} (and the
analogous ${arr[@]+"${arr[@]}"} in filter_package_set_by_selection) so empty
arrays don't trigger unbound-variable errors while preserving normal expansion
behavior.
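The pattern can be exercised standalone. The `array_contains` helper below is a stand-in for the script's own helper, and the crate names are made up for illustration:

```shell
#!/usr/bin/env bash
# Demonstrates the empty-array-safe expansion under `set -u`.
set -u

# Stand-in for the script's helper: linear scan for a value.
array_contains() {
  local needle="$1"; shift
  local item
  for item in "$@"; do
    [[ "$item" == "$needle" ]] && return 0
  done
  return 1
}

append_unique() {
  local value="$1"
  local array_name="$2"
  local -n array_ref="$array_name"
  # ${arr[@]+"${arr[@]}"} expands to nothing when the array is empty,
  # avoiding the unbound-variable error on bash < 4.4.
  if ! array_contains "$value" ${array_ref[@]+"${array_ref[@]}"}; then
    array_ref+=("$value")
  fi
}

declare -a crates=()
append_unique "warp-core" crates   # first append into an empty array
append_unique "warp-core" crates   # duplicate is ignored
append_unique "warp-wasm" crates
printf '%s\n' "${crates[@]}"
```

The first call is the interesting one: without the `+` form, passing the empty array to `array_contains` would abort the whole script under `set -u` on older bash.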

597-599: ⚠️ Potential issue | 🟡 Minor

Unknown VERIFY_LANE_MODE silently falls back to sequential.

Any typo or invalid value falls through to sequential execution without warning. This can mask configuration errors.

Proposed fix
 should_run_parallel_lanes() {
-  [[ "$VERIFY_LANE_MODE" == "parallel" ]]
+  case "$VERIFY_LANE_MODE" in
+    parallel) return 0 ;;
+    sequential) return 1 ;;
+    *)
+      echo "[verify-local] invalid VERIFY_LANE_MODE: $VERIFY_LANE_MODE" >&2
+      exit 1
+      ;;
+  esac
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/verify-local.sh` around lines 597-599: the helper
should_run_parallel_lanes currently treats any non-"parallel"
VERIFY_LANE_MODE as sequential silently. Update it to validate
VERIFY_LANE_MODE explicitly and surface unknown values: check for the
allowed values ("parallel" and "sequential"), return true only for
"parallel", return false for "sequential", and for any other value log a
warning (or exit non-zero) indicating an invalid VERIFY_LANE_MODE so typos
and configuration mistakes are caught.
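Sketched standalone, the validation behaves as below. The invalid branch returns 2 here rather than exiting, purely so the function can be called repeatedly in one shell; the real script would likely exit non-zero:

```shell
#!/usr/bin/env bash
# Sketch of explicit mode validation; VERIFY_LANE_MODE mirrors the
# script's variable name. Unknown values fail loudly instead of
# silently degrading to sequential execution.
should_run_parallel_lanes() {
  case "${VERIFY_LANE_MODE:-sequential}" in
    parallel)   return 0 ;;
    sequential) return 1 ;;
    *)
      echo "[verify-local] invalid VERIFY_LANE_MODE: $VERIFY_LANE_MODE" >&2
      return 2
      ;;
  esac
}

VERIFY_LANE_MODE=parallel
should_run_parallel_lanes && echo "parallel fanout"

VERIFY_LANE_MODE=sequential
should_run_parallel_lanes || echo "sequential path"
```

A typo such as `VERIFY_LANE_MODE=paralel` now produces a diagnostic on stderr and a distinct exit status instead of quietly running the slower sequential path.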

601-656: ⚠️ Potential issue | 🟠 Major

Parallel lane cleanup is not signal-safe.

run_parallel_lanes spawns background jobs but lacks INT/TERM traps. Ctrl+C leaves cargo processes alive and leaks the temp logdir.

Proposed fix
 run_parallel_lanes() {
   local suite="$1"
   shift

   local logdir
   logdir="$(mktemp -d "${TMPDIR:-/tmp}/verify-local-${suite}.XXXXXX")"
   local -a lane_names=()
   local -a lane_funcs=()
   local -a lane_pids=()
   local i

+  cleanup_parallel_lanes() {
+    local pid
+    for pid in "${lane_pids[@]:-}"; do
+      kill "$pid" 2>/dev/null || true
+    done
+    rm -rf "$logdir"
+  }
+  trap 'cleanup_parallel_lanes; exit 130' INT TERM
+
   while [[ $# -gt 0 ]]; do

And before the final rm -rf "$logdir":

+  trap - INT TERM
   rm -rf "$logdir"
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/verify-local.sh` around lines 601-656: the run_parallel_lanes
function can leak child processes and the temporary logdir on
SIGINT/SIGTERM. Add a signal trap that, when triggered, iterates over
lane_pids to kill (and wait for) each background PID, prints which lane was
interrupted using lane_names, cleans up the logdir, and then exits with a
non-zero status. Set the trap after logdir is created and before launching
children, and clear it before the normal exit path so normal completion
still performs the existing rm -rf "$logdir".

532-540: ⚠️ Potential issue | 🟠 Major

Nextest path hardcodes --lib --tests without checking for lib targets.

Lines 533-534 unconditionally pass --lib --tests to nextest. Bin-only crates (e.g., xtask) will fail. The non-nextest path at line 537 uses targeted_test_args_for_crate, which handles this, but the nextest branch does not.

Proposed fix
     if use_nextest; then
-      echo "[verify-local] cargo nextest run -p ${crate} --lib --tests"
-      cargo +"$PINNED" nextest run -p "$crate" --lib --tests
+      local -a test_args=()
+      mapfile -t test_args < <(targeted_test_args_for_crate "$crate")
+      echo "[verify-local] cargo nextest run -p ${crate} ${test_args[*]}"
+      cargo +"$PINNED" nextest run -p "$crate" "${test_args[@]}"
     else
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/verify-local.sh` around lines 532-540: the nextest branch
unconditionally adds "--lib --tests", causing failures for bin-only crates.
Change the cargo nextest invocation in the use_nextest branch to compute
test args the same way as the non-nextest branch, by calling
targeted_test_args_for_crate "$crate" (or reusing its output), and pass
those args to cargo +"$PINNED" nextest run -p "$crate" instead of
hardcoding --lib --tests. Preserve the echo/logging of the exact command
and use "${test_args[@]}" when invoking nextest so bin-only crates won't
receive invalid flags.
tests/hooks/test_verify_local.sh (1)

365-369: 🧹 Nitpick | 🔵 Trivial

Sequential fallback behavior remains untested.

The test only verifies the Makefile target exists. It does not confirm that VERIFY_LANE_MODE=sequential actually bypasses parallel fanout.

Proposed test
 if grep -q '^verify-full-sequential:' Makefile; then
   pass "Makefile exposes a sequential fallback for the parallel full verifier"
 else
   fail "Makefile should expose verify-full-sequential as a fallback path"
 fi
+
+# Test that sequential mode actually skips parallel fanout
+fake_seq_output="$(
+  cd "$tmp" && \
+  PATH="$tmp/bin:$PATH" \
+  VERIFY_FORCE=1 \
+  VERIFY_LANE_MODE=sequential \
+  VERIFY_STAMP_SUBJECT="test-head" \
+  VERIFY_CHANGED_FILES_FILE="$changed" \
+  VERIFY_FAKE_CARGO_LOG="$cargo_log" \
+  ./scripts/verify-local.sh full 2>&1
+)" || true
+if printf '%s\n' "$fake_seq_output" | grep -q '\[verify-local\] full: launching '; then
+  fail "sequential mode should not launch parallel lanes"
+else
+  pass "sequential mode bypasses parallel lane fanout"
+fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/hooks/test_verify_local.sh` around lines 365-369: add a runtime
assertion that VERIFY_LANE_MODE=sequential actually triggers the sequential
codepath. Invoke the same verification target with
VERIFY_LANE_MODE=sequential (e.g., run VERIFY_LANE_MODE=sequential make
verify, or the repo's verification command used elsewhere in this script),
capture stdout/stderr, and assert that the output indicates the sequential
fallback (or that the verify-full-sequential target ran) and contains no
parallel/fanout indicators. Update the test around verify-full-sequential
to run this command and use the existing pass()/fail() helpers to fail if
the output shows parallel fanout or lacks the sequential marker.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 4e31738d-4f61-4631-aea6-6b7fd79d9e67

📥 Commits

Reviewing files that changed from the base of the PR and between c52ad30 and 515e41c.

📒 Files selected for processing (4)
  • CHANGELOG.md
  • scripts/hooks/README.md
  • scripts/verify-local.sh
  • tests/hooks/test_verify_local.sh

Comment on lines +532 to +537
if printf '%s\n' "$fake_tooling_output" | grep -q 'lanes=fmt guards hook-tests'; then
pass "tooling-only full verification runs hook regression coverage"
else
fail "tooling-only full verification should run hook regression coverage"
printf '%s\n' "$fake_tooling_output"
fi

🧹 Nitpick | 🔵 Trivial

Lane order assertion is fragile.

Line 532 checks for exact string 'lanes=fmt guards hook-tests'. If lane order changes or additional lanes are added, this test breaks silently or produces confusing failures. Consider checking for lane presence rather than exact ordering, or document the expected canonical order.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/hooks/test_verify_local.sh` around lines 532-537: the test
currently asserts exact lane ordering by grepping fake_tooling_output for
the string 'lanes=fmt guards hook-tests'. Change this to assert the
presence of each expected lane independently (e.g., check that
fake_tooling_output contains 'fmt', 'guards', and 'hook-tests') rather than
the exact sequence, so the test is robust to reordering or extra lanes.
Update the conditional around fake_tooling_output to test each lane token
separately and fail only if an expected token is missing.
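An order-independent form of the check might look like this. The value of `fake_tooling_output` is a made-up sample, and the echo lines stand in for the harness's pass()/fail() helpers:

```shell
#!/usr/bin/env bash
# Sample output; in the real test this comes from running the verifier.
fake_tooling_output="[verify-local] full: lanes=guards fmt hook-tests"

# Collect any expected lane tokens that are absent, regardless of order.
missing=""
for lane in fmt guards hook-tests; do
  if ! printf '%s\n' "$fake_tooling_output" | grep -q "$lane"; then
    missing="$missing $lane"
  fi
done

if [ -z "$missing" ]; then
  echo "pass: all expected lanes present (any order)"
else
  echo "fail: missing lanes:$missing"
fi
```

The sample deliberately reorders the lanes (`guards` before `fmt`) to show the check still passes, whereas the original exact-string grep would report a confusing failure.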
