Add client_id to progress_text binary WS messages #12540

christian-byrne wants to merge 5 commits into master
Conversation
christian-byrne force-pushed from f817b3f to 0f9e15f
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `comfy_api_nodes/util/client.py`:

- Around lines 449-452: the lazy import of `get_executing_context` inside the block that calls `PromptServer.instance.send_progress_text` triggers the pipeline linter's `import-outside-toplevel` warning. Either move the import to the top of the module (add `from comfy_execution.utils import get_executing_context` alongside the other imports) and remove the in-function import so the call uses the top-level symbol, or, if that creates a circular import, keep the lazy import but add a `# pylint: disable=import-outside-toplevel` comment immediately above it to acknowledge the intentional pattern. Update the code around `get_executing_context` and `PromptServer.instance.send_progress_text` accordingly.
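A minimal sketch of the two options the review describes, using the standard-library `json` module as a stand-in since `comfy_execution` is not importable outside ComfyUI; the function names here are illustrative, not ComfyUI's:

```python
# Option 1: top-level import (preferred when it causes no circular import)
import json

def dump_top_level(obj) -> str:
    return json.dumps(obj)

# Option 2: keep the lazy import, but acknowledge it for the linter
def dump_lazy(obj) -> str:
    import json as _json  # pylint: disable=import-outside-toplevel
    return _json.dumps(obj)
```

Option 2 is only worth keeping when the top-level import genuinely creates a cycle; the disable comment documents that the pattern is intentional.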
christian-byrne force-pushed from 0f9e15f to d971f28
Getting the prompt_id inside the node is a bit ugly at the moment. Maybe this is worth adding
Add supports_progress_text_metadata feature flag and extend send_progress_text() to accept an optional prompt_id param. When prompt_id is provided and the client supports the new format, the binary wire format includes a length-prefixed prompt_id field:

[4B event_type][4B prompt_id_len][prompt_id][4B node_id_len][node_id][text]

The legacy format is preserved for clients without the flag. Both callers (nodes_images.py, client.py) are updated to pass prompt_id from get_executing_context().

Part of COM-12671: parallel workflow execution support.

Amp-Thread-ID: https://ampcode.com/threads/T-019c79f7-f19b-70d9-b662-0687cc206282
- Add PROMPT_ID as a new hidden type in the Hidden enum, HiddenHolder, HiddenInputTypeDict, and execution-engine resolution (both V3 and legacy)
- Refactor GetImageSize to use cls.hidden.prompt_id instead of manually calling get_executing_context(), addressing reviewer feedback
- Remove the lazy import of get_executing_context from nodes_images.py
- Add docstrings to send_progress_text, _display_text, HiddenHolder, and HiddenHolder.from_dict

Amp-Thread-ID: https://ampcode.com/threads/T-019ca1cb-0150-7549-8b1b-6713060d3408
christian-byrne force-pushed from d971f28 to 83df2a8
We actually need to be sending client_id AND prompt_id, updating PR now.
- Default sid to self.client_id when not explicitly provided, matching every other WS message dispatch (executing, executed, progress_state, etc.)
- Previously sid=None caused a broadcast to all connected clients
- Format signature per ruff, remove redundant comments
- Add unit tests for routing, the legacy format, and the new prompt_id format

Amp-Thread-ID: https://ampcode.com/threads/T-019ca3ce-c530-75dd-8d68-349e745a022e
Copy-paste stub tests don't verify the real implementation and add maintenance burden without meaningful coverage.

Amp-Thread-ID: https://ampcode.com/threads/T-019ca3ce-c530-75dd-8d68-349e745a022e
When prompt_id is None, encode it as a zero-length string instead of falling back to the old format. This prevents binary parse corruption on the frontend. Addresses review feedback: #12540 (comment)
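A minimal sketch of the encoding this commit describes; `PROGRESS_TEXT_EVENT = 3` follows the eventType mentioned later in the thread, and the helper name is an assumption, not ComfyUI's actual function:

```python
import struct
from typing import Optional

PROGRESS_TEXT_EVENT = 3  # assumed event type for progress_text messages

def encode_progress_text(node_id: str, text: str,
                         prompt_id: Optional[str]) -> bytes:
    # A None prompt_id becomes a zero-length field, so clients that
    # negotiated the feature flag always see the same layout.
    prompt_id_bytes = (prompt_id or "").encode("utf-8")
    node_id_bytes = node_id.encode("utf-8")
    return (
        struct.pack(">I", PROGRESS_TEXT_EVENT)
        + struct.pack(">I", len(prompt_id_bytes))
        + prompt_id_bytes
        + struct.pack(">I", len(node_id_bytes))
        + node_id_bytes
        + text.encode("utf-8")
    )
```

The key point of the fix: the branch on prompt_id happens inside the field encoding, not by switching wire formats, so the parser on the other end never has to guess which layout it received.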
## Summary

Add frontend support for `prompt_id` in `progress_text` binary WS messages, enabling parallel workflow execution to route progress text to the correct active prompt. Backend PR: Comfy-Org/ComfyUI#12540

## Changes

- Advertise `supports_progress_text_metadata` in client feature flags
- Decode `prompt_id` from the new binary format when the flag is active
- Add optional `prompt_id` to the `zProgressTextWsMessage` schema
- Filter `progress_text` events by `activePromptId`, skipping messages for non-active prompts

## Deployment

Can be deployed independently in any order; feature flag negotiation ensures graceful degradation.

Part of COM-12671
```python
message = (
    struct.pack(">I", len(prompt_id_bytes))
    + prompt_id_bytes
    + struct.pack(">I", len(node_id_bytes))
    + node_id_bytes
    + text
)
```
I think we want to avoid binary encoding whenever possible to avoid needing to support a ton of different encodings. IMO we should use the same pattern we use for previews, where the format is:

```
headerLength (4 bytes)
headerJson (headerLength bytes)
payload
```

Then the headerJson would just look like:

```json
{
  "prompt_id": "34522-43453-34534-34534",
  "node_id": "37",
  "text": "Foo bar baz"
}
```

That lets us add additional information to the header in the future without a binary format we need to support in perpetuity.
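The preview-style framing suggested above fits in a few lines; these helper names are illustrative, not ComfyUI's actual API:

```python
import json
import struct

def encode_with_header(header: dict, payload: bytes = b"") -> bytes:
    # [4B headerLength][headerJson][payload]
    header_bytes = json.dumps(header).encode("utf-8")
    return struct.pack(">I", len(header_bytes)) + header_bytes + payload

def decode_with_header(message: bytes):
    (header_len,) = struct.unpack_from(">I", message, 0)
    header = json.loads(message[4:4 + header_len].decode("utf-8"))
    return header, message[4 + header_len:]
```

New fields only touch the JSON header, so older decoders that ignore unknown keys keep working without a new wire version.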
Wait a minute, why do we use a binary message at all? This is just sending text, right? We're not sending a binary image or anything where encoding it would bloat the message. Why not just send this websocket message as json text?
Yeah, I can convert to JSON. The only real downside is the LoC for this PR and the need to then change the frontend PR and maintain very different branches for the old vs. new format.
* Add prompt_id to progress_text binary WS messages
* refactor: add prompt_id as hidden type, fix imports, add docstrings
* fix: send_progress_text unicasts to client_id instead of broadcasting
* remove send_progress_text stub tests
* fix: always send new binary format when client supports feature flag (addresses Comfy-Org#12540 review feedback)

Co-authored-by: bymyself <cbyrne@comfy.org>
## Problem

API node generation status text (sent via `progress_text` WebSocket binary messages) was not showing on local ComfyUI, but worked on cloud.

## Root Cause

The binary decoder for `progress_text` messages (eventType 3) checked `getClientFeatureFlags()?.supports_progress_text_metadata`, the **client's own flags**, to decide whether to parse the new format with `prompt_id`. Since the client always advertises `supports_progress_text_metadata: true`, it always tried to parse the new wire format:

```
[4B event_type][4B prompt_id_len][prompt_id][4B node_id_len][node_id][text]
```

But the backend PR that adds `prompt_id` to the binary message ([ComfyUI#12540](Comfy-Org/ComfyUI#12540)) was **closed without merging**, so local ComfyUI still sends the legacy format:

```
[4B event_type][4B node_id_len][node_id][text]
```

The decoder misinterpreted the `node_id_len` as `prompt_id_len`, consuming the actual node_id bytes as a prompt_id and producing garbled `nodeId` and `text`, silently dropping all progress text updates via the catch handler. Cloud worked because the cloud backend supports and echoes the feature flag.

## Fix

One-line change: check `serverFeatureFlags.value` (what the server echoed back) instead of `getClientFeatureFlags()` (what the client advertises).

## Tests

Added 3 tests covering:

- Legacy format parsing when the server doesn't support the flag
- New format parsing when the server does support the flag
- Corruption regression test: client advertises support but server doesn't
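The misparse described above can be reproduced in a few lines; the function names are illustrative sketches, not the frontend's actual decoder:

```python
import struct

def encode_legacy(node_id: str, text: str) -> bytes:
    # Legacy body after the 4-byte event type: [4B node_id_len][node_id][text]
    node_id_bytes = node_id.encode("utf-8")
    return struct.pack(">I", len(node_id_bytes)) + node_id_bytes + text.encode("utf-8")

def decode_as_new_format(body: bytes):
    # New-format parse: [4B prompt_id_len][prompt_id][4B node_id_len][node_id][text]
    (plen,) = struct.unpack_from(">I", body, 0)
    prompt_id = body[4:4 + plen].decode("utf-8", errors="replace")
    offset = 4 + plen
    (nlen,) = struct.unpack_from(">I", body, offset)
    node_id = body[offset + 4:offset + 4 + nlen].decode("utf-8", errors="replace")
    text = body[offset + 4 + nlen:].decode("utf-8", errors="replace")
    return prompt_id, node_id, text

# Feeding a legacy message to the new-format parser: node_id_len is consumed
# as prompt_id_len, and the first 4 bytes of the text become a bogus length.
prompt_id, node_id, text = decode_as_new_format(encode_legacy("37", "Generating..."))
```

The parser yields a plausible-looking prompt_id and garbage for the remaining fields, which is exactly why the failure was silent rather than an exception.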
## Summary

Add `prompt_id` to `progress_text` binary WS messages (eventType 3) for parallel workflow execution support.

## Changes

- Add `supports_progress_text_metadata` feature flag to `SERVER_FEATURE_FLAGS`
- Extend `send_progress_text()` to accept an optional `prompt_id` param, with auto-resolution from `get_executing_context()` and a `last_prompt_id` fallback

## Wire Format

New (when the `supports_progress_text_metadata` flag is set):

```
[4B event_type][4B prompt_id_len][prompt_id][4B node_id_len][node_id][text]
```

Legacy (unchanged):

```
[4B event_type][4B node_id_len][node_id][text]
```

## Deployment

Can be deployed independently in any order; feature flag negotiation ensures graceful degradation.

Part of COM-12671
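A minimal sketch of a client-side parser honoring the negotiated flag; the function name and signature are assumptions for illustration:

```python
import struct
from typing import Optional, Tuple

def decode_progress_text(body: bytes,
                         supports_metadata: bool) -> Tuple[Optional[str], str, str]:
    # Parses the body after the 4-byte event type, branching on the
    # negotiated supports_progress_text_metadata flag.
    offset = 0
    prompt_id = None
    if supports_metadata:
        (plen,) = struct.unpack_from(">I", body, offset)
        prompt_id = body[offset + 4:offset + 4 + plen].decode("utf-8")
        offset += 4 + plen
    (nlen,) = struct.unpack_from(">I", body, offset)
    node_id = body[offset + 4:offset + 4 + nlen].decode("utf-8")
    text = body[offset + 4 + nlen:].decode("utf-8")
    return prompt_id, node_id, text
```

The flag must reflect what the *server* negotiated, not what the client advertises; branching on the wrong side is exactly the corruption the follow-up frontend fix addresses.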
API Node PR Checklist

- Scope
- Pricing & Billing (if pricing update needed)
- QA
- Comms