PyIsolate Update: Production-Ready Process Isolation for ComfyUI #3
Kosinkadink merged 95 commits into Comfy-Org:main
Conversation
- Implemented activation scripts for bash, csh, and fish shells.
- Created executable scripts for f2py, normalizer, numpy-config, pip, pip3, and python binaries.
- Added pyvenv.cfg for virtual environment configuration.
- Enhanced client and host modules to support path unification and environment snapshotting.
- Introduced path_helpers for managing sys.path in child processes.
- Added unit tests for path_helpers to ensure functionality and correctness.
- Implements host sys.path snapshot and child reconstruction
- Enables isolated processes to import host modules (e.g., ComfyUI utils)
- Adds build_child_sys_path() with smart filtering:
  * Prepends application root
  * Filters code subdirectories to prevent shadowing
  * Preserves host .venv/site-packages
  * Appends isolated venv packages
- Adds serialize_host_snapshot() for environment capture
- 18 unit tests with 100% coverage for path_helpers.py
- Solves multiprocessing.spawn() import visibility problem

Use case: ComfyUI custom nodes need both host modules (folder_paths, utils.*) and isolated dependencies (deepdiff) in the same process. Tested with ComfyUI integration; all imports working correctly.
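The filtering order described above can be sketched as follows. This is an illustrative reconstruction, not the PR's actual `build_child_sys_path()`; the signature and filtering rules are assumptions based on the commit message:

```python
def build_child_sys_path(host_paths, app_root, isolated_site_packages):
    """Rebuild a child's sys.path from a host snapshot (illustrative sketch).

    Order: app root first, then host entries (keeping site-packages,
    dropping code subdirectories of the app root that could shadow
    top-level modules), then the isolated venv's packages last.
    """
    child_path = [app_root]
    for p in host_paths:
        if p == app_root:
            continue  # already prepended
        if p.startswith(app_root) and "site-packages" not in p:
            continue  # filter code subdirectories to prevent shadowing
        child_path.append(p)
    child_path.append(isolated_site_packages)
    return child_path
```

With this ordering, host modules resolve before the isolated venv's packages, while app subdirectories cannot shadow top-level names.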
- Add --index-strategy unsafe-best-match for torch dev versions
- Enables PyIsolate to work with torch nightlies (e.g., 2.9.0.dev+cu129)
- Without this, uv fails when the exact dev version is unavailable on PyPI
- Unblocks 100% of ComfyUI installations using torch nightlies

Fixes an issue where exact dev build versions don't exist on any index and uv refuses to install without the fallback strategy enabled.
…client snapshot handling and extension safety
…nd logging in core components
…infinite recursion in ProxiedSingleton
… path unification
…n; ensure CUDA tensors are handled correctly
…; implement serialization hooks for ModelPatcher
…n; support sharing of PyTorch and optimize link mode handling
… of `logging` python library and those messages being properly transmitted via RPC
Implements CLIPRegistry (ProxiedSingleton) + CLIPProxy for isolated nodes. Handles unpicklable CLIP objects via RPC. Includes ScopedModelRegistry, full test coverage, and automatic CLIP→CLIPRef serialization in execution flow. Deletes rejected logging_proxy.py.
Replace --system-site-packages with site.addsitedir() injection to enable proper parent→child venv inheritance while blocking system Python packages.

Architecture:
- Child venvs created WITHOUT --system-site-packages (blocks /usr packages)
- Write _pyisolate_parent.pth into child's site-packages on creation
- .pth file invokes site.addsitedir(parent_site) for recursive .pth processing
- Ensures namespace packages and C extensions initialize correctly
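The .pth injection step above can be sketched as a small helper. This is a hedged illustration of the mechanism, not the PR's code; the helper name and exact .pth contents are assumptions. The key fact is standard Python behavior: a .pth line beginning with `import` is executed by the `site` module at interpreter startup, and `site.addsitedir()` processes the target directory's own .pth files recursively:

```python
import os

def write_parent_pth(child_site_packages: str, parent_site: str) -> str:
    """Drop a one-line .pth into the child venv's site-packages (sketch).

    At child startup, the `site` module executes this line, which adds
    the parent venv's site-packages via addsitedir() so namespace
    packages and C extensions initialize normally, without pulling in
    system Python packages the way --system-site-packages would.
    """
    pth_path = os.path.join(child_site_packages, "_pyisolate_parent.pth")
    with open(pth_path, "w") as f:
        f.write(f"import site; site.addsitedir({parent_site!r})\n")
    return pth_path
```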
…treamline editable and local path handling in dependencies
Problem:
On Windows, multiprocessing.Queue() uses semaphores that cannot be
inherited by child processes when set_executable() changes the Python
interpreter to a different venv. This causes:
PermissionError: [WinError 5] Access is denied
Solution:
Replace direct Queue() with Manager().Queue() which uses TCP sockets
instead of semaphores, avoiding Windows handle inheritance issues.
Key changes in pyisolate/_internal/host.py:
- Use get_context("spawn") instead of set_start_method() for isolation
- Initialize Manager-based queues for cross-venv IPC compatibility
- Set PYISOLATE_CHILD=1 BEFORE Manager() to prevent ComfyUI re-init
- Add proper Manager.shutdown() cleanup in Extension.stop()
- Add os.name=="nt" checks for Windows-specific paths:
- Scripts/python.exe vs bin/python
- Lib/site-packages vs lib/pythonX.Y/site-packages
- VIRTUAL_ENV environment variable injection
Linux compatibility:
- All changes are additive with proper else branches
- Manager queues work on both platforms (negligible overhead)
- Platform-agnostic env var guards
Tested on: Windows 11,
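The Manager-based queue approach described above can be sketched as follows. This is an illustrative reconstruction under the commit's stated design, not the actual `pyisolate/_internal/host.py` code; the helper name is hypothetical:

```python
import multiprocessing as mp

def make_ipc_queues():
    """Create cross-venv-safe IPC queues (sketch of the approach above).

    A spawn context avoids inheriting host state, and Manager().Queue()
    proxies items over a connection owned by a manager process instead
    of relying on OS semaphore handle inheritance, which fails on
    Windows when set_executable() points at a different venv's python.
    """
    ctx = mp.get_context("spawn")
    manager = ctx.Manager()  # call manager.shutdown() in Extension.stop()
    return manager, manager.Queue(), manager.Queue()
```

The cost is an extra manager process and a proxy hop per item, which the commit characterizes as negligible overhead on both platforms.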
…ct and sequence subclasses, and add safety checks for unpicklable objects
…edundant debug statements
…pynvml deprecation warnings in child processes
…nd enhance proxy handling
… for compatibility with CUDA versions
…ls and path unification explanation
Pull request overview
This PR introduces PyIsolate v1.0 with production-ready process isolation for ComfyUI. The major changes include full ComfyUI integration with path unification to ensure isolated child processes can access ComfyUI imports, a lazy loading and caching system for zero startup penalty, and Windows compatibility through Manager-based IPC queues. Additional improvements include torch ecosystem support with dynamic package discovery, stateless RPC implementation with callback support, tensor serialization safety features, logging deduplication, comprehensive documentation, and unit tests.
Key Changes:
- Path unification system ensures ComfyUI root is first in sys.path for child processes
- Lazy process spawning with node caching for instant subsequent loads
- Windows-compatible IPC using Manager-based queues instead of POSIX-only Queue
- Dynamic torch ecosystem package discovery for CUDA compatibility
- RPC callback support for bidirectional async communication
Reviewed changes
Copilot reviewed 25 out of 41 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| tests/test_singleton_shared.py | New test file for singleton and proxied singleton behavior |
| tests/test_path_helpers.py | Tests for sys.path snapshot and reconstruction logic |
| tests/test_extension_safety.py | Tests for extension name normalization and dependency validation |
| tests/test_client_snapshot.py | Tests for client-side snapshot handling and path unification |
| tests/path_unification/test_path_helpers.py | Comprehensive path unification unit tests |
| tests/conftest.py | Added ComfyUI root to sys.path for test compatibility |
| pyproject.toml | Added asyncio_mode configuration for pytest |
| pyisolate/path_helpers.py | New module for host path context sharing and sys.path reconstruction |
| pyisolate/host.py | Lazy loading implementation and improved logging |
| pyisolate/config.py | Dynamic torch ecosystem package discovery |
| pyisolate/_internal/shared.py | RPC callback support and tensor serialization safety |
| pyisolate/_internal/model_serialization.py | Custom serialization for ComfyUI ModelPatcher objects |
| pyisolate/_internal/host.py | Windows Manager-based queues, lazy initialization, torch inheritance |
| pyisolate/_internal/client.py | Module-level path unification and RPC setup |
| comfy_hello_world/* | Example files demonstrating ComfyUI integration |
| benchmark_results_llamatron_*.txt | Benchmark output file |
| SETUP_GUIDE.md | New setup guide for repository |
| README.md | Updated documentation and quick start guide |
Comments suppressed due to low confidence (3)
tests/test_client_snapshot.py:1 - The hardcoded absolute path /home/johnj/ComfyUI is user-specific and will not work on other machines or CI environments. Consider using a relative path, an environment variable, or creating a mock ComfyUI structure in tmp_path for portability.
tests/test_extension_safety.py:1 - The test expects the environment variable to equal the string "None", but the context manager sets it to None (the Python object). When None is passed to os.environ[key] = str(value), it becomes the string "None". This is inconsistent with the comment on line 70 saying "unsets_when_none". Either the test name is misleading or the behavior should actually remove the key when None is passed.
tests/conftest.py:1 - The hardcoded absolute path /home/johnj/ComfyUI is user-specific. This will break tests on other developer machines and CI environments. Consider using an environment variable with a fallback, or detect the path dynamically.
pyisolate/_internal/host.py (Outdated)

```python
valid_parents = [p for p in sys.path if "site-packages" in p and p.startswith(host_prefix)]

if not valid_parents:
    raise RuntimeError("Could not determine parent site-packages path to inherit")
```
The error message does not provide actionable information for debugging. Consider including details about what paths were checked in valid_parents and what host_prefix was, to help users understand why the detection failed.
Suggested change:

```python
raise RuntimeError(
    f"Could not determine parent site-packages path to inherit.\n"
    f"host_prefix: {host_prefix}\n"
    f"parent_sites: {parent_sites}\n"
    f"valid_parents: {valid_parents}"
)
```
Fixed. Error at host.py:498-502 now includes host_prefix, site_packages, valid_parents, and candidates for debugging.
```ini
include-system-site-packages = false
version = 3.12.3
executable = /usr/bin/python3.12
command = /home/johnj/pyisolate/.venv/bin/python -m venv --copies /home/johnj/pyisolate/comfy_hello_world/node-venvs/simple_text_node
```
This generated venv configuration file contains hardcoded user-specific paths (/home/johnj/). These files should typically be excluded from version control via .gitignore as they are environment-specific and will not work on other machines.
SETUP_GUIDE.md (Outdated)

````markdown
## 2. Bootstrap the repository

```bash
cd /home/johnj/pyisolate
```
````
The setup guide uses the hardcoded path /home/johnj/pyisolate which is user-specific. Replace with a generic placeholder like cd /path/to/pyisolate or cd ~/pyisolate to make the guide applicable to all users.
Fixed. Added comfy_hello_world/node-venvs/ to .gitignore; directory removed from repo.
Fixed. SETUP_GUIDE.md removed entirely; documentation consolidated in README.md.
…applying network isolation
…onditions and `cudaErrorMapBufferObjectFailed`.
- Replace Manager().Queue() with threading.Queue() for Windows to avoid issues with __main__ re-import.
- Introduce socket utilities for platform-agnostic IPC socket path management.
- Update UDS client to support both UDS and TCP connections.
- Modify tests to account for platform-specific behavior and improve security checks.
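The platform-agnostic socket selection mentioned above can be sketched as a capability probe with a TCP fallback. This is an illustration of the UDS-vs-TCP idea, not the PR's socket utilities; the function names are assumptions:

```python
import socket

def has_af_unix() -> bool:
    """Best-effort probe for working AF_UNIX support (sketch)."""
    if not hasattr(socket, "AF_UNIX"):
        return False  # e.g. Python builds without Unix domain sockets
    try:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.close()
        return True
    except OSError:
        return False

def make_listener():
    """Prefer a Unix domain socket; fall back to loopback-only TCP."""
    if has_af_unix():
        return socket.socket(socket.AF_UNIX, socket.SOCK_STREAM), "uds"
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))  # ephemeral port, never exposed off-host
    return s, "tcp"
```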
Hey, @guill - I promised no more updates because I thought the additional changes to re-enable Windows would take the code out-of-scope for your review, but the changes for full Windows compatibility with the isolation part of the new refactor were trivial at <100 lines of code and enabled all of Windows isolation, so I thought it was a reasonable trade-off. I don't see any other places where I plan on touching pyisolate code again unless stability or corner cases reveal a weakness that needs to get patched.
…ting code-base status and functionality.
📝 Walkthrough

Replaces in-process queue RPC with a socket/TCP JSON-RPC IPC, adds adapter-driven isolation and bubblewrap sandboxing, extensive (de)serialization including tensor/CUDA IPC, registry APIs for adapters/serializers, uv-based venv/dependency tooling, new RPC transports/protocol, and a large new test and benchmark harness surface. No public API removals except legacy in-process RPC file deletions.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Host as Host Process
    participant Transport as IPC Transport (UDS/TCP)
    participant Child as Child Process
    participant Adapter as AdapterRegistry
    participant Module as Extension Module
    Host->>Host: prepare bootstrap snapshot
    Host->>Transport: create/listen IPC socket
    Host->>Host: register tensor serializers
    Host->>Child: spawn child (bwrap? + python -m uds_client)
    Child->>Transport: connect to host socket
    Host->>Child: send bootstrap JSON payload
    Child->>Child: store PYISOLATE_HOST_SNAPSHOT
    Child->>Child: bootstrap_child() -> apply sys.path
    Child->>Adapter: _rehydrate_adapter(adapter_ref) (if present)
    Adapter-->>Child: adapter instance
    Child->>Adapter: adapter.setup_child_environment(snapshot)
    Adapter->>Child: adapter.register_serializers(registry)
    Child->>Child: create AsyncRPC(transport)
    Child->>Module: load target module (__init__.py)
    Module->>Child: on_module_loaded()
    Child->>Child: rpc.run_until_stopped()
    Host->>Host: get_proxy() (lazy)
    Host->>Transport: send RPC call
    Transport->>Child: deliver RPC request
    Child->>Module: invoke method
    Module->>Child: return result
    Child->>Transport: serialize & send response
    Transport->>Host: deliver response
    Host->>Host: deserialize & return proxy result
```
```mermaid
sequenceDiagram
    actor CLI as Manager/CLI
    participant Env as Env & Venv tooling
    participant Sandbox as Sandbox Detector
    participant Host as Host Process
    participant Child as Child Process
    CLI->>Env: normalize_extension_name(), build_extension_snapshot()
    CLI->>Env: create_venv(), exclude_satisfied_requirements(), install_dependencies()
    CLI->>Sandbox: detect_sandbox_capability()
    alt sandbox supported
        CLI->>Host: build_bwrap_command(...)
        Host->>Child: exec(bwrap + uds_client)
    else fallback
        Host->>Child: exec(python -m uds_client)
    end
    Host->>Host: create IPC transport & listener
    Host->>Child: send bootstrap payload
    Child->>Child: bootstrap_child() and load module
    Host->>Host: create/get proxy and perform RPC calls as needed
```
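The "send RPC call / deliver response" exchange in the diagrams rides a JSON message stream over the socket. A minimal length-prefixed framing sketch illustrates the idea; this is not PyIsolate's actual wire format, and the function names are assumptions:

```python
import json
import socket
import struct

def send_msg(sock: socket.socket, obj) -> None:
    """Send one JSON message with a 4-byte big-endian length prefix."""
    payload = json.dumps(obj).encode("utf-8")
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_msg(sock: socket.socket):
    """Receive one length-prefixed JSON message (blocking)."""
    header = _recv_exact(sock, 4)
    (length,) = struct.unpack(">I", header)
    return json.loads(_recv_exact(sock, length))

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, since recv() may return short reads."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf
```

Explicit framing is what lets the transport work identically over UDS and TCP, since neither preserves message boundaries on a stream socket.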
Actionable comments posted: 86
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
README.md (1)
38-101: Duplicate "Quick Start" heading detected. There are two ## Quick Start headings in the document (lines 38 and 101). This creates confusing navigation and may cause issues with anchor links. Consider renaming one of them (e.g., "Quick Start" and "Usage Examples" or "Basic Usage").
🤖 Fix all issues with AI agents
In `@benchmarks/benchmark_harness.py`:
- Around line 100-104: The get_manager method has inconsistent indentation: the
line that constructs config (ExtensionManagerConfig(...)) is over-indented; fix
the indentation so the config assignment is aligned with the if-block body
(consistent 4-space indentation under the if) and ensure self.manager =
ExtensionManager(...) is at the same indentation level within get_manager;
verify symbols get_manager, manager, ExtensionManagerConfig, extension_base_cls,
and ExtensionManager are unchanged.
- Around line 1-9: The import "contextmanager" is unused in this module—remove
it from the import list at the top of the file (the line that currently reads
"from contextlib import contextmanager") so only used imports remain; update the
imports block accordingly and run linters/tests to ensure no remaining
references to contextmanager in functions or classes such as any in this
benchmark harness.
- Around line 69-98: The load_extensions method is incomplete (contains a pass
and returns an empty list) and should not silently succeed; replace the
placeholder with a clear NotImplementedError explaining that load_extensions on
the BenchmarkHarness is not implemented and instructing callers to either use
ExtensionManager directly (self.manager) or to provide full ExtensionConfig
objects for loading, referencing the method name load_extensions, the class's
ExtensionManager/ExtensionManagerConfig usage, and the
test_root/create_extension pattern so maintainers know what to implement later.
In `@benchmarks/benchmark.py`:
- Around line 261-268: The code currently constructs the TypedDict by calling
ExtensionConfig(...) — change this to create a dict literal with a type
annotation instead: declare config: ExtensionConfig = { ... } and populate the
same keys (name, module_path, isolated, dependencies, apis, share_torch) so the
variable config is clearly typed as ExtensionConfig; update the block that
assigns to config (the variable named config and the TypedDict type
ExtensionConfig) accordingly while preserving the existing values and
expressions (e.g., str(harness.test_root / "extensions" / name), deps,
[DatabaseSingleton], share_torch).
- Around line 39-49: The DatabaseSingleton.__init__ currently initializes
self._db but doesn't call the parent initializer; update
DatabaseSingleton.__init__ to call super().__init__() (i.e., invoke
ProxiedSingleton's __init__) before or immediately after setting up self._db so
the SingletonMetaclass/ProxiedSingleton initialization runs correctly while
preserving the existing dictionary initialization.
- Around line 332-351: The script currently discards the integer returned by
run_benchmarks; update main() to capture the return value (e.g., result =
asyncio.run(run_benchmarks(...))) and return that result from main(), then
change the module entry to call sys.exit(main()) (import sys at top if missing)
so the process uses run_benchmarks' exit code; keep references to main(),
run_benchmarks, asyncio.run and sys.exit to locate and update the code.
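The dict-literal TypedDict pattern the comment asks for looks like this. A minimal sketch with assumed field names and types (the real ExtensionConfig in the PR has more keys, e.g. module_path, apis, share_torch):

```python
from typing import TypedDict

class ExtensionConfig(TypedDict):
    # Field names follow the review comment; types here are assumptions.
    name: str
    isolated: bool
    dependencies: list[str]

# TypedDicts are plain dicts at runtime. Annotating a dict literal gives
# the type checker the same information as the call-style
# ExtensionConfig(...) form, without looking like class instantiation.
config: ExtensionConfig = {
    "name": "demo_node",
    "isolated": True,
    "dependencies": ["deepdiff"],
}
```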
In `@pyisolate/__init__.py`:
- Around line 16-32: The docstring example instantiates TypedDicts like
constructor calls (ExtensionConfig(...), ExtensionManagerConfig(...)) which is
unconventional; update the example to use dict-style literals for TypedDicts
(e.g., ExtensionConfig = {"name": ..., "module_path": ..., ...}) or otherwise
make the pattern consistent across docs; update the sample lines that reference
ExtensionConfig and ExtensionManagerConfig in the Basic Usage block and ensure
ExtensionManager(...) and ExtensionBase usage remains unchanged.
In `@pyisolate/_internal/bootstrap.py`:
- Around line 136-138: Fix the inconsistent indentation in the if-statement that
checks "if not adapter and adapter_ref": remove the extra leading spaces before
the comment and the raise line so the comment "# If we had info but failed to
load, that's an error" and the subsequent "raise ValueError(...)" align with the
surrounding block indentation (i.e., same indent level as the "if not adapter
and adapter_ref:" line) to maintain consistent formatting.
- Around line 131-138: The code logs exceptions from
_rehydrate_adapter(adapter_ref) but then raises a new generic ValueError, losing
the original exception context; fix by preserving the original exception when
raising the ValueError — capture the caught exception (e.g., assign exc to a
variable like rehydrate_exc inside the except for _rehydrate_adapter), and
change the later raise ValueError("Snapshot contained adapter info but adapter
could not be loaded") to raise ValueError("Snapshot contained adapter info but
adapter could not be loaded") from rehydrate_exc (or simply re-raise the
original exception) so the original stack trace and cause are preserved;
reference _rehydrate_adapter, adapter_ref, the except block variable (exc) and
the later ValueError raise site to locate the change.
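The exception-chaining fix requested above is the standard `raise ... from ...` form. A self-contained sketch (the failing loader here is a stand-in, not the real _rehydrate_adapter):

```python
def rehydrate_adapter(adapter_ref):
    """Stand-in for the adapter loader discussed above; always fails."""
    raise ImportError(f"cannot import {adapter_ref}")

def load_adapter(adapter_ref):
    try:
        return rehydrate_adapter(adapter_ref)
    except Exception as rehydrate_exc:
        # 'raise ... from ...' preserves the original exception as
        # __cause__, so its traceback survives into the new error.
        raise ValueError(
            "Snapshot contained adapter info but adapter could not be loaded"
        ) from rehydrate_exc
```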
In `@pyisolate/_internal/environment.py`:
- Around line 305-320: The code assumes torch is importable when
config["share_torch"] is False, which will raise ImportError if torch is
missing; wrap the import and version inspection in a try/except around the block
that sets torch_spec, common_args, and updates safe_deps (the logic touching
torch_spec, torch_version, cuda_version, common_args, and safe_deps) and on
ImportError raise a clear, user-facing error explaining that torch must be
installed locally when share_torch is False (or alternatively document this
requirement); ensure the exception message mentions config["share_torch"] and
the affected variables (torch_spec and safe_deps) so callers know why the
dependency is required.
In `@pyisolate/_internal/host.py`:
- Around line 456-458: The join method currently calls self.proc.join(), but
self.proc is a subprocess.Popen and Popen has no join; replace the call in def
join(self) -> None: to use self.proc.wait() (or self.proc.communicate() if
output handling is needed) and ensure proc is not None before calling; update
any tests or callers of Host.join to expect wait() semantics and reference the
join method and the proc attribute when making the change.
- Around line 163-164: Indentation on the os.environ assignment is off; inside
the block that checks self._cuda_ipc_enabled the line setting
os.environ["PYISOLATE_ENABLE_CUDA_IPC"] = "1" should be indented to the same
level as the if body (remove the extra leading space before os.environ) so it is
properly nested under the if; locate the check of self._cuda_ipc_enabled in
pyisolate/_internal/host.py and correct the indentation of the os.environ
assignment accordingly.
- Around line 366-377: Remove the leftover commented-out debug logging and the
duplicated comment block in pyisolate/_internal/host.py: delete the three
commented logger.error lines (the "[BWRAP-DEBUG] Launching CMD" debug block) and
remove the repeated duplicate of the "bwrap uses --setenv..." comment so only
one clear instance remains near the code that builds and launches the bwrap
command (look for references to cmd, Popen, and any logger usage in the launch
routine). Ensure the remaining comment about not passing env to the child is
kept concise and singular.
- Around line 293-300: The code uses deprecated tempfile.mktemp to create
uds_path in the has_af_unix() block (see ensure_ipc_socket_dir(), listener_sock,
and self._uds_path) which is vulnerable to TOCTOU; replace mktemp with a secure
unique path generator such as creating a temporary file via
tempfile.NamedTemporaryFile(delete=True) or generating a UUID-based filename
(e.g., using uuid.uuid4().hex) under run_dir, then close/unlink the tempfile (or
use the UUID path) before binding the socket, and ensure file permissions
(os.chmod) and assignment to self._uds_path remain unchanged.
In `@pyisolate/_internal/model_serialization.py`:
- Around line 109-117: The reference-dict branch currently calls the adapter
deserializer directly and returns its result without handling async
deserializers; update the branch that retrieves deserializer via
registry.get_deserializer(ref_type) to invoke deserializer(data), check if the
result is awaitable (e.g., hasattr(result, "__await__")), and if so await it
before returning, mirroring the async-handling logic used earlier in this module
so both synchronous and asynchronous deserializers (from
registry.get_deserializer / deserializer) are handled consistently.
- Around line 91-97: Add debug-level logging inside the broad except in the
async resolution block so failures to resolve remote objects are recorded; catch
the Exception as e and call a module logger (create a module-level logger via
logging.getLogger(__name__) if none exists) to log a debug message that includes
context (data.object_id and the exception) before returning the unresolved
handle, referencing extension.get_remote_object, resolved, data, and the
_pyisolate_remote_handle assignment so the log ties back to this function.
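The sync-vs-async deserializer handling requested above can be expressed with a single awaitable check. A minimal sketch of the pattern, not the module's actual code:

```python
import asyncio
import inspect

async def run_deserializer(deserializer, data):
    """Invoke a deserializer and await the result only if it is awaitable,
    so synchronous and async deserializers are handled uniformly."""
    result = deserializer(data)
    if inspect.isawaitable(result):
        result = await result
    return result
```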
In `@pyisolate/_internal/remote_handle.py`:
- Around line 10-30: Add __slots__ to RemoteObjectHandle to reduce per-instance
memory: modify the class RemoteObjectHandle to declare __slots__ = ("object_id",
"type_name") while keeping the existing __module__ attribute for pickle
compatibility and leaving __init__ and __repr__ unchanged; ensure you reference
the class name RemoteObjectHandle and its attributes object_id and type_name
when making the change.
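The __slots__ change suggested above looks roughly like this (a sketch matching the attributes named in the comment; __init__ and __repr__ bodies are assumed):

```python
class RemoteObjectHandle:
    # With __slots__, instances store attributes in fixed slots instead
    # of a per-instance __dict__, which cuts memory when many small
    # handles cross the RPC boundary. Pickling by attribute still works
    # because both slot names are declared.
    __slots__ = ("object_id", "type_name")

    def __init__(self, object_id: str, type_name: str) -> None:
        self.object_id = object_id
        self.type_name = type_name

    def __repr__(self) -> str:
        return f"RemoteObjectHandle({self.object_id!r}, {self.type_name!r})"
```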
In `@pyisolate/_internal/rpc_protocol.py`:
- Around line 537-540: Replace the manual traceback.print_exc() call in the
except block that catches outer_exc (the "RPC Recv Thread CRASHED" handler) with
logger.exception(...) so the traceback is recorded via the logging framework
(include a descriptive message like "RPC Recv Thread CRASHED" and pass outer_exc
implicitly via logger.exception); make the analogous change in the exception
handler inside the _send_thread so both threads use logger.exception instead of
printing the traceback.
- Around line 278-281: The except clause around the loop retrieval and
call_soon_threadsafe (involving self._get_valid_loop, self.default_loop,
loop.call_soon_threadsafe, and self.blocking_future.set_result) is too broad;
replace the generic "except (RuntimeError, Exception)" with targeted exception
handling—e.g., catch RuntimeError and asyncio.InvalidStateError (import asyncio)
to handle invalid event loop or future state while allowing other exceptions to
surface; optionally log or silence only those specific exceptions instead of
swallowing all Exceptions.
- Around line 153-156: Replace the direct call to asyncio.get_event_loop() when
initializing self.default_loop with a safe pattern: attempt to get the running
loop via asyncio.get_running_loop() inside a try/except and if there is no
running loop either set self.default_loop to None (to defer acquisition until
used) or create a dedicated new loop via asyncio.new_event_loop(); update usage
sites to acquire or create a loop under self._loop_lock when needed so
default_loop isn’t assumed to exist. Specifically modify the code that sets the
attribute default_loop on the object (the self.default_loop assignment) and
ensure any code that uses default_loop acquires _loop_lock and initializes the
loop if it is None.
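The safe loop-acquisition pattern described above can be sketched as follows. This is an illustration of the suggested fix, not the actual AsyncRPC code; the class and attribute names mirror the comment's terminology:

```python
import asyncio
import threading
from typing import Optional

class LoopHolder:
    """Defer event-loop acquisition instead of asyncio.get_event_loop()."""

    def __init__(self) -> None:
        self._loop_lock = threading.Lock()
        self.default_loop: Optional[asyncio.AbstractEventLoop] = None
        try:
            # Only succeeds when constructed inside a running loop.
            self.default_loop = asyncio.get_running_loop()
        except RuntimeError:
            pass  # defer: a loop is created on first use instead

    def get_loop(self) -> asyncio.AbstractEventLoop:
        # All acquisition happens under the lock, so threads never race
        # to create or replace the loop.
        with self._loop_lock:
            if self.default_loop is None or self.default_loop.is_closed():
                self.default_loop = asyncio.new_event_loop()
            return self.default_loop
```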
In `@pyisolate/_internal/rpc_serialization.py`:
- Around line 206-211: Remove the unused temporary variable _debug_rpc (assigned
from debug_all_messages) or replace its usage where intended; specifically
delete the line "_debug_rpc = debug_all_messages" and update any references to
use debug_all_messages directly (notably the debugprint usage), or if a separate
runtime-togglable flag was intended, implement and use that flag consistently
instead of leaving _debug_rpc unused (check symbol names debug_all_messages and
debugprint to locate call sites).
- Around line 24-27: Remove the empty TYPE_CHECKING block in
rpc_serialization.py: delete the conditional block that only contains "pass"
(the TYPE_CHECKING guard and its pass) since it serves no purpose; ensure no
other code relies on that block and run tests/type checks to confirm no import
cycles are affected.
- Around line 143-144: The type annotation for the __iter__ method is wrong:
change its return type from Iterable[str] to Iterator[str] and update imports
accordingly (e.g., add typing.Iterator or from typing import Iterator) so
__iter__(self) -> Iterator[str] returns iter(self._data); locate the method
named __iter__ and the attribute _data to make this change.
In `@pyisolate/_internal/rpc_transports.py`:
- Around line 374-391: The object reconstruction block (involving cls, data,
'cond', obj, property checks, setattr, __init__ and the fallback __new__ path)
should emit a debug log indicating which instantiation path succeeded so
failures are easier to trace; modify the try/except branches to call the logger
(e.g., process or module logger) with a clear message when returning from the
'cond' constructor, the no-arg constructor (after setting attributes, skipping
read-only properties via prop.fset), or the __new__-only/fallback path,
including the cls.__name__ and a short identifier from data to aid debugging.
- Line 326: Replace the f-string in the logger.warning call with %-style lazy
formatting: change the logger.warning invocation that currently uses f"Failed to
deserialize {type_name}: {e}" so it passes a format string and the variables as
separate arguments (use the logger.warning format "%s" placeholders and supply
type_name and e), e.g. call logger.warning with the format string and the two
arguments instead of interpolating with an f-string; update the logger.warning
line that references type_name and e accordingly.
- Around line 93-94: ConnectionTransport.recv currently calls self._conn.recv()
without synchronization while ConnectionTransport.send uses self._lock; to fix,
protect recv with a lock (either reuse self._lock or introduce a dedicated
self._recv_lock) so concurrent recv calls are serialized, i.e., acquire the lock
before calling self._conn.recv() and release after, and update any init code to
create the lock if needed; alternatively, if you prefer not to lock, add
explicit documentation on ConnectionTransport.recv that it must only be called
from a single thread.
- Around line 269-276: The code calls self._json_default(v) when json.dumps(v)
fails, which can cause infinite recursion for nested non-serializable objects;
modify _json_default to accept and track a recursion depth parameter (e.g.,
depth or _depth) and decrement/check it on each recursive call, returning a safe
fallback (like a string repr or {"__unserializable__": type_name}) when the
depth limit is reached; update all call sites including where serialized_dict[k]
= self._json_default(v) to pass the initial depth (e.g., max_depth) and ensure
_json_default handles attributes like __dict__ by recursion only while depth >
0.
- Around line 344-358: The current enum reconstruction block swallows all
exceptions; change the bare "except Exception: pass" to capture the exception
(e.g., "except Exception as e") and emit a debug log including module_name,
type_name, enum_name and the exception details before falling back to returning
dct.get('value'); ensure a logger is available (import logging and
getLogger(__name__) or use the module's existing logger) and include the
exception info (e.g., logger.debug(..., exc_info=True)) so failures to import
the module or retrieve the enum are recorded for troubleshooting.
In `@pyisolate/_internal/sandbox_detect.py`:
- Around line 143-171: The _test_bwrap function currently hardcodes paths (/lib,
/lib64, /bin, /usr, /usr/bin/true) which can be missing on some systems; modify
_test_bwrap to first probe for available candidates (e.g., prefer /usr if
present, fall back to /, check both /usr/bin/true and /bin/true via shutil.which
or os.path.exists, and only include --ro-bind entries for paths that exist) and
then build the subprocess.run argument list from those validated paths before
calling bwrap_path; update error handling to report which bind/path was missing
if the test fails.
In `@pyisolate/_internal/sandbox.py`:
- Around line 259-269: The list cuda_env_vars contains a duplicate entry
"PYISOLATE_ENABLE_CUDA_IPC"; remove the redundant occurrence so each environment
variable appears only once. Update the cuda_env_vars definition in
pyisolate/_internal/sandbox.py to delete the duplicated
"PYISOLATE_ENABLE_CUDA_IPC" entry, leaving a single instance to avoid redundant
checks or processing.
- Around line 148-155: The skip logic and comment around cuda_paths/cuda_path
and the cmd --ro-bind entries are inconsistent: update the condition to
consistently skip any path under /usr (e.g. use cuda_path.startswith("/usr/") )
if that bind will already be provided, and remove the special-casing that
excluded /usr/local; also update the inline comment to state clearly that any
/usr/* path is assumed covered by the /usr bind so we continue, otherwise append
the --ro-bind for that cuda_path to cmd. Ensure you modify the block that
iterates over cuda_paths and references cuda_path and cmd and keep the behavior
of adding cmd.extend(["--ro-bind", cuda_path, cuda_path]) only for non-/usr
paths.
In `@pyisolate/_internal/serialization_registry.py`:
- Around line 31-44: The register method in serialization_registry.py currently
leaves a previous deserializer in self._deserializers when re-registering a type
with a new serializer but no deserializer; update the register logic (method
register) to clear any stale deserializer for the given type_name when
deserializer is None by removing/popping self._deserializers[type_name] if
present, so the serializer and deserializer maps remain consistent.
In `@pyisolate/_internal/socket_utils.py`:
- Around line 34-43: ensure_ipc_socket_dir currently creates the IPC socket dir
but does not tighten permissions when the dir already exists; on non-Windows,
after computing socket_dir via get_ipc_socket_dir() in ensure_ipc_socket_dir,
check that socket_dir is an existing directory (Path.is_dir()), read its current
mode (os.stat or socket_dir.stat()), and if the permissions are more permissive
than 0o700, call socket_dir.chmod(0o700) to restrict them; ensure you only
perform this tightening on non-Windows platforms and preserve the existing
mkdir(parents=True, exist_ok=True, mode=0o700) behavior for creation while
adding the post-existence permission check and chmod when needed.
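The post-existence permission check might look like this sketch (signature simplified to take the directory directly rather than calling `get_ipc_socket_dir()`):

```python
import stat
import sys
from pathlib import Path

def ensure_ipc_socket_dir(socket_dir: Path) -> Path:
    """Create the IPC socket directory with 0o700, and tighten the mode
    if it already exists with looser permissions (POSIX only)."""
    # Preserves the existing creation behavior.
    socket_dir.mkdir(parents=True, exist_ok=True, mode=0o700)
    if sys.platform != "win32" and socket_dir.is_dir():
        mode = stat.S_IMODE(socket_dir.stat().st_mode)
        # Any group/other bits mean the dir is wider than 0o700.
        if mode & 0o077:
            socket_dir.chmod(0o700)
    return socket_dir
```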
In `@pyisolate/_internal/tensor_serializer.py`:
- Around line 20-32: Remove the stray invalid token `dest="TensorKeeper",` that
appears just after the class docstring in the TensorKeeper class; this is
causing a SyntaxError at import. Open the TensorKeeper class and delete that
line so the class body contains only the docstring, __init__, and other valid
members (ensure __init__ remains properly indented and uses
self.retention_seconds, self._keeper, and self._lock as defined). Verify no
other stray tokens remain outside method definitions in TensorKeeper.
- Around line 89-95: The code path handling sfunc.__name__ ==
'rebuild_storage_fd' can recurse indefinitely if after calling
torch.multiprocessing.set_sharing_strategy('file_system') and t.share_memory_()
the subsequent call to _serialize_cpu_tensor(t) still leads back to
rebuild_storage_fd; add a one-time retry guard: modify _serialize_cpu_tensor to
accept an optional retry flag (e.g., tried_file_system=False) or use a local
sentinel so that when you call _serialize_cpu_tensor(t) the second time you
detect the retry and, if it still resolves to 'rebuild_storage_fd', raise a
RuntimeError instead of recursing; reference the sfunc.__name__ check,
torch.multiprocessing.set_sharing_strategy, t.share_memory_(), and
_serialize_cpu_tensor to implement the guard.
In `@pyisolate/_internal/uds_client.py`:
- Around line 150-152: The unconditional import of torch and call to
torch.multiprocessing.set_sharing_strategy('file_system') will raise ImportError
when torch is not installed; wrap the import and subsequent set_sharing_strategy
call in a try/except ImportError (or use importlib.import_module) so that when
torch is absent the code silently skips setting the sharing strategy (or logs a
debug message). Locate the import and call in the module (the lines that
reference torch and torch.multiprocessing.set_sharing_strategy) and ensure no
other code assumes torch is present when that import fails.
- Around line 219-222: Replace the two runtime asserts with explicit validation
and exceptions: check that module_spec is not None and that module_spec.loader
is not None, and if either is missing raise a clear exception (e.g., ImportError
or RuntimeError) with a descriptive message; update the code around the
module_spec and module_spec.loader checks in uds_client.py (the variables named
module_spec and module_spec.loader) to use if/raise instead of assert so the
validations remain in optimized runs.
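For example (a sketch; the surrounding loader code in `uds_client.py` differs):

```python
import importlib.util

def load_module_spec(module_path, module_name):
    """Validate the spec explicitly instead of with assert, so the checks
    survive `python -O`, which strips assertions."""
    module_spec = importlib.util.spec_from_file_location(module_name, module_path)
    if module_spec is None:
        raise ImportError(f"could not create a module spec for {module_path!r}")
    if module_spec.loader is None:
        raise ImportError(f"module spec for {module_path!r} has no loader")
    return module_spec
```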
In `@pyisolate/config.py`:
- Around line 19-53: The ExtensionConfig TypedDict currently marks
share_cuda_ipc, sandbox, and env as required which contradicts runtime checks;
change those three keys to NotRequired and import NotRequired from
typing_extensions (which is already used elsewhere) and update the sandbox type
from dict[str, Any] to SandboxConfig for reuse/clarity; update the
ExtensionConfig definition to use NotRequired for "share_cuda_ipc", "sandbox",
and "env" so mypy matches runtime behavior.
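A sketch of the resulting TypedDict; the required-field names come from the review, but their value types and the `SandboxConfig` body are assumptions, not the actual `pyisolate/config.py` definitions (which also import `NotRequired` from `typing_extensions` rather than falling back):

```python
try:  # NotRequired is in typing on Python 3.11+
    from typing import NotRequired, TypedDict
except ImportError:
    from typing_extensions import NotRequired, TypedDict

class SandboxConfig(TypedDict, total=False):
    enabled: bool  # hypothetical field for illustration

class ExtensionConfig(TypedDict):
    name: str
    module_path: str
    isolated: bool
    dependencies: list[str]
    apis: list[str]  # value types assumed for this sketch
    share_torch: bool
    # Optional keys: runtime code checks for their presence, so the
    # type must not force callers to supply them.
    share_cuda_ipc: NotRequired[bool]
    sandbox: NotRequired[SandboxConfig]
    env: NotRequired[dict[str, str]]
```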
In `@pyisolate/path_helpers.py`:
- Around line 77-82: The hardcoded set assigned to code_subdirs (containing
"comfy","app","comfy_execution","utils") couples this helper to ComfyUI; change
the function that computes code_subdirs to accept an optional parameter (e.g.,
required_subdirs or code_subdirs_override) and use that list when provided,
falling back to the current default set for backwards compatibility; update
references that call the function to pass a custom list when detecting non-Comfy
roots and keep variable names root_norm and code_subdirs unchanged to ease
locating the change.
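The override parameter could be sketched like this (function name is illustrative; the real helper computes these inside `build_child_sys_path`):

```python
import os

# The current ComfyUI-specific default, kept for backwards compatibility.
DEFAULT_CODE_SUBDIRS = frozenset({"comfy", "app", "comfy_execution", "utils"})

def compute_code_subdirs(root_norm, code_subdirs_override=None):
    """Return the absolute code-subdirectory paths to filter under
    root_norm, letting callers override the ComfyUI-specific default."""
    code_subdirs = (
        set(code_subdirs_override)
        if code_subdirs_override is not None
        else set(DEFAULT_CODE_SUBDIRS)
    )
    return {os.path.join(root_norm, d) for d in code_subdirs}
```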
In `@README_COMFYUI.md`:
- Line 202: The README contains a broken internal link to
"#appendix-supported-apis" referenced in the line "See [Appendix: Supported
APIs](`#appendix-supported-apis`) for complete function lists."; either add a
matching section titled "Appendix: Supported APIs" (with anchor/id
appendix-supported-apis) containing the complete function lists referenced, or
remove/replace the link with a correct existing section reference or external
URL; update the README_COMFYUI.md accordingly and ensure the anchor text exactly
matches the link target.
- Line 244: The README references a non-existent anchor "#advanced-web-routes"
for the route_manifest.json workaround; either add an "Advanced: Web Routes"
section that documents route_manifest.json usage (including purpose, format, and
example) or change the inline link and heading text to point to an existing
section (or remove the anchor) so the link resolves; update the text that
currently says "see [Advanced: Web Routes](`#advanced-web-routes`)" to reference
the new or correct heading and ensure the heading name matches the link exactly.
- Line 306: Update the broken anchor link fragment from "#what-doesnt-work-yet"
to "#what-doesnt-work" in the README text (the string literal for the link
target) so it matches the actual heading; search for occurrences of
"#what-doesnt-work-yet" and replace them with "#what-doesnt-work" (e.g., the
link in the line containing "What Doesn't Work" reference).
- Around line 38-42: The README's installation step uses an incorrect command
("git install ."); update the instructions to use the correct installer by
replacing "git install ." with "pip install ." (or "pip3 install ." if your
environment requires it) in the cloned pyisolate instructions so users can
install the package successfully.
- Around line 82-117: The markdown code blocks showing logs (e.g. the block
starting with "][ ComfyUI-IsolationTest cache miss, spawning process for
metadata" and the blocks under "Expected logs - Reporting" and "Expected logs -
during workflow usage") lack language specifiers; update each triple-backtick
fence to include a language like "text" (or "bash" if preferred) so they become
```text ... ``` to enable proper syntax highlighting and readability throughout
README_COMFYUI.md.
In `@README.md`:
- Around line 79-80: Change the bare URL to a markdown link and fix the typo in
the referenced directory: replace the bare
"https://comfy-org.github.io/pyisolate/" with a markdown link like [Project
site](https://comfy-org.github.io/pyisolate/), and update the file path prefixes
from "mysolate/HELLO_WORLD.md" and "mysolate/GETTING_STARTED.md" to
"pyisolate/HELLO_WORLD.md" and "pyisolate/GETTING_STARTED.md" so the README
links render and point to the correct docs.
In `@tests/conftest.py`:
- Around line 45-47: The fixture is redundantly restoring
host_internal.Extension._Extension__launch manually after yield even though the
pytest monkeypatch fixture auto-reverts changes; remove the capturing of
original_launch and the post-yield monkeypatch.setattr call (the line that sets
_Extension__launch back to original_launch) and keep only the initial
monkeypatch.setattr(host_internal.Extension, "_Extension__launch", dummy_launch)
before yield so monkeypatch handles teardown automatically.
In `@tests/fixtures/test_adapter.py`:
- Around line 254-259: The deserializer passed to registry.register currently
returns the raw dict when the "__testdata__" marker is missing, which can hide
serialization bugs; update the deserializer lambda used in registry.register
(the deserializer for "MockTestData") to raise an explicit error (e.g.,
ValueError with a clear message referencing MockTestData and the missing marker)
instead of returning the input, and if you intentionally want to accept
malformed/backwards-compatible data, add a concise comment next to the
deserializer explaining that behavior and why it’s allowed.
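For instance, the strict deserializer might look like (a sketch, written as a named function rather than the lambda):

```python
def deserialize_mock_test_data(payload):
    """Fail loudly when the marker is missing instead of silently passing
    the raw dict through, so serialization bugs surface in tests."""
    if not isinstance(payload, dict) or "__testdata__" not in payload:
        raise ValueError(
            "MockTestData deserializer: missing '__testdata__' marker in "
            f"payload {payload!r}"
        )
    return payload["__testdata__"]
```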
- Around line 86-119: TestMockRegistryBehavior uses MockRegistry (which inherits
ProxiedSingleton) and shares singleton state across tests; add setup_method and
teardown_method to TestMockRegistryBehavior that clear
SingletonMetaclass._instances before and after each test to ensure isolation
(implement methods named setup_method and teardown_method that call
SingletonMetaclass._instances.clear(), referencing TestMockRegistryBehavior,
MockRegistry, ProxiedSingleton, and SingletonMetaclass._instances to locate the
right classes).
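The isolation hooks could look like this sketch, with a toy `SingletonMetaclass` standing in for pyisolate's real one:

```python
class SingletonMetaclass(type):
    """Toy stand-in for pyisolate's singleton metaclass."""
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in SingletonMetaclass._instances:
            SingletonMetaclass._instances[cls] = super().__call__(*args, **kwargs)
        return SingletonMetaclass._instances[cls]

class MockRegistry(metaclass=SingletonMetaclass):
    def __init__(self):
        self.items = {}

class TestMockRegistryBehavior:
    # Clear the singleton cache around every test so state cannot leak.
    def setup_method(self):
        SingletonMetaclass._instances.clear()

    def teardown_method(self):
        SingletonMetaclass._instances.clear()

    def test_fresh_instance(self):
        MockRegistry().items["k"] = 1
```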
In `@tests/harness/host.py`:
- Around line 170-191: The cleanup method currently collects exceptions into
cleanup_errors but then discards them; update the cleanup method (async def
cleanup) to surface failures by either logging the aggregated errors or raising
an exception: after stopping extensions (ext.stop), unregistering
AdapterRegistry and calling self.temp_dir.cleanup(), if cleanup_errors is
non-empty build a single message from cleanup_errors (include which step
produced each error, e.g., ext.name or "temp_dir") and call the appropriate
logger (e.g., self.logger.error) or raise a RuntimeError (or an ExceptionGroup on
Python 3.11+) with that message so callers are aware
of cleanup failures; ensure references to cleanup_errors, ext.stop,
AdapterRegistry.unregister and self.temp_dir.cleanup are used to produce
informative output.
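The aggregation could be shaped like this sketch, with `extensions` and `temp_dir_cleanup` standing in for `self.extensions`, `AdapterRegistry.unregister`, and `self.temp_dir.cleanup`:

```python
import asyncio

async def cleanup(extensions, temp_dir_cleanup):
    """Collect per-step failures and raise one aggregated RuntimeError
    so callers actually see cleanup problems."""
    cleanup_errors = []
    for ext in extensions:
        try:
            await ext.stop()
        except Exception as exc:
            cleanup_errors.append(f"stopping {ext.name!r}: {exc}")
    try:
        temp_dir_cleanup()
    except Exception as exc:
        cleanup_errors.append(f"temp_dir: {exc}")
    if cleanup_errors:
        raise RuntimeError("cleanup failed: " + "; ".join(cleanup_errors))
```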
- Around line 138-139: The conditional checking share_torch is a no-op and
should be removed or implemented: either delete the empty branch (and remove the
unused share_torch parameter from the surrounding function signature and
callers) or implement the intended sharing behavior for Torch (e.g., ensure
site-packages inheritance or create the necessary symlink/PYTHONPATH changes)
inside the same function where share_torch is referenced so the flag has an
effect; update callers and tests accordingly to reflect the chosen change.
In `@tests/harness/test_package/__init__.py`:
- Around line 27-30: The test initialization currently writes a marker into
sys.modules via sys.modules["_test_ext_initialized"] = True inside async def
initialize; replace this with a cleaner approach by setting a module-level
marker variable (e.g., _test_ext_initialized = True) or by registering the flag
in a dedicated test registry dictionary (e.g.,
test_registry["test_pkg_initialized"] = True) from within initialize; update any
test code that checks sys.modules["_test_ext_initialized"] to read the new
symbol (module-level _test_ext_initialized or the registry key) and remove the
sys.modules write.
In `@tests/integration_v2/debug_rpc.py`:
- Around line 6-8: The module-level logging configuration
(logging.basicConfig(...) and logging.getLogger("pyisolate").setLevel(...))
should not run at import time; move or guard it so it only applies during the
specific test run—e.g., relocate these calls into the test function that needs
debug output, or wrap them behind a conditional (if __name__ == "__main__") or
use pytest fixtures like caplog to configure logging per-test; update the code
to call logging.basicConfig and setLevel inside the targeted test function (or
via a fixture) so global logging state is not changed on module import.
In `@tests/integration_v2/test_isolation.py`:
- Around line 47-51: Remove the stray debug/developer notes in the test comments
inside tests/integration_v2/test_isolation.py (the block starting with "# Test
3: Write to module path..."). Clean up the commented lines so the test only
contains concise, relevant assertions or documentation—delete the "should be
allowed? or RO?", sandbox.py lookup notes, and "If I can't check, I'll test it."
comments, leaving a clear short comment explaining the intent of Test 3 or no
comment at all.
- Around line 39-43: The test currently catches a bare Exception around
proxy.write_file and sets write_succeeded, which is too broad; change the except
to only catch expected filesystem-related errors (e.g., except (PermissionError,
OSError) as e:) and set write_succeeded = False there, while allowing other
exceptions to propagate (or be explicitly re-raised) so unrelated failures
aren't swallowed; apply the same change to the second occurrence around
proxy.write_file and write_succeeded in this test.
- Around line 22-35: Remove the debug/thinking comments in
tests/integration_v2/test_isolation.py and replace them with a concise test
docstring or a single-line comment stating the test intent (verify child cannot
write to system paths). Delete speculative/user-specific notes like
"/home/johnj/pyisolate" and the step-by-step reasoning, and if helpful convert
the essential checks into a brief sentence (e.g., "attempt to write to /usr or
/etc/passwd to assert isolation") near the test function name so the purpose
remains clear but no development notes remain.
- Around line 12-55: The test creates a temporary sensitive file named
sensitive_path but never exercises access to it; either remove that unused setup
or change the test to attempt reads/writes against sensitive_path via the child
proxy (e.g., call await proxy.read_file(sensitive_path) / await
proxy.write_file(sensitive_path, "...") and assert access is denied), while
keeping the existing cleanup in the finally block; locate the setup using
sensitive_path and the child calls using proxy.write_file / proxy.read_file and
update the assertions accordingly (or delete the NamedTemporaryFile block if you
choose to drop that scenario).
In `@tests/integration_v2/test_lifecycle.py`:
- Around line 7-12: Fix the typo in the module docstring: change "actally" to
"actually" in the triple-quoted docstring at the top of test_lifecycle (the
module-level docstring in test_lifecycle.py) so the sentence reads "Extension
can actually accept a connection."
- Around line 25-43: The test function test_non_isolated_lifecycle is misnamed
because it calls reference_host.load_test_extension(..., isolated=True,
share_torch=False); either rename the test to match its behavior (e.g.,
test_isolated_without_torch_sharing) or change the call to isolated=False to
test non-isolated mode; update the function name or the isolated argument
accordingly and remove or adjust the commented uncertainty so the test intent
matches reference_host.load_test_extension and ext.get_proxy() usage.
In `@tests/integration_v2/test_tensors.py`:
- Around line 2-9: Replace the unconditional "import torch" with a guarded
import using pytest.importorskip so test collection won't fail when torch isn't
installed; specifically, change the top-level import of the symbol "torch" in
tests/integration_v2/test_tensors.py to use pytest.importorskip('torch')
(keeping the existing numpy guard and the HAS_NUMPY variable intact) so the
module-level reference to "torch" is assigned only when the dependency is
present.
In `@tests/path_unification/test_path_helpers.py`:
- Around line 44-45: The test currently opens the JSON output without specifying
an encoding (the lines using with open(output_path) as f: and json.load(f));
update the test to open the file with encoding="utf-8" so it matches the
implementation (i.e., use with open(output_path, encoding="utf-8") as f:)
ensuring json.load reads UTF-8 consistently.
- Around line 96-106: Test test_inserts_comfy_root_first_when_missing uses a
fallback comfy_root (Path.home() / "ComfyUI") that may not exist; instead create
a deterministic temporary comfy_root and ensure it differs from host before
calling build_child_sys_path. Modify the
test_inserts_comfy_root_first_when_missing to use a tmp_path (or tempfile) to
create a temporary directory for comfy_root (or set os.environ["COMFYUI_ROOT"]
to that temp path), ensure that directory exists and is not in host, then call
build_child_sys_path(host, extras, comfy_root) and assert comfy_root is
prepended; reference test_inserts_comfy_root_first_when_missing,
build_child_sys_path, comfy_root, and Path.home() when updating the test.
In `@tests/test_adapter_contract.py`:
- Around line 72-109: Add a pytest autouse fixture to ensure SerializerRegistry
is cleared before and after each test to prevent singleton state leakage: create
a fixture (e.g., clean_registry) that calls
SerializerRegistry.get_instance().clear() before yielding and clears again after
yield, and add it to the test module so tests that exercise
MockHostAdapter.register_serializers and registry.get_serializer always run
against a fresh registry instance.
- Around line 145-154: The test test_handle_api_registration_accepts_args
currently passes rpc = None which can hide integration issues; change the test
to create a realistic mock RPC (e.g., using unittest.mock.MagicMock with
spec=AsyncRPC) and pass that to MockHostAdapter.handle_api_registration
alongside MockRegistry so the adapter is validated against actual RPC
attributes/methods instead of None; add the appropriate import for MagicMock
(and AsyncRPC if needed) and ensure the rpc mock is used in the call to
adapter.handle_api_registration(api, rpc).
In `@tests/test_bootstrap.py`:
- Around line 79-84: Replace the obscure generator-based exception in the
test_bootstrap_missing_adapter test with a clear raising function: instead of
using "(_ for _ in ()).throw(ValueError('nope'))" set the monkeypatch for
bootstrap._rehydrate_adapter to a small lambda or def that simply raises
ValueError("nope") so the test remains the same (monkeypatch.setattr(bootstrap,
"_rehydrate_adapter", <callable that raises ValueError("nope")>)) and
readability is improved.
In `@tests/test_config_validation.py`:
- Around line 35-49: The test test_name_must_be_nonempty currently only asserts
the TypedDict value is an empty string but doesn't exercise runtime validation;
update the test to call the project's runtime validation function (e.g.,
validate_extension_config or whatever validates ExtensionConfig) with the config
and assert it raises/returns an error for an empty "name", or if no validator
exists, add a short note in the test that validation is intentionally deferred;
reference test_name_must_be_nonempty and ExtensionConfig to locate and update
the test to either invoke the validator and expect failure or document the
deferral.
- Around line 35-49: The test configs (e.g., inside test_name_must_be_nonempty)
are missing the required ExtensionConfig fields `sandbox` and `env`; update
every test config object to include "sandbox": {} and "env": {} so the TypedDict
with total=True is satisfied and static type checking passes, keeping the
existing values for name, module_path, isolated, dependencies, apis,
share_torch, and share_cuda_ipc unchanged.
- Around line 133-147: Update the test_share_cuda_ipc_requires_share_torch test
to verify enforcement by creating an ExtensionConfig with "share_torch": False
and "share_cuda_ipc": True and asserting the runtime validator rejects it; call
the project's extension validation routine (e.g., validate_extension_config or
the same validator used elsewhere in tests) and assert it raises an error
(ValueError/ValidationError) instead of only checking the dict values.
In `@tests/test_extension_lifecycle.py`:
- Around line 169-199: The two tests lack real assertions: update
test_extension_lifecycle_phases to assert meaningful behavior (e.g., validate
ExtensionConfig creation, venv creation, process launch, RPC execute, and
shutdown via the relevant helper functions or class methods used in your suite)
or replace the placeholder list check with calls that exercise the lifecycle
(use the same helpers that create ExtensionConfig, create/inspect venv,
start/stop the Extension process and perform an RPC call), and update
test_extension_stop_is_idempotent to actually call the Extension.stop() method
twice and assert no exception and that the extension is in a stopped state (or,
if you intend them purely as documentation, decorate both
test_extension_lifecycle_phases and test_extension_stop_is_idempotent with
pytest.mark.skip or convert them to module/class docstrings to avoid empty
tests).
- Around line 202-239: The tests under TestDependencyValidation only assert that
the ExtensionConfig TypedDict accepts string lists
(test_valid_dependency_format, test_empty_dependencies_allowed) which does not
exercise runtime validation; update these tests to call the actual
validation/runtime logic that enforces dependency formats instead (for example
the function that creates the venv or the config validator such as
validate_extension_config, create_extension_venv, or
ExtensionManager.validate_config) and assert it accepts the valid_deps list and
rejects malformed strings (and that empty list is allowed); replace the current
assertions with calls to that validator and corresponding positive/negative
assertions so the format enforcement is actually tested.
- Around line 78-95: The test
TestExtensionVenvPath::test_venv_path_includes_extension_name currently only
checks the config name and doesn't exercise the Extension venv path logic;
either update the test to construct an Extension (use the Extension class and
its venv path method/property, e.g., Extension(...) then call get_venv_path or
venv_path) and assert the returned path contains "my_extension", or rename the
test to something like test_declares_contract_placeholder to make it explicit
this is only documenting the config. Ensure you reference the Extension class
and the test_venv_path_includes_extension_name test in the change.
- Around line 113-120: The test contains a local import "import yaml" inside the
test body; move that import to the module-level imports at the top of
tests/test_extension_lifecycle.py so the module imports include "import yaml",
then remove the inner import from the test method (the assertions using
manifest["..."] remain unchanged) to ensure import errors surface at import time
and follow project import conventions.
In `@tests/test_host_integration.py`:
- Around line 43-51: The test test_build_extension_snapshot_no_adapter uses a
hardcoded path "/tmp/nowhere"; change it to accept the pytest tmp_path fixture
and call host.build_extension_snapshot with the temp directory (e.g.,
str(tmp_path) or tmp_path.as_posix()) so the test uses an isolated,
platform-independent temp dir. Update the test signature to def
test_build_extension_snapshot_no_adapter(tmp_path, monkeypatch): and replace the
literal path in the host.build_extension_snapshot call accordingly.
- Around line 30-40: Modify the test_build_extension_snapshot_includes_adapter
to accept the pytest tmp_path fixture (add tmp_path param), create the needed
directory structure under tmp_path (e.g., tmp_path / "ComfyUI" / "custom_nodes"
/ "demo"), and pass that dynamic path to host.build_extension_snapshot instead
of the hard-coded "/tmp/ComfyUI/custom_nodes/demo"; keep the monkeypatch on
AdapterRegistry.get to return FakeAdapter and update the final assertion to
compare snapshot["context_data"]["module_path"] to the string form of the
tmp_path demo directory. This targets the test function
test_build_extension_snapshot_includes_adapter and the call to
host.build_extension_snapshot so the test becomes cross-platform and isolated.
- Around line 5-28: The FakeAdapter class currently hard-codes "/tmp/ComfyUI" as
the default preferred_root which violates the PR requirement to avoid fixed
paths in tests; update FakeAdapter (constructor and get_path_config) to accept a
configurable preferred_root (remove or replace the "/tmp/ComfyUI" literal) so
tests can inject a fixture path (e.g., pytest tmp_path) instead of relying on a
hard-coded location, and adjust any tests creating FakeAdapter to pass tmp_path
/ tmp_path / "ComfyUI" as the preferred_root.
In `@tests/test_host_internal_ext.py`:
- Around line 183-185: The test hard-codes "python3.12" when creating
site-packages which will fail on other Python versions; change the site_packages
construction in tests/test_host_internal_ext.py to compute py_ver =
f"python{sys.version_info.major}.{sys.version_info.minor}" and use venv_path /
"lib" / py_ver / "site-packages" when assigning site_packages (sys is already
imported), keeping the existing venv_path and site_packages variable names.
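The version-agnostic construction, wrapped in a hypothetical helper for illustration:

```python
import sys
from pathlib import Path

def site_packages_dir(venv_path: Path) -> Path:
    """Build the site-packages path from the running interpreter's
    version instead of hard-coding python3.12."""
    py_ver = f"python{sys.version_info.major}.{sys.version_info.minor}"
    return venv_path / "lib" / py_ver / "site-packages"
```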
In `@tests/test_host_public.py`:
- Around line 92-99: The test test_stop_all_extensions_calls_stop only checks
mgr.extensions is empty but doesn't verify that each extension's stop() was
invoked; update the test to attach a spy or mock object as the extension
implementation returned/registered by make_manager/load_extension (or replace
the extension instances in mgr.extensions after load_extension) and assert that
their stop() method was called when calling mgr.stop_all_extensions(); reference
the test function test_stop_all_extensions_calls_stop, manager instance mgr, the
load_extension and stop_all_extensions methods and the extensions registry to
locate where to inject the mocked extension and the stop() assertion.
In `@tests/test_path_helpers_contract.py`:
- Around line 54-65: The assertion in test_host_paths_preserved is too complex
and can yield false positives; simplify it by normalizing the host path and the
returned paths and asserting membership directly: call
path_helpers.build_child_sys_path in the test, build a set of normalized results
(e.g., {os.path.normpath(r) for r in result}), then for each host path assert
os.path.normpath(path) is in that set so comparisons are exact and readable.
- Around line 67-78: The test test_extra_paths_included currently only checks
length; change it to assert that the extra path(s) are actually present in the
returned result from path_helpers.build_child_sys_path — e.g., after calling
build_child_sys_path with host_paths=host and extra_paths=extra, assert that
each entry in the extra list appears in result (use direct equality or
substring/startswith checks if normalization occurs) instead of relying solely
on len(result) >= len(host). Ensure you reference the variables host, extra and
the function build_child_sys_path when updating the assertion.
In `@tests/test_remote_handle.py`:
- Around line 10-32: Add a pickle round-trip test in tests/test_remote_handle.py
to verify RemoteObjectHandle can be pickled and unpickled: import pickle, create
a RemoteObjectHandle instance, serialize with pickle.dumps and deserialize with
pickle.loads, and assert that the restored object's object_id and type_name
match the original (name the test e.g. test_remote_handle_pickle_roundtrip);
place it alongside the existing tests for
test_remote_handle_stores_id/type_name/repr.
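The round-trip test could look like this sketch; the `RemoteObjectHandle` here is a toy dataclass mirroring only the two fields named above, not pyisolate's real class:

```python
import pickle
from dataclasses import dataclass

@dataclass
class RemoteObjectHandle:
    """Toy stand-in exposing the fields the real handle is asserted on."""
    object_id: str
    type_name: str

def test_remote_handle_pickle_roundtrip():
    handle = RemoteObjectHandle(object_id="obj-1", type_name="Widget")
    restored = pickle.loads(pickle.dumps(handle))
    assert restored.object_id == handle.object_id
    assert restored.type_name == handle.type_name
```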
In `@tests/test_rpc_contract.py`:
- Around line 134-178: Both tests (test_singleton_survives_loop_recreation and
test_singleton_data_persists_across_loops) can leak event loops if an assertion
fails before loop.close(); wrap the setup/use/assertions in try/finally blocks
so loop1.close() and loop2.close() always run. For each test, keep creating
loop1, set_event_loop(loop1), create MockRegistry/registrations and then in a
finally ensure loop1.close(); do the same around loop2 usage
(set_event_loop(loop2), assertions) ensuring loop2.close() in its finally;
reference the functions test_singleton_survives_loop_recreation,
test_singleton_data_persists_across_loops, variables loop1, loop2, and
registry/MockRegistry to locate where to add try/finally.
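The try/finally shape, condensed into one illustrative function (the registry state is faked with a dict):

```python
import asyncio

def run_with_two_loops():
    """Sketch: loop.close() sits in finally so a failing assertion in the
    body can never leak the event loop."""
    loop1 = asyncio.new_event_loop()
    try:
        asyncio.set_event_loop(loop1)
        data = {"k": 1}  # stand-in for registry state created on loop1
    finally:
        loop1.close()
    loop2 = asyncio.new_event_loop()
    try:
        asyncio.set_event_loop(loop2)
        assert data["k"] == 1  # assertions run inside the guarded block
    finally:
        loop2.close()
    return True
```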
In `@tests/test_rpc_shutdown.py`:
- Around line 109-110: The two identical assertions checking that "Socket
closed" appears in debug_logs[0].message are duplicated; remove one of them so
only a single assertion remains (keep the assertion that checks "Socket closed"
in debug_logs[0].message) to eliminate the redundant check in
tests/test_rpc_shutdown.py where debug_logs is asserted.
- Around line 36-62: Remove the unused BlockingMockTransport test helper class
and its methods (class BlockingMockTransport, def __init__, def send, def recv,
def close) from the file so the test suite doesn't include dead code; simply
delete the entire class definition block where it is declared.
In `@tests/test_serialization_contract.py`:
- Around line 203-235: Replace the verbose attribute/callable checks in
TestSerializerProtocolCompliance.test_registry_matches_protocol with a single
runtime protocol check: create the registry via
SerializerRegistry.get_instance() and assert isinstance(registry,
SerializerRegistryProtocol); reference SerializerRegistry,
SerializerRegistryProtocol, and the test_registry_matches_protocol method to
locate and update the assertion accordingly.
In `@tests/test_serialization_registry.py`:
- Around line 1-31: Add an autouse pytest fixture in
tests/test_serialization_registry.py that calls
SerializerRegistry.get_instance().clear() before (and optionally after) each
test to ensure singleton state isolation; reference the SerializerRegistry class
and its get_instance() and clear() methods so the fixture obtains the singleton
and clears registered handlers to avoid test order leakage.
In `@tests/test_shared_additional.py`:
- Around line 56-86: Save the current event loop before creating a new one and
restore it after closing the new loop in the test(s) (e.g.,
test_async_rpc_send_thread_sets_exception_on_send_failure and its sibling);
specifically, capture old_loop = asyncio.get_event_loop() (or
asyncio.get_event_loop_policy().get_event_loop()), set the new loop with
asyncio.set_event_loop(loop), and after loop.close() call
asyncio.set_event_loop(old_loop) to restore the original event loop state.
In `@tests/test_torch_utils_additional.py`:
- Around line 6-16: The test currently calls
torch_utils.get_torch_ecosystem_packages.cache_clear() only at the start, which
can leave the lru_cache populated after the test and pollute other tests; ensure
the cache is cleared in teardown by invoking
torch_utils.get_torch_ecosystem_packages.cache_clear() in a finally block or
test fixture teardown so the cached value from the monkeypatched
importlib_metadata.distributions does not persist after the test finishes.
pyisolate/_internal/environment.py
Outdated
```python
torch_spec: Optional[str] = None
if not config["share_torch"]:
    import torch

    torch_version: str = str(torch.__version__)
    if torch_version.endswith("+cpu"):
        torch_version = torch_version[:-4]
    cuda_version = torch.version.cuda  # type: ignore[attr-defined]
    if cuda_version:
        common_args += [
            "--extra-index-url",
            f"https://download.pytorch.org/whl/cu{cuda_version.replace('.', '')}",
        ]
    if "dev" in torch_version or "+" in torch_version:
        common_args += ["--index-strategy", "unsafe-best-match"]
    torch_spec = f"torch=={torch_version}"
    safe_deps.insert(0, torch_spec)
```
🧩 Analysis chain
🏁 Scripts executed:

```shell
# First, let's see the file structure and find the file
find . -name "environment.py" -type f

# Read the environment.py file around lines 305-320 with more context
head -330 pyisolate/_internal/environment.py | tail -80

# Let's see the full function that contains these lines
wc -l pyisolate/_internal/environment.py

# Read from line 280 to line 330 to see the full function context
sed -n '280,330p' pyisolate/_internal/environment.py

# Check project dependencies and setup files
ls -la | grep -E "(setup|requirements|pyproject)"

# Check pyproject.toml for torch dependency
cat pyproject.toml

# Check setup.py
cat setup.py

# Look for where install_dependencies is called and what preconditions exist
rg "install_dependencies" --type py -B 3 -A 3

# Check if there's any documentation or comments about torch requirement
rg "share_torch" --type py -B 2 -A 2

# Check if there's any documentation about torch as a requirement
find . -name "README*" -o -name "*.md" | head -5

# Look for any documentation mentioning torch requirements
rg "torch.*require|require.*torch" --type md

# Check if there are any tests that use share_torch=False with dependencies that don't include torch
rg -B 5 -A 5 'share_torch.*False' tests/test_extension_lifecycle.py | grep -A 10 'dependencies'

# Look at the full install_dependencies function signature and early returns
sed -n '267,330p' pyisolate/_internal/environment.py

# Check README for any guidance on torch requirement
head -100 README.md

# Look at what happens in client.py with share_torch=False to understand the design
sed -n '1,50p' pyisolate/_internal/client.py

# Find where torch is imported in client context
rg "import torch" pyisolate/_internal/ -B 2 -A 2

# Check if there are integration tests that use share_torch=False without dependencies requiring torch
rg -B 5 'share_torch.*False' tests/integration_v2/test_lifecycle.py

# Check what happens if there are no safe_deps when share_torch=False
```
# Look at the full logic flow more carefully
sed -n '267,365p' pyisolate/_internal/environment.pyRepository: Comfy-Org/pyisolate
Length of output: 4002
🏁 Script executed:
# Check if there's any documentation about when torch is required
rg -i "torch.*install|torch.*require" README.md CLAUDE.mdRepository: Comfy-Org/pyisolate
Length of output: 45
🏁 Script executed:
# Look at the test that uses share_torch=False - does it have torch in host?
cat tests/integration_v2/test_lifecycle.py | head -50Repository: Comfy-Org/pyisolate
Length of output: 1640
🏁 Script executed:
# Check the test extension "no_torch_share" - does it have dependencies?
find . -path "*/no_torch_share*" -type fRepository: Comfy-Org/pyisolate
Length of output: 45
🏁 Script executed:
# Look at the logic more carefully - when are dependencies required with share_torch=False?
# Search for where safe_deps gets populated
rg "safe_deps" pyisolate/_internal/environment.py -B 2 -A 2Repository: Comfy-Org/pyisolate
Length of output: 865
🏁 Script executed:
# Check if there's any guard or documentation about torch requirement when share_torch=False
rg -C 3 "share_torch.*False" pyisolate/_internal/environment.pyRepository: Comfy-Org/pyisolate
Length of output: 45
🏁 Script executed:
# Check if there's a "no_torch_share" test extension directory and what it contains
find . -type d -name "*no_torch*" -o -type f -name "*no_torch*"Repository: Comfy-Org/pyisolate
Length of output: 45
🏁 Script executed:
# Let's check what test extensions exist
ls -la tests/extensions/ 2>/dev/null || find tests -maxdepth 2 -type d -name "extensions" -o -name "test_*"Repository: Comfy-Org/pyisolate
Length of output: 1022
🏁 Script executed:
# Search for any usage pattern that shows share_torch=False with dependencies
rg "share_torch.*False" -A 5 tests/ | grep -A 5 "dependencies"Repository: Comfy-Org/pyisolate
Length of output: 2505
🏁 Script executed:
# One more check: does the code ever handle ImportError for torch in environment.py?
rg "ImportError|import.*torch|try.*except" pyisolate/_internal/environment.py -B 2 -A 2Repository: Comfy-Org/pyisolate
Length of output: 432
🏁 Script executed:
# Check if there are any tests that specifically test share_torch=False WITH dependencies
rg "share_torch.*False" tests/ -A 10 | grep -B 5 -A 5 "dependencies.*\[" | head -40Repository: Comfy-Org/pyisolate
Length of output: 2209
🏁 Script executed:
# Look at test_install_dependencies tests to see what scenarios are tested
rg "test_install_dependencies" tests/test_host_internal_ext.py -A 15Repository: Comfy-Org/pyisolate
Length of output: 1303
🏁 Script executed:
# Verify the exact line numbers and early return
sed -n '290,310p' pyisolate/_internal/environment.py | cat -nRepository: Comfy-Org/pyisolate
Length of output: 977
🏁 Script executed:
# Check if there's any documentation about torch being required when share_torch=False
cat README.md | grep -i "torch" -C 3Repository: Comfy-Org/pyisolate
Length of output: 2168
🏁 Script executed:
# Final check: look for any comments in the code about this requirement
sed -n '267,330p' pyisolate/_internal/environment.py | grep -i "require\|assume\|need" -C 2Repository: Comfy-Org/pyisolate
Length of output: 758
Document torch requirement when share_torch=False with dependencies, or add graceful error handling.
The code assumes torch is installed in the host when share_torch=False and dependencies exist (line 307). This is needed to detect the torch version and CUDA configuration for proper wheel selection. The requirement is not documented, and if torch is missing, it will raise an unhandled ImportError.
Either document this as a requirement for users deploying with share_torch=False, or wrap the import with try-except to provide a clearer error message about the missing dependency.
🤖 Prompt for AI Agents
In `@pyisolate/_internal/environment.py` around lines 305 - 320, The code assumes
torch is importable when config["share_torch"] is False, which will raise
ImportError if torch is missing; wrap the import and version inspection in a
try/except around the block that sets torch_spec, common_args, and updates
safe_deps (the logic touching torch_spec, torch_version, cuda_version,
common_args, and safe_deps) and on ImportError raise a clear, user-facing error
explaining that torch must be installed locally when share_torch is False (or
alternatively document this requirement); ensure the exception message mentions
config["share_torch"] and the affected variables (torch_spec and safe_deps) so
callers know why the dependency is required.
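A minimal sketch of the guard that prompt describes. The helper name `resolve_torch_spec` is hypothetical — it stands in for the surrounding logic in `environment.py` — but the variable names (`torch_spec`, `safe_deps`, `common_args`) mirror the snippet under review:

```python
from typing import Optional


def resolve_torch_spec(config: dict, safe_deps: list, common_args: list) -> Optional[str]:
    """Guarded torch probe (hypothetical helper, illustrative only)."""
    torch_spec: Optional[str] = None
    if not config["share_torch"]:
        try:
            import torch  # only required when torch is not shared from the host
        except ImportError as exc:
            raise RuntimeError(
                "config['share_torch'] is False, so pyisolate must pin a matching "
                "torch version (torch_spec) into the isolated venv (safe_deps). "
                "Install torch in the host environment or enable share_torch."
            ) from exc
        torch_version = str(torch.__version__)
        if torch_version.endswith("+cpu"):
            torch_version = torch_version[:-4]
        if "dev" in torch_version or "+" in torch_version:
            common_args += ["--index-strategy", "unsafe-best-match"]
        torch_spec = f"torch=={torch_version}"
        safe_deps.insert(0, torch_spec)
    return torch_spec
```

With `share_torch=True` the probe is skipped entirely, so the import guard only fires on the code path that actually needs torch.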
```python
class RemoteObjectHandle:
    """Handle to an object in a remote process.

    This is a generic RPC concept - it represents a reference to an object
    that lives in another process. The actual object can be fetched via
    extension.get_remote_object(handle.object_id) on the receiving end.

    Attributes:
        object_id: Unique identifier for the remote object.
        type_name: The type name of the remote object (for debugging/logging).
    """

    # Preserve module identity for pickling compatibility
    __module__ = 'pyisolate._internal.remote_handle'

    def __init__(self, object_id: str, type_name: str) -> None:
        self.object_id = object_id
        self.type_name = type_name

    def __repr__(self) -> str:
        return f"<RemoteObject id={self.object_id} type={self.type_name}>"
```
🧹 Nitpick | 🔵 Trivial
LGTM! Consider using __slots__ for memory efficiency.
The implementation is clean and correct. The hardcoded __module__ is appropriate for pickle compatibility. For a handle class that may be instantiated frequently, adding __slots__ can reduce memory overhead.
Optional: Add __slots__ for memory efficiency
```diff
 class RemoteObjectHandle:
     """Handle to an object in a remote process.
     ...
     """

     # Preserve module identity for pickling compatibility
     __module__ = 'pyisolate._internal.remote_handle'
+    __slots__ = ('object_id', 'type_name')

     def __init__(self, object_id: str, type_name: str) -> None:
```

🤖 Prompt for AI Agents
In `@pyisolate/_internal/remote_handle.py` around lines 10 - 30, Add __slots__ to
RemoteObjectHandle to reduce per-instance memory: modify the class
RemoteObjectHandle to declare __slots__ = ("object_id", "type_name") while
keeping the existing __module__ attribute for pickle compatibility and leaving
__init__ and __repr__ unchanged; ensure you reference the class name
RemoteObjectHandle and its attributes object_id and type_name when making the
change.
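The saving the nitpick is after comes from dropping the per-instance `__dict__`. A toy comparison (`PlainHandle`/`SlottedHandle` are illustrative names, not pyisolate classes):

```python
class PlainHandle:
    def __init__(self, object_id: str, type_name: str) -> None:
        self.object_id = object_id
        self.type_name = type_name


class SlottedHandle:
    # Fixed attribute set declared up front; no per-instance __dict__.
    __slots__ = ("object_id", "type_name")

    def __init__(self, object_id: str, type_name: str) -> None:
        self.object_id = object_id
        self.type_name = type_name


plain = PlainHandle("obj-1", "Tensor")
slotted = SlottedHandle("obj-1", "Tensor")

# The plain instance carries a dict; the slotted one does not,
# which is where the memory reduction comes from.
assert hasattr(plain, "__dict__")
assert not hasattr(slotted, "__dict__")
```

One trade-off worth knowing: slotted instances reject unexpected attribute assignment, and pickling continues to work because `__slots__` instances pickle their named attributes directly.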
```python
def _test_bwrap(bwrap_path: str) -> tuple[bool, str]:
    """Test if bwrap actually works on this system.

    Returns (success, error_message).
    """
    try:
        # S603: bwrap_path comes from shutil.which(), not user input
        result = subprocess.run(  # noqa: S603
            [
                bwrap_path,
                "--unshare-user-try",
                "--dev", "/dev",
                "--proc", "/proc",
                "--ro-bind", "/usr", "/usr",
                "--ro-bind", "/bin", "/bin",
                "--ro-bind", "/lib", "/lib",
                "--ro-bind", "/lib64", "/lib64",
                "/usr/bin/true",
            ],
            capture_output=True,
            timeout=10,
        )
        if result.returncode == 0:
            return True, ""
        return False, result.stderr.decode("utf-8", errors="replace")
    except subprocess.TimeoutExpired:
        return False, "bwrap test timed out"
    except Exception as exc:
        return False, str(exc)
```
🧹 Nitpick | 🔵 Trivial
Hard-coded system paths in bwrap test may fail on some Linux distributions.
The _test_bwrap function assumes /lib, /lib64, /bin, and /usr/bin/true exist. Some minimal or non-standard Linux systems may have different layouts (e.g., merged /usr, no /lib64, or /bin/true instead of /usr/bin/true).
Consider making the test more resilient:
♻️ Suggested improvement for broader compatibility
```diff
 def _test_bwrap(bwrap_path: str) -> tuple[bool, str]:
     """Test if bwrap actually works on this system.

     Returns (success, error_message).
     """
     try:
+        # Build bind paths dynamically based on what exists
+        bind_args = []
+        for path in ["/usr", "/bin", "/lib", "/lib64"]:
+            if os.path.exists(path):
+                bind_args.extend(["--ro-bind", path, path])
+
+        # Find a true binary that exists
+        true_binary = shutil.which("true") or "/usr/bin/true"
+
         # S603: bwrap_path comes from shutil.which(), not user input
         result = subprocess.run(  # noqa: S603
             [
                 bwrap_path,
                 "--unshare-user-try",
                 "--dev", "/dev",
                 "--proc", "/proc",
-                "--ro-bind", "/usr", "/usr",
-                "--ro-bind", "/bin", "/bin",
-                "--ro-bind", "/lib", "/lib",
-                "--ro-bind", "/lib64", "/lib64",
-                "/usr/bin/true",
+                *bind_args,
+                true_binary,
             ],
```

🤖 Prompt for AI Agents
In `@pyisolate/_internal/sandbox_detect.py` around lines 143 - 171, The
_test_bwrap function currently hardcodes paths (/lib, /lib64, /bin, /usr,
/usr/bin/true) which can be missing on some systems; modify _test_bwrap to first
probe for available candidates (e.g., prefer /usr if present, fall back to /,
check both /usr/bin/true and /bin/true via shutil.which or os.path.exists, and
only include --ro-bind entries for paths that exist) and then build the
subprocess.run argument list from those validated paths before calling
bwrap_path; update error handling to report which bind/path was missing if the
test fails.
```python
cuda_env_vars = [
    "CUDA_HOME",
    "CUDA_PATH",
    "CUDA_VISIBLE_DEVICES",
    "NVIDIA_VISIBLE_DEVICES",
    "LD_LIBRARY_PATH",
    "PYTORCH_CUDA_ALLOC_CONF",
    "TORCH_CUDA_ARCH_LIST",
    "PYISOLATE_ENABLE_CUDA_IPC",
    "PYISOLATE_ENABLE_CUDA_IPC",
]
```
Duplicate entry in CUDA environment variable list.
PYISOLATE_ENABLE_CUDA_IPC appears twice in the cuda_env_vars list (lines 267-268). Remove the duplicate.
🔧 Suggested fix
```diff
 cuda_env_vars = [
     "CUDA_HOME",
     "CUDA_PATH",
     "CUDA_VISIBLE_DEVICES",
     "NVIDIA_VISIBLE_DEVICES",
     "LD_LIBRARY_PATH",
     "PYTORCH_CUDA_ALLOC_CONF",
     "TORCH_CUDA_ARCH_LIST",
     "PYISOLATE_ENABLE_CUDA_IPC",
-    "PYISOLATE_ENABLE_CUDA_IPC",
 ]
```
🤖 Prompt for AI Agents
In `@pyisolate/_internal/sandbox.py` around lines 259 - 269, The list
cuda_env_vars contains a duplicate entry "PYISOLATE_ENABLE_CUDA_IPC"; remove the
redundant occurrence so each environment variable appears only once. Update the
cuda_env_vars definition in pyisolate/_internal/sandbox.py to delete the
duplicated "PYISOLATE_ENABLE_CUDA_IPC" entry, leaving a single instance to avoid
redundant checks or processing.
```python
class BlockingMockTransport(RPCTransport):
    """Transport that allows controlling recv blocking."""

    def __init__(self):
        self.recv_queue = asyncio.Queue()
        self.closed = False

    def send(self, obj):
        pass

    def recv(self):
        # This will be called in a thread
        if self.closed:
            raise ConnectionError("Closed")
        # Block until item available
        # Since we can't easily block in a non-async way without
        # actual threading primitives, we'll just simulate a quick
        # loop check or similar.
        # But actually, the RPC implementation calls transport.recv()
        # which is synchronous.
        import time
        while not self.closed:
            time.sleep(0.01)
        raise ConnectionError("Closed during block")

    def close(self):
        self.closed = True
```
🧹 Nitpick | 🔵 Trivial
Remove unused BlockingMockTransport class.
This helper is not referenced anywhere in the codebase and serves no purpose; removing it will keep the test suite lean.
🤖 Prompt for AI Agents
In `@tests/test_rpc_shutdown.py` around lines 36 - 62, Remove the unused
BlockingMockTransport test helper class and its methods (class
BlockingMockTransport, def __init__, def send, def recv, def close) from the
file so the test suite doesn't include dead code; simply delete the entire class
definition block where it is declared.
```python
assert "Socket closed" in debug_logs[0].message
assert "Socket closed" in debug_logs[0].message
```
🧹 Nitpick | 🔵 Trivial
Remove duplicate assertion.
Line 109 and Line 110 assert the same condition; keep a single check to reduce noise.
🧹 Proposed cleanup
```diff
 assert "Socket closed" in debug_logs[0].message
-assert "Socket closed" in debug_logs[0].message
```
🤖 Prompt for AI Agents
In `@tests/test_rpc_shutdown.py` around lines 109 - 110, The two identical
assertions checking that "Socket closed" appears in debug_logs[0].message are
duplicated; remove one of them so only a single assertion remains (keep the
assertion that checks "Socket closed" in debug_logs[0].message) to eliminate the
redundant check in tests/test_rpc_shutdown.py where debug_logs is asserted.
```python
class TestSerializerProtocolCompliance:
    """Tests for SerializerRegistryProtocol compliance."""

    def test_registry_matches_protocol(self):
        """SerializerRegistry implements SerializerRegistryProtocol."""

        registry = SerializerRegistry.get_instance()

        # Check protocol methods exist
        assert hasattr(registry, "register")
        assert hasattr(registry, "get_serializer")
        assert hasattr(registry, "get_deserializer")
        assert hasattr(registry, "has_handler")

        # Check methods are callable
        assert callable(registry.register)
        assert callable(registry.get_serializer)
        assert callable(registry.get_deserializer)
        assert callable(registry.has_handler)

    def test_register_signature(self):
        """register() accepts type_name, serializer, and optional deserializer."""
        registry = SerializerRegistry.get_instance()
        registry.clear()

        # With deserializer
        registry.register("Type1", lambda x: x, lambda x: x)

        # Without deserializer
        registry.register("Type2", lambda x: x)

        assert registry.has_handler("Type1")
        assert registry.has_handler("Type2")
```
🧹 Nitpick | 🔵 Trivial
Use isinstance() check with SerializerRegistryProtocol for cleaner protocol compliance verification.
Since SerializerRegistryProtocol is decorated with @runtime_checkable, using isinstance(registry, SerializerRegistryProtocol) replaces the multiple hasattr/callable checks with a single, more idiomatic assertion.
Proposed enhancement
```diff
+from pyisolate.interfaces import SerializerRegistryProtocol
+
 def test_registry_matches_protocol(self):
     """SerializerRegistry implements SerializerRegistryProtocol."""
-
     registry = SerializerRegistry.get_instance()

-    # Check protocol methods exist
-    assert hasattr(registry, "register")
-    assert hasattr(registry, "get_serializer")
-    assert hasattr(registry, "get_deserializer")
-    assert hasattr(registry, "has_handler")
-
-    # Check methods are callable
-    assert callable(registry.register)
-    assert callable(registry.get_serializer)
-    assert callable(registry.get_deserializer)
-    assert callable(registry.has_handler)
+    assert isinstance(registry, SerializerRegistryProtocol)
```
🤖 Prompt for AI Agents
In `@tests/test_serialization_contract.py` around lines 203 - 235, Replace the
verbose attribute/callable checks in
TestSerializerProtocolCompliance.test_registry_matches_protocol with a single
runtime protocol check: create the registry via
SerializerRegistry.get_instance() and assert isinstance(registry,
SerializerRegistryProtocol); reference SerializerRegistry,
SerializerRegistryProtocol, and the test_registry_matches_protocol method to
locate and update the assertion accordingly.
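The mechanics behind this suggestion: `@runtime_checkable` lets `isinstance()` perform a structural check that the protocol's methods exist. A self-contained sketch (`RegistryProtocol`/`Registry` are illustrative stand-ins, not the pyisolate types):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class RegistryProtocol(Protocol):
    def register(self, type_name, serializer, deserializer=None) -> None: ...
    def has_handler(self, type_name) -> bool: ...


class Registry:
    """Conforms structurally; never subclasses RegistryProtocol."""

    def __init__(self) -> None:
        self._handlers = {}

    def register(self, type_name, serializer, deserializer=None) -> None:
        self._handlers[type_name] = (serializer, deserializer)

    def has_handler(self, type_name) -> bool:
        return type_name in self._handlers


# isinstance() verifies method presence, replacing a pile of
# hasattr/callable assertions with one structural check.
assert isinstance(Registry(), RegistryProtocol)
assert not isinstance(object(), RegistryProtocol)
```

Caveat: a runtime-checkable `isinstance()` only confirms the named methods exist; it does not validate their signatures, so the test stays a presence check — just a terser one.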
```python
from pyisolate._internal.serialization_registry import SerializerRegistry


def test_singleton_identity():
    r1 = SerializerRegistry.get_instance()
    r2 = SerializerRegistry.get_instance()
    assert r1 is r2


def test_register_and_lookup():
    registry = SerializerRegistry.get_instance()
    registry.clear()

    registry.register("Foo", lambda x: {"v": x}, lambda x: x["v"])

    assert registry.has_handler("Foo")
    serializer = registry.get_serializer("Foo")
    deserializer = registry.get_deserializer("Foo")

    payload = serializer(123) if serializer else None
    assert payload == {"v": 123}
    assert deserializer(payload) == 123 if deserializer else False


def test_clear_resets_handlers():
    registry = SerializerRegistry.get_instance()
    registry.register("Bar", lambda x: x)
    assert registry.has_handler("Bar")

    registry.clear()
    assert not registry.has_handler("Bar")
```
Ensure registry state isolation across tests.
Because SerializerRegistry is a singleton, handlers can leak into other tests. Add an autouse fixture to clear before/after each test to avoid order dependence.
🧹 Proposed fix (autouse cleanup)
```diff
+import pytest
 from pyisolate._internal.serialization_registry import SerializerRegistry

+@pytest.fixture(autouse=True)
+def _clear_registry():
+    registry = SerializerRegistry.get_instance()
+    registry.clear()
+    yield
+    registry.clear()
+
 def test_singleton_identity():
     r1 = SerializerRegistry.get_instance()
     r2 = SerializerRegistry.get_instance()
     assert r1 is r2

 def test_register_and_lookup():
     registry = SerializerRegistry.get_instance()
-    registry.clear()
-
     registry.register("Foo", lambda x: {"v": x}, lambda x: x["v"])
```
🤖 Prompt for AI Agents
In `@tests/test_serialization_registry.py` around lines 1 - 31, Add an autouse
pytest fixture in tests/test_serialization_registry.py that calls
SerializerRegistry.get_instance().clear() before (and optionally after) each
test to ensure singleton state isolation; reference the SerializerRegistry class
and its get_instance() and clear() methods so the fixture obtains the singleton
and clears registered handlers to avoid test order leakage.
```python
def test_get_torch_ecosystem_packages_includes_distributions(monkeypatch):
    def fake_distributions():
        meta = SimpleNamespace(metadata={"Name": "nvidia-cublas"})
        meta2 = SimpleNamespace(metadata={"Name": "torch-hub"})
        return [meta, meta2]

    torch_utils.get_torch_ecosystem_packages.cache_clear()
    monkeypatch.setattr(torch_utils.importlib_metadata, "distributions", fake_distributions)
    pkgs = torch_utils.get_torch_ecosystem_packages()
    assert "nvidia-cublas" in pkgs
    assert "torch-hub" in pkgs
```
🧹 Nitpick | 🔵 Trivial
Consider clearing cache in teardown to prevent test pollution.
The test correctly clears cache before the test, but if get_torch_ecosystem_packages is decorated with @lru_cache, the mocked result may remain cached after the test completes (monkeypatch restores the original function but the cached value persists). Consider adding a finally block or fixture-level teardown to clear the cache.
Suggested improvement
```diff
 def test_get_torch_ecosystem_packages_includes_distributions(monkeypatch):
     def fake_distributions():
         meta = SimpleNamespace(metadata={"Name": "nvidia-cublas"})
         meta2 = SimpleNamespace(metadata={"Name": "torch-hub"})
         return [meta, meta2]

     torch_utils.get_torch_ecosystem_packages.cache_clear()
     monkeypatch.setattr(torch_utils.importlib_metadata, "distributions", fake_distributions)
-    pkgs = torch_utils.get_torch_ecosystem_packages()
-    assert "nvidia-cublas" in pkgs
-    assert "torch-hub" in pkgs
+    try:
+        pkgs = torch_utils.get_torch_ecosystem_packages()
+        assert "nvidia-cublas" in pkgs
+        assert "torch-hub" in pkgs
+    finally:
+        torch_utils.get_torch_ecosystem_packages.cache_clear()
```

🤖 Prompt for AI Agents
In `@tests/test_torch_utils_additional.py` around lines 6 - 16, The test currently
calls torch_utils.get_torch_ecosystem_packages.cache_clear() only at the start,
which can leave the lru_cache populated after the test and pollute other tests;
ensure the cache is cleared in teardown by invoking
torch_utils.get_torch_ecosystem_packages.cache_clear() in a finally block or
test fixture teardown so the cached value from the monkeypatched
importlib_metadata.distributions does not persist after the test finishes.
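The pollution mode described here is easy to reproduce with a plain `functools.lru_cache`. In this sketch, `get_packages` and `_backend` are illustrative stand-ins for `get_torch_ecosystem_packages` and the patched `importlib_metadata.distributions`:

```python
from functools import lru_cache

# _backend stands in for the patchable importlib_metadata.distributions.
_backend = {"distributions": lambda: ["real-pkg"]}


@lru_cache(maxsize=None)
def get_packages():
    return tuple(_backend["distributions"]())


# Simulate monkeypatch.setattr: swap the backend, then populate the cache.
_backend["distributions"] = lambda: ["mocked-pkg"]
assert get_packages() == ("mocked-pkg",)

# monkeypatch undoes the patch at teardown...
_backend["distributions"] = lambda: ["real-pkg"]
# ...but lru_cache still serves the value computed under the patch:
assert get_packages() == ("mocked-pkg",)

# Only an explicit cache_clear() (the suggested finally block) fixes it.
get_packages.cache_clear()
assert get_packages() == ("real-pkg",)
```

This is exactly why `monkeypatch`'s automatic undo is not enough: it restores the attribute, not the memoized result.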
Drop Python 3.9, add bubblewrap to workflows, apply ruff formatting. Was running `ruff` and not the additional `ruff-format`. Installed pre-commit hooks to prevent any future non-compliant commits.
Actionable comments posted: 36
🤖 Fix all issues with AI agents
In @.github/workflows/windows.yml:
- Line 17: The README files still state Python "3.9+" while the CI matrix in
.github/workflows/windows.yml now tests Python versions ['3.10','3.11','3.12'];
update README.md and README_COMFYUI.md to change any references of "Python 3.9+"
(or similar phrasing) to "Python 3.10+" (or the correct minimum) so docs match
the workflow, or alternatively restore '3.9' into the python-version list in
.github/workflows/windows.yml if 3.9 should remain supported.
In `@benchmarks/benchmark_harness.py`:
- Around line 69-98: The load_extensions method is a no-op; replace the
placeholder with real loading: for each cfg in extension_configs build an
ExtensionConfig (use name, module_path = str(self.test_root / "extensions" /
name), and deps = cfg.get("deps", [])) and use self.manager (ExtensionManager
created above) to load each extension (e.g., call the manager's
load/registration method for an ExtensionConfig) and append the returned
extension instances to loaded_extensions; if the ExtensionManager API doesn't
support loading by ExtensionConfig, raise NotImplementedError with a clear
message referencing load_extensions and ExtensionManager so callers know to
update benchmark.py accordingly.
In `@benchmarks/benchmark.py`:
- Around line 84-88: The DatabaseSingleton child stub's __init__ is missing a
call to its parent initializer which can break ProxiedSingleton setup; update
DatabaseSingleton.__init__ to call super().__init__() (e.g., begin the method
with super().__init__()), leaving the existing self._db initialization and the
async stubs set_value/get_value unchanged so singleton behavior is consistent
with ProxiedSingleton.
- Around line 39-48: DatabaseSingleton's __init__ doesn't call
ProxiedSingleton.__init__, so the base-class singleton setup isn't run; update
DatabaseSingleton.__init__ to call super().__init__() (typically as the first
statement) before initializing self._db to ensure ProxiedSingleton's
initialization logic executes.
- Around line 300-301: The closure `func` captures the loop variable `data_val`
by reference so every created closure ends up using the final loop value; fix
this by binding the current `data_val` into the closure signature (or use a
small factory) so each `func` captures its own value before the loop
advances—e.g. make `func` accept a default parameter set to the current
`data_val` and call `ext.do_stuff` with that parameter (target symbols: func,
data_val, ext.do_stuff).
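The late-binding pitfall described above can be shown in isolation. This is a minimal sketch, not the benchmark.py code; `func` and `data_val` here are stand-ins for the symbols named in the comment:

```python
import asyncio

def make_funcs(values):
    """Build one coroutine function per value, binding each value at definition time."""
    funcs = []
    for data_val in values:
        # Default-argument binding: val is evaluated NOW, while the loop runs.
        # Without it, every func would see the FINAL value of data_val.
        async def func(val=data_val):
            return val
        funcs.append(func)
    return funcs

async def run_all(values):
    return [await f() for f in make_funcs(values)]

results = asyncio.run(run_all([1, 2, 3]))
print(results)  # each closure kept its own value
```

The same effect can be achieved with a small factory function that takes `data_val` as a parameter; the default-argument form is just the most compact fix.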
- Around line 341-349: The main() function currently calls
asyncio.run(run_benchmarks(...)) but discards the integer return code; capture
the return value of run_benchmarks via result = asyncio.run(run_benchmarks(...))
and have main() return that integer so callers can act on it, and update the
module entry point to call sys.exit(main()) (use import sys) instead of calling
main() directly so the process exit code reflects run_benchmarks' result;
reference the asyncio.run and run_benchmarks symbols and the module
entrypoint/if __name__ == "__main__" block when making the change.
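The exit-code plumbing can be sketched as follows — a simplified stand-in, since the real `run_benchmarks` takes arguments and does actual work:

```python
import asyncio

async def run_benchmarks() -> int:
    # Hypothetical stand-in: 0 on success, non-zero on failure.
    return 0

def main() -> int:
    # Capture the result of asyncio.run instead of discarding it.
    result = asyncio.run(run_benchmarks())
    return result

exit_code = main()
# In the real module this becomes the entry point:
#     if __name__ == "__main__":
#         import sys
#         sys.exit(main())
```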
In `@pyisolate/_internal/client.py`:
- Around line 52-135: The module name derivation in async_entrypoint uses
os.path.basename(module_path) which yields "" for trailing-slash paths;
normalize module_path with os.path.normpath before calling basename (use
module_path = os.path.normpath(module_path) or a local normalized variable) and
then guard against an empty sys_module_name (raise a ValueError with context or
fallback to a safe name) before creating module_spec and importing; update
references around sys_module_name and module_spec creation accordingly.
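The trailing-slash failure mode is easy to reproduce. A minimal sketch of the normalize-then-guard pattern (POSIX paths assumed; the helper name is illustrative, not the pyisolate API):

```python
import os

def derive_module_name(module_path: str) -> str:
    """Derive a module name from a filesystem path, tolerating trailing slashes."""
    # os.path.basename("/pkgs/ext/") returns "", which would poison the spec name.
    normalized = os.path.normpath(module_path)
    name = os.path.basename(normalized)
    if not name:
        raise ValueError(f"cannot derive module name from path: {module_path!r}")
    return name

print(os.path.basename("/pkgs/ext/"))    # "" -- the bug
print(derive_module_name("/pkgs/ext/"))  # "ext"
```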
In `@pyisolate/_internal/environment.py`:
- Around line 83-97: The environment contextmanager currently converts None into
the string "None" when setting os.environ; update environment so that for each
key in env_vars, if the provided value is None you remove the key from
os.environ (os.environ.pop(key, None)) instead of os.environ[key] = str(value),
otherwise set os.environ[key] = str(value); keep saving original values in
original as-is so the finally block can restore or remove using the existing
logic; reference symbols: environment, env_vars, original, os.environ.
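The corrected semantics can be sketched as a standalone context manager — an illustrative reimplementation, not the pyisolate source:

```python
import os
from contextlib import contextmanager

@contextmanager
def environment(**env_vars):
    """Temporarily set env vars; a value of None removes the variable."""
    original = {key: os.environ.get(key) for key in env_vars}
    try:
        for key, value in env_vars.items():
            if value is None:
                os.environ.pop(key, None)  # remove, instead of setting the string "None"
            else:
                os.environ[key] = str(value)
        yield
    finally:
        # Restore: keys that were absent before are removed again.
        for key, value in original.items():
            if value is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = value
```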
In `@pyisolate/_internal/host.py`:
- Around line 464-466: join() currently calls self.proc.join() but self.proc is
a subprocess.Popen created in _launch_with_uds(), so replace the use of the
multiprocessing API with the subprocess API: call self.proc.wait() (and
optionally capture/handle the return code) instead of self.proc.join() to block
until the child process exits; update the join method to use wait() on self.proc
and ensure any callers expecting a return value are adjusted if needed.
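The API mismatch is easy to see in miniature: `multiprocessing.Process` exposes `join()`, while `subprocess.Popen` blocks via `wait()`:

```python
import subprocess
import sys

# A Popen child: wait() blocks until exit and returns the exit code.
proc = subprocess.Popen([sys.executable, "-c", "pass"])
returncode = proc.wait()
# Popen has no join(); calling proc.join() would raise AttributeError.
```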
- Around line 147-208: In _initialize_process, the unconditional import torch
and call to torch.multiprocessing.set_sharing_strategy will crash when torch is
not installed; only import and call these when the config requests share_torch.
Change the code around the import and set_sharing_strategy (references: the
import torch line and torch.multiprocessing.set_sharing_strategy("file_system"))
to: if self.config.get("share_torch", False): attempt to import torch and call
set_sharing_strategy, and if import fails raise a clear RuntimeError; otherwise
skip importing torch and skip calling set_sharing_strategy.
- Around line 289-463: The code uses tempfile.mktemp() to generate uds_path in
_launch_with_uds which is unsafe; replace it with a race-free approach by
creating a private socket directory (use ensure_ipc_socket_dir() or create one
with tempfile.mkdtemp(prefix="ext_", suffix="_sock")), set the directory
permissions to 0o700, then construct uds_path inside that directory (e.g., join
dir + unique filename) before binding listener_sock; ensure any temporary files
are removed where appropriate and that listener_sock.bind(uds_path) uses the
filesystem path owned and accessible only by the process to avoid TOCTOU races
(refer to symbols: _launch_with_uds, run_dir, uds_path, listener_sock,
ensure_ipc_socket_dir).
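A minimal sketch of the race-free pattern (POSIX-only; variable names mirror the symbols above but the code is illustrative):

```python
import os
import socket
import tempfile
import uuid

# Create a private directory first; mkdtemp is atomic and 0o700 by default,
# but tighten explicitly in case the platform or umask differs.
run_dir = tempfile.mkdtemp(prefix="ext_", suffix="_sock")
os.chmod(run_dir, 0o700)

# The socket path lives inside a directory only this process owns and can
# traverse, so there is no mktemp()-style TOCTOU window.
uds_path = os.path.join(run_dir, f"{uuid.uuid4().hex[:8]}.sock")

listener_sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
listener_sock.bind(uds_path)
listener_sock.listen(1)
```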
In `@pyisolate/_internal/model_serialization.py`:
- Around line 23-62: The module-level _cuda_ipc_enabled should not be read once
at import time; update serialize_for_isolation to compute the CUDA IPC flag at
call time (e.g., evaluate sys.platform == "linux" and
os.environ.get("PYISOLATE_ENABLE_CUDA_IPC") == "1" inside the function) instead
of using the module-level _cuda_ipc_enabled constant, and then use that local
flag when deciding to return the CUDA tensor vs .cpu(); reference the
serialize_for_isolation function and the _cuda_ipc_enabled symbol so you replace
the import-time behavior with a per-call evaluation.
In `@pyisolate/_internal/rpc_protocol.py`:
- Around line 93-110: register_class currently calls cls.__init__ with no args
which raises a raw TypeError if the class requires ctor parameters; before
instantiating the local instance in register_class, inspect the __init__
signature (e.g., via inspect.signature(cls.__init__)) to verify it accepts only
"self" or has defaults for all other parameters, and if not raise a clear
ValueError mentioning the class name (cls.__name__) and that register_class
requires a no-arg constructor (or suggest the class provide defaults/alternate
registration); alternatively wrap the object.__new__ + cls.__init__ call in a
try/except TypeError and re-raise a new, descriptive error that references
register_class, _local_implementations and _local_methods so the caller can act
on ProxiedSingletons that need parameters.
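The signature check can be sketched like this (`validate_no_arg_constructor` is a hypothetical helper, not a pyisolate API):

```python
import inspect

def validate_no_arg_constructor(cls: type) -> None:
    """Raise a descriptive error if cls.__init__ requires arguments beyond self."""
    sig = inspect.signature(cls.__init__)
    params = list(sig.parameters.values())[1:]  # skip self
    for p in params:
        if p.kind in (p.VAR_POSITIONAL, p.VAR_KEYWORD):
            continue  # *args/**kwargs are optional by definition
        if p.default is inspect.Parameter.empty:
            raise ValueError(
                f"register_class requires a no-arg constructor, but "
                f"{cls.__name__}.__init__ has required parameter {p.name!r}"
            )

class Ok:
    def __init__(self, retries=3):  # all params defaulted: acceptable
        self.retries = retries

class NeedsArgs:
    def __init__(self, conn):  # required param: should be rejected
        self.conn = conn
```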
- Around line 641-648: Replace the assert in inject_instance with an explicit
exception: in SingletonMetaclass.inject_instance(cls: type[T], instance: Any)
remove the assert and instead check membership of cls in
SingletonMetaclass._instances and raise a RuntimeError with the same message
when a singleton already exists; update any tests (e.g.,
test_inject_instance_after_instantiation_raises and any others asserting
AssertionError) to expect RuntimeError instead of AssertionError so tests
reflect the new behavior.
- Around line 134-157: The constructor's unprotected call to
asyncio.get_event_loop() can raise RuntimeError in threads; wrap the
get_event_loop() usage in AsyncRPC.__init__ in a try/except catching
RuntimeError and fall back to creating or assigning a safe loop (e.g.,
asyncio.new_event_loop() or None) while holding self._loop_lock when updating
self.default_loop; apply the same pattern in update_event_loop() and any
callback-handling sites that call asyncio.get_event_loop() so they catch
RuntimeError and update self.default_loop under self._loop_lock instead of
assuming a loop exists.
- Around line 265-288: stop() should be made idempotent by checking
blocking_future is not None and not done before calling set_result, and
shutdown() should both safely signal the event loop (as it already does) and
enqueue the None sentinel to the outbox so the _send_thread can exit;
specifically, update stop() to guard set_result with a done() check on
blocking_future, and update shutdown() to (after getting the valid loop) call
loop.call_soon_threadsafe(self.blocking_future.set_result, None) only if
blocking_future exists and not done, and also call
loop.call_soon_threadsafe(self.outbox.put_nowait, None) (guarded by existence of
self.outbox) inside the same try/except so the send thread receives the sentinel
and terminates, ensuring run_until_stopped() and _send_thread both stop cleanly
and no InvalidStateError is raised when stop()/shutdown() are called multiple
times.
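The idempotent-signal pattern can be sketched independently of AsyncRPC (names are illustrative):

```python
import asyncio

def signal_stop(loop, blocking_future, outbox):
    """Idempotently resolve the stop future and wake the send thread."""
    # Guard with done() so repeated calls never raise InvalidStateError.
    if blocking_future is not None and not blocking_future.done():
        loop.call_soon_threadsafe(blocking_future.set_result, None)
    if outbox is not None:
        # None is the sentinel that tells the send thread to exit.
        loop.call_soon_threadsafe(outbox.put_nowait, None)
```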
In `@pyisolate/_internal/rpc_serialization.py`:
- Around line 203-205: The current debug_all_messages initialization uses
bool(os.environ.get("PYISOLATE_DEBUG_RPC")) which treats any non-empty string
(e.g., "0" or "false") as true; update the code that sets debug_all_messages
(and thus _debug_rpc) to parse PYISOLATE_DEBUG_RPC explicitly by reading
os.environ.get("PYISOLATE_DEBUG_RPC") into a variable, normalizing via
.strip().lower(), and checking membership against an explicit set of truthy
values like {"1","true","yes","on"} (and treat other values as false) so that
only intended values enable RPC debug logging.
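A strict parser for such a flag looks like this (a sketch of the suggested approach, not the rpc_serialization.py code):

```python
import os

_TRUTHY = {"1", "true", "yes", "on"}

def env_flag(name: str) -> bool:
    """Parse a boolean env var explicitly instead of bool(non-empty-string)."""
    raw = os.environ.get(name)
    if raw is None:
        return False
    # "0", "false", "off", etc. all normalize to False.
    return raw.strip().lower() in _TRUTHY
```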
In `@pyisolate/_internal/rpc_transports.py`:
- Around line 79-98: ConnectionTransport.recv is currently not locked, violating
the RPCTransport "thread-safe send/recv" contract; add a dedicated receive lock
(e.g. self._recv_lock = threading.Lock() in ConnectionTransport.__init__) and
wrap the body of ConnectionTransport.recv() with "with self._recv_lock:" to
protect concurrent reads (keep send() using self._lock as-is to avoid changing
send semantics).
- Around line 296-425: The __pyisolate_object__ branch currently imports
arbitrary modules and instantiates classes from untrusted JSON, enabling RCE;
change it to only use registered deserializers from SerializerRegistry (like the
__type__ path) and never perform import/getattr/cls() or cls.__new__ on
untrusted data. Specifically, in the __pyisolate_object__ handler (the block
checking dct.get("__pyisolate_object__")), look up a deserializer in
SerializerRegistry.get_instance().get_deserializer using a stable key (e.g.,
type or module+type) and call it if present; if no deserializer exists, return a
SimpleNamespace fallback with __pyisolate_type__/__pyisolate_module__ metadata
and do not attempt any dynamic imports or constructor calls. Ensure
CallableProxy, bytes, enum, TensorRef, and exception handling remain unchanged.
In `@pyisolate/_internal/serialization_registry.py`:
- Around line 40-45: When re-registering a serializer for an existing type_name,
the old deserializer can remain if the new registration omits one; update the
registration logic (the code block that assigns self._serializers[type_name] =
serializer and conditionally sets self._deserializers[type_name] = deserializer)
to remove any previously registered deserializer when no new deserializer is
provided (e.g., pop self._deserializers[type_name] if present when deserializer
is falsy) so serializers and deserializers stay consistent.
- Around line 26-31: The get_instance method on SerializerRegistry uses a
non-thread-safe check-then-assign on the class attribute _instance; make
instantiation thread-safe by adding a class-level threading.Lock (e.g.,
_instance_lock = threading.Lock()) and wrapping the creation in a double-checked
lock inside get_instance: first check if cls._instance is None, then with
cls._instance_lock: check again and assign cls._instance = cls() if still None;
alternatively, if you want to keep it simple, add a clear docstring on
SerializerRegistry.get_instance stating that it is only safe when called during
single-threaded bootstrap and not thread-safe for concurrent calls.
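The double-checked locking option looks like this in miniature (a sketch, not the actual SerializerRegistry):

```python
import threading

class RegistrySketch:
    """Minimal double-checked locking for a singleton accessor."""
    _instance = None
    _instance_lock = threading.Lock()

    @classmethod
    def get_instance(cls):
        if cls._instance is None:          # fast path: no lock once created
            with cls._instance_lock:       # slow path: serialize creation
                if cls._instance is None:  # re-check under the lock
                    cls._instance = cls()
        return cls._instance
```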
In `@pyisolate/_internal/uds_client.py`:
- Around line 126-243: The code unconditionally imports torch (around the call
to torch.multiprocessing.set_sharing_strategy and earlier) which will crash in
environments without torch; wrap the torch import and set_sharing_strategy call
in a try/except ImportError so torch is only required when needed, and only
raise if config.get("share_torch", False) is true but torch is missing;
specifically, keep register_tensor_serializer(SerializerRegistry.get_instance())
as-is (it doesn't need torch), change the unconditional import torch /
set_sharing_strategy block to a guarded import used only when share_torch is
required, and ensure the context assignment that uses torch.inference_mode()
similarly imports torch inside the if and raises a clear error if share_torch is
true but torch is unavailable.
In `@pyisolate/interfaces.py`:
- Around line 63-64: Reword the docstring for the Protocol method
handle_api_registration to clarify that the hook is required by the Protocol but
implementations may perform no action (i.e., a no-op) rather than implying the
method itself is optional; update the docstring on the handle_api_registration
signature in pyisolate/interfaces.py to state it is a required hook that
implementations can leave as a no-op for APIs that need no post-registration
setup.
In `@README_COMFYUI.md`:
- Around line 141-143: Replace the misspelled manifest filename "pysiolate.yaml"
with the correct "pyisolate.yaml" in the README text where it appears (the line
describing custom_node's pysiolate.yaml's dependencies), updating the string
literal so all references to the manifest filename use "pyisolate.yaml".
In `@README.md`:
- Around line 11-13: Update the README entry that currently lists "Python 3.9+"
to match pyproject.toml by changing it to "Python 3.10+"; locate the README.md
bullet (the list item containing "Python 3.9+") and replace the version string
so documentation and the package requirement (pyproject.toml's python
requirement) are consistent.
In `@tests/harness/host.py`:
- Around line 72-84: Save the original TMPDIR in __init__ (e.g.,
self._orig_tmpdir = os.environ.get("TMPDIR")), continue to set
os.environ["TMPDIR"] = str(self.shared_tmp) as now, and in the cleanup method
(the class cleanup/teardown routine that removes self.temp_dir / shared_tmp)
always restore os.environ["TMPDIR"] back to self._orig_tmpdir (using a finally
block to ensure restoration even if deletion fails). Modify cleanup to not
swallow exceptions: collect any exceptions during removal of self.shared_tmp or
self.temp_dir and either re-raise a combined exception or log them via the
existing logger so failures are visible (ensure TMPDIR restoration runs
regardless). Reference: __init__, self.shared_tmp, os.environ["TMPDIR"], and the
class cleanup method.
In `@tests/integration_v2/conftest.py`:
- Around line 6-14: The fixture creates ReferenceHost() then calls host.setup()
outside the try so if setup raises cleanup never runs; change reference_host to
instantiate host before the try, move host.setup() inside the try block (so the
try begins immediately after host = ReferenceHost()), then yield host, and keep
finally calling await host.cleanup() so cleanup always executes even when setup
fails; reference symbols: fixture reference_host and methods ReferenceHost.setup
and ReferenceHost.cleanup.
In `@tests/test_adapter_contract.py`:
- Around line 68-109: Tests share the global SerializerRegistry singleton which
can race in parallel runs; add a pytest fixture (e.g., autouse fixture in
conftest or at top of tests/test_adapter_contract.py) that obtains
SerializerRegistry.get_instance(), calls .clear() before yielding and again
after the test to ensure isolation, then ensure tests (or file) use that fixture
(autouse=True) so register_serializers, SerializerRegistry.get_instance(),
get_serializer and related calls run against a cleared registry for each test.
In `@tests/test_bootstrap_additional.py`:
- Around line 9-20: The test mutates sys.path but only restores it after
assertions, so failures can leave sys.path modified; wrap the call to
bootstrap._apply_sys_path and the subsequent assertions in a try/finally that
assigns sys.path[:] = original in the finally block (use the existing original =
list(sys.path) as the saved snapshot) to guarantee restoration; update the
test_apply_sys_path_merges_and_dedup function accordingly so any exception still
triggers the cleanup.
In `@tests/test_extension_lifecycle.py`:
- Around line 171-202: The tests test_extension_lifecycle_phases and
test_extension_stop_is_idempotent only document expected contracts and don’t
exercise behavior; replace or augment them with real integration assertions:
write a test that constructs an actual Extension/ExtensionConfig (or use
ExtensionManager), triggers venv creation/start (call whatever method creates
the environment and start/launch, e.g., Extension.start or Manager.launch),
verify the process is spawned and sys.path/RPC channel is set (check
process.is_alive or manager.is_running and a simple RPC call returns expected
result), then stop the extension and assert resources cleaned up; also implement
a test that calls Extension.stop (or Manager.stop) twice and asserts no
exceptions and that state remains stopped (idempotency). Use the existing test
names test_extension_lifecycle_phases and test_extension_stop_is_idempotent to
replace the doc-only bodies and reference Extension, ExtensionConfig,
start/launch, execute_rpc (or equivalent methods) and stop for locating code.
- Around line 116-117: Open calls that read the YAML manifest use plain
open(...) which can mis-handle non-ASCII; change them to open(manifest_path,
encoding="utf-8") before passing to yaml.safe_load. Update the occurrence that
creates `manifest` (the with open(manifest_path) as f: manifest =
yaml.safe_load(f)) and the other similar open at the later occurrence mentioned
in the review so both use encoding="utf-8".
In `@tests/test_host_internal_ext.py`:
- Around line 219-226: The test hardcodes POSIX venv paths
("lib/python3.12/site-packages" and "bin/python") which break on other Python
versions and Windows; update the setup to derive paths from the running platform
and Python version by computing py_ver =
f"python{sys.version_info.major}.{sys.version_info.minor}" and branching on
platform.system(): on Windows set site_packages = venv_path / "Lib" /
"site-packages" and python_exe = venv_path / "Scripts" / "python.exe", otherwise
set site_packages = venv_path / "lib" / py_ver / "site-packages" and python_exe
= venv_path / "bin" / "python" (use these variables in place of the hardcoded
ones for the other occurrences at lines noted); reference variables venv_path,
site_packages, python_exe when making the change.
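The platform-derived layout described above can be sketched as a helper (the function name is hypothetical; the path logic follows the comment):

```python
import platform
import sys
from pathlib import Path

def venv_layout(venv_path: Path) -> tuple[Path, Path]:
    """Derive site-packages and interpreter paths for the running platform/version."""
    py_ver = f"python{sys.version_info.major}.{sys.version_info.minor}"
    if platform.system() == "Windows":
        site_packages = venv_path / "Lib" / "site-packages"
        python_exe = venv_path / "Scripts" / "python.exe"
    else:
        site_packages = venv_path / "lib" / py_ver / "site-packages"
        python_exe = venv_path / "bin" / "python"
    return site_packages, python_exe
```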
- Around line 64-80: The DummyExtension __init__ currently hard-codes
module_path as "/tmp/mod.py"; change it to derive the path from the tmp_path
fixture instead (e.g., use tmp_path / "mod.py" and convert to str) so
module_path is platform-independent; update the module_path argument in the
super().__init__ call within DummyExtension.__init__ to use the computed
tmp_path-based value and ensure any callers still pass tmp_path as before.
- Around line 88-90: The override of _install_dependencies in
tests/test_host_internal_ext.py is a no-op and prevents
test_install_dependencies_* from exercising real logic; change the method to
delegate to the base implementation (call super()._install_dependencies() or the
appropriate base class method) instead of immediately returning so the inherited
installation logic runs during tests.
In `@tests/test_host_public.py`:
- Around line 47-56: The test uses a hard-coded platform-specific module path
("/tmp/mod.py") in base_config; change base_config to compute "module_path" from
the tmp_path fixture (e.g., create a file path under tmp_path such as
tmp_path/"mod.py" and set "module_path" to its string) so tests are portable
across platforms and CI; update the base_config function to derive and return
"module_path" from the tmp_path argument instead of the literal "/tmp/mod.py".
In `@tests/test_shared_additional.py`:
- Around line 56-86: The test creates and sets a new event loop
(asyncio.new_event_loop() + asyncio.set_event_loop(loop)) but doesn't restore
the global event loop after closing it; update
test_async_rpc_send_thread_sets_exception_on_send_failure to call
asyncio.set_event_loop(None) (or reset to the previous loop) immediately after
loop.close() so a closed loop is not left as the current event loop, and apply
the same change to the other similar test (lines 88-118) that follows the same
pattern; reference the test function name and ensure the cleanup happens at the
end of each test after loop.close().
♻️ Duplicate comments (33)
tests/test_torch_utils_additional.py (2)
6-16: Add cache cleanup in teardown to prevent test pollution.
The cache is cleared before the test but not after. If another test runs after this without clearing the cache, it may receive stale mocked results.
19-26: Test logic is correct; apply the same cache cleanup pattern here.
The test correctly validates that the base package set is returned when metadata enumeration fails. Apply the same `finally` block cache cleanup as suggested for the first test.
Suggested improvement

```diff
 def test_get_torch_ecosystem_packages_handles_exception(monkeypatch):
     def bad_distributions():
         raise RuntimeError("boom")

     torch_utils.get_torch_ecosystem_packages.cache_clear()
     monkeypatch.setattr(torch_utils.importlib_metadata, "distributions", bad_distributions)
-    pkgs = torch_utils.get_torch_ecosystem_packages()
-    assert "torch" in pkgs  # base set still returned
+    try:
+        pkgs = torch_utils.get_torch_ecosystem_packages()
+        assert "torch" in pkgs  # base set still returned
+    finally:
+        torch_utils.get_torch_ecosystem_packages.cache_clear()
```

tests/conftest.py (1)
34-47: Remove redundant manual restore after `yield`.
`monkeypatch` already reverts attributes on teardown, so the explicit reset is unnecessary.
♻️ Simplify fixture teardown

```diff
     monkeypatch.setattr(host_internal.Extension, "_Extension__launch", dummy_launch)
     yield
-    monkeypatch.setattr(host_internal.Extension, "_Extension__launch", original_launch)
```

README_COMFYUI.md (2)
38-41: Fix install command: `git install .` is invalid.
This should be `pip install .` (or `uv pip install .`) to actually install the package.
🛠️ Proposed fix

```diff
 git clone https://github.com/pollockjj/pyisolate
 cd pyisolate
-git install .
+pip install .
```
202-202: Broken internal anchors need valid targets.
The links `#appendix-supported-apis`, `#advanced-web-routes`, and `#what-doesnt-work-yet` don’t resolve. Add the missing sections or update the anchors to existing headings (e.g., `#what-doesnt-work`).
🛠️ Minimal fix for the known anchor

```diff
-**Cause:** Likely using a pattern that doesn't work with isolation (see [What Doesn't Work](#what-doesnt-work-yet)).
+**Cause:** Likely using a pattern that doesn't work with isolation (see [What Doesn't Work](#what-doesnt-work)).
```

Also applies to: 244-244, 306-306
tests/test_remote_handle.py (1)
6-31: Add pickle round-trip coverage for `RemoteObjectHandle`.
Since `__module__` is explicitly set for pickling, a simple pickle/unpickle test would lock in that contract.
🧪 Suggested test

```diff
+import pickle
+
 class TestRemoteObjectHandleContract:
@@
     def test_remote_handle_repr(self):
         """RemoteObjectHandle has informative repr."""
         handle = RemoteObjectHandle("my_object", "MyClass")
@@
         assert "MyClass" in repr_str
+
+    def test_remote_handle_pickle_roundtrip(self):
+        """RemoteObjectHandle should survive pickle round-trip."""
+        handle = RemoteObjectHandle("obj_456", "SomeType")
+        restored = pickle.loads(pickle.dumps(handle))
+        assert restored.object_id == handle.object_id
+        assert restored.type_name == handle.type_name
```

pyisolate/_internal/socket_utils.py (1)
34-42: Harden permissions when IPC dir already exists.
`mkdir(..., mode=0o700)` won’t change perms on an existing directory; a prior 0o755 dir can expose IPC sockets. Ensure perms are tightened after creation.
🔒 Proposed fix

```diff
 def ensure_ipc_socket_dir() -> Path:
     """Create and return the IPC socket directory with appropriate permissions."""
     socket_dir = get_ipc_socket_dir()
     if os.name == "nt":
         # Windows: mkdir without mode (permissions handled by OS)
         socket_dir.mkdir(parents=True, exist_ok=True)
     else:
         # Linux/Unix: Secure permissions
         socket_dir.mkdir(parents=True, exist_ok=True, mode=0o700)
+        current_mode = socket_dir.stat().st_mode & 0o777
+        if current_mode != 0o700:
+            socket_dir.chmod(0o700)
     return socket_dir
```

README.md (1)
79-80: Fix documentation link formatting and path typo.
Use a proper markdown link and correct `mysolate/` → `pyisolate/`.
🛠️ Proposed fix

```diff
-- Project site: https://comfy-org.github.io/pyisolate/
-- Walkthroughs & architecture notes: see `mysolate/HELLO_WORLD.md` and `mysolate/GETTING_STARTED.md`
+- [Project site](https://comfy-org.github.io/pyisolate/)
+- Walkthroughs & architecture notes: see `pyisolate/HELLO_WORLD.md` and `pyisolate/GETTING_STARTED.md`
```

pyisolate/_internal/sandbox_detect.py (1)
136-164: Make the bwrap smoke test tolerant of non‑standard filesystem layouts.
Line 151‑163 hard‑codes `/usr`, `/bin`, `/lib`, `/lib64`, and `/usr/bin/true`; this can produce false negatives on merged‑/usr or minimal images. Consider binding only existing paths and choosing an available `true` binary.
🔧 Suggested fix

```diff
-import logging
-import shutil
+import logging
+import os
+import shutil
@@
-        result = subprocess.run(  # noqa: S603
-            [
-                bwrap_path,
-                "--unshare-user-try",
-                "--dev",
-                "/dev",
-                "--proc",
-                "/proc",
-                "--ro-bind",
-                "/usr",
-                "/usr",
-                "--ro-bind",
-                "/bin",
-                "/bin",
-                "--ro-bind",
-                "/lib",
-                "/lib",
-                "--ro-bind",
-                "/lib64",
-                "/lib64",
-                "/usr/bin/true",
-            ],
+        bind_args: list[str] = []
+        for path in ("/usr", "/bin", "/lib", "/lib64"):
+            if os.path.exists(path):
+                bind_args.extend(["--ro-bind", path, path])
+
+        true_binary = shutil.which("true") or "/usr/bin/true"
+
+        result = subprocess.run(  # noqa: S603
+            [
+                bwrap_path,
+                "--unshare-user-try",
+                "--dev",
+                "/dev",
+                "--proc",
+                "/proc",
+                *bind_args,
+                true_binary,
+            ],
             capture_output=True,
             timeout=10,
         )
```

pyisolate/_internal/sandbox.py (2)
148-155: Fix inconsistent `/usr` CUDA‑path skip logic.
Line 151‑154 contradicts the comment; `/usr/local` is already under `/usr`, so any `/usr/*` path is already covered by the `/usr` bind.
🔧 Suggested fix

```diff
-        # Skip if already covered by /usr bind
-        if cuda_path.startswith("/usr/") and not cuda_path.startswith("/usr/local/"):
-            # Actually /usr/local is in /usr.
-            # Safe heuristic: if it starts with /usr, we assume covered.
-            continue
+        # Skip paths already covered by the /usr bind (including /usr/local)
+        if cuda_path.startswith("/usr/"):
+            continue
```
260-270: Remove duplicate CUDA env var entry.
`PYISOLATE_ENABLE_CUDA_IPC` appears twice.
🔧 Suggested fix

```diff
     cuda_env_vars = [
         "CUDA_HOME",
         "CUDA_PATH",
         "CUDA_VISIBLE_DEVICES",
         "NVIDIA_VISIBLE_DEVICES",
         "LD_LIBRARY_PATH",
         "PYTORCH_CUDA_ALLOC_CONF",
         "TORCH_CUDA_ARCH_LIST",
         "PYISOLATE_ENABLE_CUDA_IPC",
-        "PYISOLATE_ENABLE_CUDA_IPC",
     ]
```

tests/harness/test_package/__init__.py (1)
29-33: Avoid `sys.modules` as a test sentinel.
Line 32 mutates `sys.modules`, which can leak across tests. Prefer a module‑level flag (or a registry) and update any readers accordingly.
🔧 Suggested fix

```diff
-import sys
 from typing import Any
@@
 logger = logging.getLogger(__name__)
+_test_ext_initialized = False
@@
-    sys.modules["_test_ext_initialized"] = True  # type: ignore
+    global _test_ext_initialized
+    _test_ext_initialized = True
```

Please verify/update any call sites that check this marker:

```bash
#!/bin/bash
rg -n '_test_ext_initialized'
```

tests/test_serialization_registry.py (1)
1-31: Add autouse cleanup to isolate singleton state across tests.
The registry is a singleton and can leak handlers between tests. Please add an autouse fixture to clear before/after tests.
🧹 Proposed fix (autouse cleanup)
```diff
+import pytest
 from pyisolate._internal.serialization_registry import SerializerRegistry

+@pytest.fixture(autouse=True)
+def _clear_registry():
+    registry = SerializerRegistry.get_instance()
+    registry.clear()
+    yield
+    registry.clear()
+
 def test_singleton_identity():
     r1 = SerializerRegistry.get_instance()
     r2 = SerializerRegistry.get_instance()
     assert r1 is r2
```

tests/test_config_validation.py (1)
35-210: Add required `sandbox` and `env` fields to all ExtensionConfig dicts.
`ExtensionConfig` is a `TypedDict` with `total=True`, so missing `sandbox` and `env` will break type checks. Please add these keys to every config in this file.
🛠️ Example fix (apply to all ExtensionConfig dicts)

```diff
 config: ExtensionConfig = {
     "name": "",
     "module_path": "/path/to/ext",
     "isolated": True,
     "dependencies": [],
     "apis": [],
     "share_torch": False,
     "share_cuda_ipc": False,
+    "sandbox": {},
+    "env": {},
 }
```

tests/integration_v2/debug_rpc.py (1)
6-23: Avoid module-level logging configuration in tests.
Module-import logging config can affect unrelated tests in the same session. Scope it to the test body instead.
♻️ Suggested adjustment

```diff
-# Configure logging to see what's happening
-logging.basicConfig(level=logging.DEBUG, stream=sys.stderr)
-logging.getLogger("pyisolate").setLevel(logging.DEBUG)
-
-
 @pytest.mark.asyncio
 async def test_debug_ping(reference_host):
     print("\n--- Starting Debug Ping ---")
+    # Configure logging only for this test
+    logging.basicConfig(level=logging.DEBUG, stream=sys.stderr)
+    logging.getLogger("pyisolate").setLevel(logging.DEBUG)
     ext = reference_host.load_test_extension("debug_ping", isolated=True)
```

tests/test_host_integration.py (1)
4-52: Replace hard-coded `/tmp` paths with `tmp_path` (blocking per PR objectives).
The tests and FakeAdapter bake in `/tmp/ComfyUI`, which breaks cross-platform runs and conflicts with the PR’s blocking items. Parameterize the root and use `tmp_path`.
🛠️ Proposed fix

```diff
+import pytest
+
 from pyisolate._internal import host


 class FakeAdapter:
     identifier = "fake"

-    def __init__(self, preferred_root="/tmp/ComfyUI"):
+    def __init__(self, preferred_root: str):
         self.preferred_root = preferred_root

-    def get_path_config(self, module_path):
+    def get_path_config(self, module_path: str) -> dict[str, object]:
         return {
             "preferred_root": self.preferred_root,
             "additional_paths": [f"{self.preferred_root}/custom_nodes"],
         }
@@
-def test_build_extension_snapshot_includes_adapter(monkeypatch):
+def test_build_extension_snapshot_includes_adapter(monkeypatch, tmp_path):
     from pyisolate._internal.adapter_registry import AdapterRegistry

-    monkeypatch.setattr(AdapterRegistry, "get", lambda: FakeAdapter())
+    fake_root = tmp_path / "ComfyUI"
+    fake_root.mkdir()
+    monkeypatch.setattr(AdapterRegistry, "get", lambda: FakeAdapter(str(fake_root)))

-    snapshot = host.build_extension_snapshot("/tmp/ComfyUI/custom_nodes/demo")
+    module_path = fake_root / "custom_nodes" / "demo"
+    module_path.mkdir(parents=True)
+    snapshot = host.build_extension_snapshot(str(module_path))
@@
-    assert snapshot.get("context_data", {}).get("module_path") == "/tmp/ComfyUI/custom_nodes/demo"
+    assert snapshot.get("context_data", {}).get("module_path") == str(module_path)


-def test_build_extension_snapshot_no_adapter(monkeypatch):
+def test_build_extension_snapshot_no_adapter(monkeypatch, tmp_path):
     from pyisolate._internal.adapter_registry import AdapterRegistry

     monkeypatch.setattr(AdapterRegistry, "get", lambda: None)

-    snapshot = host.build_extension_snapshot("/tmp/nowhere")
+    snapshot = host.build_extension_snapshot(str(tmp_path / "nowhere"))
```

tests/test_serialization_contract.py (1)
210-225: Prefer a runtime protocol check over attribute/callable checks.
Using `isinstance(registry, SerializerRegistryProtocol)` is cleaner and more idiomatic for a `@runtime_checkable` protocol.
♻️ Suggested refactor

```diff
+from pyisolate.interfaces import SerializerRegistryProtocol
+
 def test_registry_matches_protocol(self):
     """SerializerRegistry implements SerializerRegistryProtocol."""
     registry = SerializerRegistry.get_instance()

-    # Check protocol methods exist
-    assert hasattr(registry, "register")
-    assert hasattr(registry, "get_serializer")
-    assert hasattr(registry, "get_deserializer")
-    assert hasattr(registry, "has_handler")
-
-    # Check methods are callable
-    assert callable(registry.register)
-    assert callable(registry.get_serializer)
-    assert callable(registry.get_deserializer)
-    assert callable(registry.has_handler)
+    assert isinstance(registry, SerializerRegistryProtocol)
```

tests/integration_v2/test_isolation.py (3)
12-21: Use the sensitive file in the isolation assertion or remove it.
Right now the temp file is created and cleaned up but never exercised. Either test child read/write access to `sensitive_path` or remove this setup.
22-35: Remove exploratory/debug comments from the test body.
These blocks are developer notes and should be replaced with a concise intent comment.
🧹 Cleanup suggestion
```diff
-    # 3. Attempt to read/write sensitive file from child
-    # By default, bwrap only binds specific paths. /tmp is usually private or shared?
-    # In ReferenceHost/pyisolate defaults, /tmp might be bound?
-    # Let's try to write to a path that is definitely NOT bound, e.g. the test file itself
-    # if in a private dir?
-    # Actually, let's try to write to /home/johnj/pyisolate (source code) - assuming default
-    # doesn't allow write?
-    # Wait, pyisolate binds 'module_path' and 'venv_path'.
-    # It binds site-packages.
-    # It binds /tmp usually?
-
-    # Let's try to write to a new file in /usr (should be read-only or not bound)
-    # or /etc/passwd (classic test).
-
-    # Test 1: Read /etc/passwd (should fail or be empty if not bound, usually bound RO)
-    # Test 2: Write to /etc/hosts (should fail)
+    # Attempt to write to a system path to verify isolation
@@
-    # Test 3: Write to module path (should be allowed? or RO?)
-    # PyIsolate binds module path. Read-only?
-    # pyisolate/_internal/sandbox.py:
-    # binds module_path as ro-bind usually? I need to check sandbox.py details.
-    # If I can't check, I'll test it.
+    # Module path RO is validated in test_module_path_ro
```

Also applies to: 47-51
39-43: Avoid catching bare Exception in isolation assertions.

Catching everything can mask RPC failures or unexpected errors. Please confirm the specific exception type thrown by proxy.write_file and catch/assert only the expected error(s).

Also applies to: 66-70
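As a shape for that assertion, here is a minimal sketch. It assumes the sandbox surfaces the denial as a PermissionError, which should be confirmed against the real proxy.write_file; write_file below is a hypothetical stand-in:

```python
import pytest


def write_file(path: str) -> None:
    # Hypothetical stand-in for proxy.write_file hitting a read-only bind mount
    raise PermissionError(f"write denied: {path}")


def test_write_is_blocked():
    # Assert the one expected error type instead of a bare Exception,
    # so RPC failures or unrelated bugs still surface as test failures.
    with pytest.raises(PermissionError):
        write_file("/etc/hosts")


test_write_is_blocked()
```

If the child actually raises a wrapped RPC error instead, the `pytest.raises` target should name that wrapper type rather than widening back to `Exception`.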
tests/test_path_helpers_contract.py (2)
63-65: Simplify host-path membership checks to avoid false positives.

Normalize once and assert direct membership for clarity and correctness.
♻️ Proposed simplification
-    for path in host:
-        # Paths may be normalized
-        assert any(os.path.normpath(path) in os.path.normpath(r) for r in result)
+    result_normalized = {os.path.normpath(r) for r in result}
+    for path in host:
+        assert os.path.normpath(path) in result_normalized, f"{path} not in result"
77-78: Assert extra paths are actually included.

The length check can pass even if extra paths are missing.
✅ Stronger assertion
-    # Extra paths should appear somewhere
-    assert len(result) >= len(host)
+    result_normalized = {os.path.normpath(r) for r in result}
+    for path in extra:
+        assert os.path.normpath(path) in result_normalized, f"Extra path {path} not in result"

tests/integration_v2/test_tensors.py (1)
1-3: Guard the torch import to avoid collection failures.

torch is an optional dependency; use pytest.importorskip("torch") so tests skip cleanly when unavailable.

✅ Suggested change

-import pytest
-import torch
+import pytest
+torch = pytest.importorskip("torch")

tests/integration_v2/test_lifecycle.py (2)
6-10: Fix typo in docstring.
Spelling error in the verification list.

✏️ Proposed fix

-    1. Extension can actally accept a connection.
+    1. Extension can actually accept a connection.
25-43: Test name/intent conflicts with isolated=True.
Either rename the test or switch to isolated=False so the name matches behavior.

🧩 Option: rename to match behavior

-async def test_non_isolated_lifecycle(reference_host):
+async def test_isolated_without_torch_sharing(reference_host):
     """
-    Verifies standard mode (host-loaded) works with same API.
+    Verifies isolated mode works without torch sharing.
     """

tests/test_rpc_shutdown.py (2)
37-65: Remove unused BlockingMockTransport.
It’s not referenced anywhere in this test module.

🧹 Proposed cleanup
-class BlockingMockTransport(RPCTransport): - """Transport that allows controlling recv blocking.""" - - def __init__(self): - self.recv_queue = asyncio.Queue() - self.closed = False - - def send(self, obj): - pass - - def recv(self): - # This will be called in a thread - if self.closed: - raise ConnectionError("Closed") - # Block until item available - # Since we can't easily block in a non-async way without - # actual threading primitives, we'll just simulate a quick - # loop check or similar. - # But actually, the RPC implementation calls transport.recv() - # which is synchronous. - import time - - while not self.closed: - time.sleep(0.01) - raise ConnectionError("Closed during block") - - def close(self): - self.closed = True -
110-113: Remove duplicate assertion.
Same assertion appears twice.

🧹 Proposed cleanup

-    assert "Socket closed" in debug_logs[0].message

pyisolate/path_helpers.py (1)
56-82: Make filtered subdirectories configurable (still hardcoded).
The fixed ComfyUI subdir list couples this helper to one layout and should be parameterized.

🔧 Suggested refactor

 def build_child_sys_path(
     host_paths: Sequence[str],
     extra_paths: Sequence[str],
     preferred_root: str | None = None,
+    excluded_subdirs: Sequence[str] | None = None,
 ) -> list[str]:
@@
-    if preferred_root:
+    if preferred_root:
+        if excluded_subdirs is None:
+            excluded_subdirs = ("comfy", "app", "comfy_execution", "utils")
         root_norm = _norm(preferred_root)
-        code_subdirs = {
-            os.path.join(root_norm, "comfy"),
-            os.path.join(root_norm, "app"),
-            os.path.join(root_norm, "comfy_execution"),
-            os.path.join(root_norm, "utils"),
-        }
+        code_subdirs = {os.path.join(root_norm, subdir) for subdir in excluded_subdirs}

pyisolate/config.py (1)
46-53: Mark share_cuda_ipc, sandbox, and env as NotRequired for mypy compliance.

Test configurations throughout the codebase (e.g., tests/test_extension_lifecycle.py) omit these fields, and runtime code already treats them as optional (using .get() and in checks). Marking them as required will cause mypy failures.

Import NotRequired from typing_extensions (already used elsewhere for @override) and update the type annotations.

🔧 Proposed fix

-from typing import TYPE_CHECKING, Any, TypedDict
+from typing import TYPE_CHECKING, Any, TypedDict
+from typing_extensions import NotRequired

Then update the fields:

-    share_cuda_ipc: bool
+    share_cuda_ipc: NotRequired[bool]
     """If True, attempt CUDA IPC-based tensor transport (Linux only, requires ``share_torch``)."""
-    sandbox: dict[str, Any]
+    sandbox: NotRequired[SandboxConfig]
     """Configuration for the sandbox (e.g. writable_paths, network access)."""
-    env: dict[str, str]
+    env: NotRequired[dict[str, str]]
     """Environment variable overrides for the child process."""

tests/path_unification/test_path_helpers.py (3)
44-45: Specify encoding="utf-8" when reading the JSON file.

For consistency with the implementation (which writes with encoding="utf-8"), explicitly specify the encoding when reading.

🔧 Proposed fix

-    with open(output_path) as f:
+    with open(output_path, encoding="utf-8") as f:
         loaded = json.load(f)
96-117: Use deterministic test paths instead of environment/home fallbacks.

Multiple tests use os.environ.get("COMFYUI_ROOT") or str(Path.home() / "ComfyUI"), which introduces non-determinism across environments. Since these tests verify path ordering logic (not filesystem operations), use fixed fake paths for clearer, reproducible tests.

🔧 Proposed fix for test_inserts_comfy_root_first_when_missing

 def test_inserts_comfy_root_first_when_missing(self):
     """If comfy_root provided and not in host_paths, prepend it."""
     host = ["/host/lib1", "/host/lib2"]
     extras = ["/venv/lib"]
-    comfy_root = os.environ.get("COMFYUI_ROOT") or str(Path.home() / "ComfyUI")
+    comfy_root = "/fake/comfyui/root"
     result = build_child_sys_path(host, extras, comfy_root)
     assert result[0] == comfy_root
     assert result[1:3] == host

Apply similar changes to test_does_not_duplicate_comfy_root_if_present, test_removes_comfy_subdirectories_when_root_specified, and test_preserves_venv_site_packages_under_comfy_root.
215-241: Integration test also uses non-deterministic path fallback.

Same issue as the unit tests - lines 234 and 238 use the Path.home() / "ComfyUI" fallback. Consider using a deterministic test path.

🔧 Proposed fix

     # Build child path
+    preferred_root = "/fake/comfyui/root"
     child_path = build_child_sys_path(
         snapshot["sys_path"],
         extras,
-        preferred_root=os.environ.get("COMFYUI_ROOT") or str(Path.home() / "ComfyUI"),
+        preferred_root=preferred_root,
     )

     # Verify structure - check that preferred_root is present
-    preferred = os.environ.get("COMFYUI_ROOT") or str(Path.home() / "ComfyUI")
-    assert preferred in child_path
+    assert preferred_root in child_path
303-318: Handle missing host torch when share_torch=False.

import torch will raise ImportError and produce a noisy stack trace. Consider a clear, user-facing error explaining the requirement.

🐛 Suggested error handling

 torch_spec: str | None = None
 if not config["share_torch"]:
-    import torch
+    try:
+        import torch
+    except ImportError as exc:
+        raise RuntimeError(
+            "torch must be installed in the host environment when share_torch is False "
+            "to compute torch_spec/safe_deps for dependency resolution."
+        ) from exc
      fail-fast: false
      matrix:
-       python-version: ['3.9', '3.10', '3.11', '3.12']
+       python-version: ['3.10', '3.11', '3.12']
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify declared Python support in packaging metadata and docs
rg -n --glob '!**/.venv/**' 'python_requires|Programming Language :: Python :: 3\.9|3\.9' pyproject.toml setup.cfg setup.py README* docs* 2>/dev/null || echo "No matches found or files don't exist"

Repository: Comfy-Org/pyisolate
Length of output: 162
Update README files to reflect minimum Python version change.
Line 17 removes Python 3.9 from the CI matrix, but README.md (line 11) and README_COMFYUI.md (line 19) still claim "Python 3.9+" support. Update these files to reflect the new minimum version (3.10) or restore 3.9 to the CI matrix if it should remain supported.
🤖 Prompt for AI Agents
In @.github/workflows/windows.yml at line 17, The README files still state
Python "3.9+" while the CI matrix in .github/workflows/windows.yml now tests
Python versions ['3.10','3.11','3.12']; update README.md and README_COMFYUI.md
to change any references of "Python 3.9+" (or similar phrasing) to "Python
3.10+" (or the correct minimum) so docs match the workflow, or alternatively
restore '3.9' into the python-version list in .github/workflows/windows.yml if
3.9 should remain supported.
    async def load_extensions(self, extension_configs: list[dict], extension_base_cls) -> list:
        """Load extensions defined in configs."""
        config = ExtensionManagerConfig(venv_root_path=str(self.test_root / "extension-venvs"))
        self.manager = ExtensionManager(extension_base_cls, config)

        loaded_extensions = []
        for cfg in extension_configs:
            name = cfg["name"]
            # Config might be passed as simple dict

            # Reconstruct dependencies if not passed mostly for existing pattern in benchmark.py
            # But create_extension handles writing to disk. loading needs ExtensionConfig object.

            # This is slightly tricky because creation and loading are split in benchmark.py
            # I'll rely on the caller to pass correct params or infer them?
            # Actually benchmark.py logic: create_extension then load_extensions loop.

            # Since we know the path structure from create_extension:
            module_path = str(self.test_root / "extensions" / name)

            # NOTE: benchmark.py passed deps to create_extension but strangely not to load_extensions
            # We must pass them here to ExtensionConfig.
            # Ideally load_extensions accepts full config objects or we recreate them.
            # I will adapt this to match what benchmark.py expects or refactor benchmark.py to iterate.

            # Simpler approach: Allow caller to just use manager directly if they want,
            # or provide a helper that does what benchmark.py did (but correctly).
            pass

        return loaded_extensions  # placeholder, I will implement explicit loading in the script
load_extensions is currently a no-op.
It returns an empty list and never constructs/loads ExtensionConfigs, so benchmarks relying on it will silently do nothing. This should either be implemented or raise a clear NotImplementedError.
✅ Proposed implementation
async def load_extensions(self, extension_configs: list[dict], extension_base_cls) -> list:
"""Load extensions defined in configs."""
config = ExtensionManagerConfig(venv_root_path=str(self.test_root / "extension-venvs"))
self.manager = ExtensionManager(extension_base_cls, config)
loaded_extensions = []
for cfg in extension_configs:
name = cfg["name"]
- # Config might be passed as simple dict
-
- # Reconstruct dependencies if not passed mostly for existing pattern in benchmark.py
- # But create_extension handles writing to disk. loading needs ExtensionConfig object.
-
- # This is slightly tricky because creation and loading are split in benchmark.py
- # I'll rely on the caller to pass correct params or infer them?
- # Actually benchmark.py logic: create_extension then load_extensions loop.
-
- # Since we know the path structure from create_extension:
- module_path = str(self.test_root / "extensions" / name)
-
- # NOTE: benchmark.py passed deps to create_extension but strangely not to load_extensions
- # We must pass them here to ExtensionConfig.
- # Ideally load_extensions accepts full config objects or we recreate them.
- # I will adapt this to match what benchmark.py expects or refactor benchmark.py to iterate.
-
- # Simpler approach: Allow caller to just use manager directly if they want,
- # or provide a helper that does what benchmark.py did (but correctly).
- pass
-
- return loaded_extensions # placeholder, I will implement explicit loading in the script
+ module_path = cfg.get("module_path") or str(self.test_root / "extensions" / name)
+ extension_cfg: ExtensionConfig = {
+ "name": name,
+ "module_path": module_path,
+ "isolated": cfg.get("isolated", True),
+ "dependencies": cfg.get("dependencies", []),
+ "apis": cfg.get("apis", []),
+ "share_torch": cfg.get("share_torch", False),
+ "share_cuda_ipc": cfg.get("share_cuda_ipc", False),
+ "sandbox": cfg.get("sandbox", {}),
+ "env": cfg.get("env", {}),
+ }
+ loaded_extensions.append(self.manager.load_extension(extension_cfg))
+
+ return loaded_extensions🤖 Prompt for AI Agents
In `@benchmarks/benchmark_harness.py` around lines 69 - 98, The load_extensions
method is a no-op; replace the placeholder with real loading: for each cfg in
extension_configs build an ExtensionConfig (use name, module_path =
str(self.test_root / "extensions" / name), and deps = cfg.get("deps", [])) and
use self.manager (ExtensionManager created above) to load each extension (e.g.,
call the manager's load/registration method for an ExtensionConfig) and append
the returned extension instances to loaded_extensions; if the ExtensionManager
API doesn't support loading by ExtensionConfig, raise NotImplementedError with a
clear message referencing load_extensions and ExtensionManager so callers know
to update benchmark.py accordingly.
class DatabaseSingleton(ProxiedSingleton):
    """Simple dictionary-based singleton for testing state."""
    def __init__(self):
        self._db = {}

    async def set_value(self, key: str, value):
        self._db[key] = value

    async def get_value(self, key: str):
        return self._db.get(key)

# Override benchmark runner settings for quick mode
if quick:
    test_instance.runner = None  # Will be created in setup with different settings
Missing super().__init__() call in DatabaseSingleton.
ProxiedSingleton has an __init__ method that should be called to ensure proper singleton initialization.
🔧 Proposed fix
class DatabaseSingleton(ProxiedSingleton):
"""Simple dictionary-based singleton for testing state."""
def __init__(self):
+ super().__init__()
        self._db = {}
🤖 Prompt for AI Agents
In `@benchmarks/benchmark.py` around lines 39 - 48, DatabaseSingleton's __init__
doesn't call ProxiedSingleton.__init__, so the base-class singleton setup isn't
run; update DatabaseSingleton.__init__ to call super().__init__() (typically as
the first statement) before initializing self._db to ensure ProxiedSingleton's
initialization logic executes.
class DatabaseSingleton(ProxiedSingleton):
    def __init__(self):
        self._db = {}
    async def set_value(self, key, value): pass
    async def get_value(self, key): pass
Child-side DatabaseSingleton stub also missing super().__init__().
For consistency and correct singleton behavior, the child-side stub should also call super().__init__().
🔧 Proposed fix in BENCHMARK_EXTENSION_CODE
class DatabaseSingleton(ProxiedSingleton):
def __init__(self):
+ super().__init__()
self._db = {}
async def set_value(self, key, value): pass
    async def get_value(self, key): pass
🤖 Prompt for AI Agents
In `@benchmarks/benchmark.py` around lines 84 - 88, The DatabaseSingleton child
stub's __init__ is missing a call to its parent initializer which can break
ProxiedSingleton setup; update DatabaseSingleton.__init__ to call
super().__init__() (e.g., begin the method with super().__init__()), leaving the
existing self._db initialization and the async stubs set_value/get_value
unchanged so singleton behavior is consistent with ProxiedSingleton.
async def func():
    return await ext.do_stuff(data_val)
Closure captures loop variable by reference, causing all benchmarks to use last value.
The func closure captures data_val by reference. By the time the benchmark runs, all closures will use the final data_val from the loop. This causes incorrect benchmark results.
🔧 Proposed fix using default argument capture
for data_name, data_val in test_data:
bench_name = f"{name}_{data_name}"
- async def func():
- return await ext.do_stuff(data_val)
+ async def func(val=data_val):
+ return await ext.do_stuff(val)
    print(f"Running {bench_name}...")
🤖 Prompt for AI Agents
In `@benchmarks/benchmark.py` around lines 300 - 301, The closure `func` captures
the loop variable `data_val` by reference so every created closure ends up using
the final loop value; fix this by binding the current `data_val` into the
closure signature (or use a small factory) so each `func` captures its own value
before the loop advances—e.g. make `func` accept a default parameter set to the
current `data_val` and call `ext.do_stuff` with that parameter (target symbols:
func, data_val, ext.do_stuff).
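Stripped of the benchmark harness, the late-binding pitfall and the default-argument fix can be reproduced in a few lines; do_stuff below is a trivial stand-in for ext.do_stuff:

```python
import asyncio


async def do_stuff(val: int) -> int:
    return val * 2  # stand-in for ext.do_stuff


async def main():
    late, bound = [], []
    for data_val in (1, 2, 3):
        async def broken():
            # Late binding: data_val is resolved when awaited, after the loop ended
            return await do_stuff(data_val)

        async def fixed(val=data_val):
            # Default argument captures the current value at definition time
            return await do_stuff(val)

        late.append(broken)
        bound.append(fixed)
    return [await f() for f in late], [await f() for f in bound]


late_results, bound_results = asyncio.run(main())
print(late_results)   # [6, 6, 6]: every closure saw the final data_val
print(bound_results)  # [2, 4, 6]: each closure kept its own value
```

The same effect can also be achieved with a small factory function returning the coroutine function, which some style guides prefer over default-argument capture.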
class DummyExtension(Extension):
    def __init__(self, tmp_path: Path, config_overrides=None):
        base_config = {
            "name": "demo",
            "dependencies": [],
            "share_torch": True,
            "share_cuda_ipc": False,
            "apis": [],
        }
        if config_overrides:
            base_config.update(config_overrides)
        super().__init__(
            module_path="/tmp/mod.py",
            extension_type=SimpleNamespace,
            config=base_config,
            venv_root_path=str(tmp_path),
        )
Avoid hard‑coded module paths in DummyExtension.
/tmp/mod.py is platform-specific and conflicts with the “no hard‑coded paths” objective.
🔧 Suggested fix
super().__init__(
- module_path="/tmp/mod.py",
+ module_path=str(tmp_path / "mod.py"),
extension_type=SimpleNamespace,
config=base_config,
        venv_root_path=str(tmp_path),
    )
🤖 Prompt for AI Agents
In `@tests/test_host_internal_ext.py` around lines 64 - 80, The DummyExtension
__init__ currently hard-codes module_path as "/tmp/mod.py"; change it to derive
the path from the tmp_path fixture instead (e.g., use tmp_path / "mod.py" and
convert to str) so module_path is platform-independent; update the module_path
argument in the super().__init__ call within DummyExtension.__init__ to use the
computed tmp_path-based value and ensure any callers still pass tmp_path as
before.
def _install_dependencies(self):
    return
Tests for dependency installation are currently no‑ops.
Overriding _install_dependencies to return immediately means test_install_dependencies_* never exercises the real logic.
🔧 Suggested fix (delegate to base method)
def _install_dependencies(self):
- return
+ return super()._install_dependencies()🤖 Prompt for AI Agents
In `@tests/test_host_internal_ext.py` around lines 88 - 90, The override of
_install_dependencies in tests/test_host_internal_ext.py is a no-op and prevents
test_install_dependencies_* from exercising real logic; change the method to
delegate to the base implementation (call super()._install_dependencies() or the
appropriate base class method) instead of immediately returning so the inherited
installation logic runs during tests.
venv_path = Path(tmp_path) / "demo"
site_packages = venv_path / "lib" / "python3.12" / "site-packages"
site_packages.mkdir(parents=True, exist_ok=True)

python_exe = venv_path / "bin" / "python"
python_exe.parent.mkdir(parents=True, exist_ok=True)
python_exe.write_text("#!/usr/bin/env python")
python_exe.chmod(0o755)
Derive venv layout from platform/Python version.
Hardcoding python3.12 and bin/python breaks on other versions and Windows.
🔧 Suggested fix
- site_packages = venv_path / "lib" / "python3.12" / "site-packages"
+ pyver = f"python{sys.version_info.major}.{sys.version_info.minor}"
+ lib_dir = "Lib" if sys.platform == "win32" else "lib"
+ site_packages = venv_path / lib_dir / pyver / "site-packages"
- python_exe = venv_path / "bin" / "python"
+ bin_dir = "Scripts" if sys.platform == "win32" else "bin"
+ python_name = "python.exe" if sys.platform == "win32" else "python"
+    python_exe = venv_path / bin_dir / python_name

-    venv_bin = Path(ext.venv_path / "bin")
+    bin_dir = "Scripts" if sys.platform == "win32" else "bin"
+    venv_bin = Path(ext.venv_path / bin_dir)
@@
-    exe = venv_bin / "python"
+    exe = venv_bin / ("python.exe" if sys.platform == "win32" else "python")

Also applies to: 241-244, 260-263
🤖 Prompt for AI Agents
In `@tests/test_host_internal_ext.py` around lines 219 - 226, The test hardcodes
POSIX venv paths ("lib/python3.12/site-packages" and "bin/python") which break
on other Python versions and Windows; update the setup to derive paths from the
running platform and Python version by computing py_ver =
f"python{sys.version_info.major}.{sys.version_info.minor}" and branching on
platform.system(): on Windows set site_packages = venv_path / "Lib" /
"site-packages" and python_exe = venv_path / "Scripts" / "python.exe", otherwise
set site_packages = venv_path / "lib" / py_ver / "site-packages" and python_exe
= venv_path / "bin" / "python" (use these variables in place of the hardcoded
ones for the other occurrences at lines noted); reference variables venv_path,
site_packages, python_exe when making the change.
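A sketch of the platform-aware layout helper the prompt describes. The name venv_layout and its parameter defaults are illustrative, not part of pyisolate or its test suite; parameterizing platform and version keeps it deterministic to test on any host:

```python
import sys
from pathlib import Path


def venv_layout(venv_path: Path,
                platform: str = sys.platform,
                version: tuple[int, int] = sys.version_info[:2]) -> tuple[Path, Path]:
    """Return (site_packages, python_exe) for a venv on the given platform."""
    if platform == "win32":
        # Windows venvs use Lib/site-packages and Scripts/python.exe
        return (venv_path / "Lib" / "site-packages",
                venv_path / "Scripts" / "python.exe")
    # POSIX venvs embed the minor version in the lib directory
    py_ver = f"python{version[0]}.{version[1]}"
    return (venv_path / "lib" / py_ver / "site-packages",
            venv_path / "bin" / "python")


site_packages, python_exe = venv_layout(Path("/v"), platform="linux", version=(3, 12))
print(site_packages.as_posix())  # /v/lib/python3.12/site-packages
print(python_exe.as_posix())     # /v/bin/python
```

In the tests themselves, calling it with the defaults makes the fixture match whatever interpreter is actually running the suite.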
def base_config(tmp_path):
    return {
        "name": "demo",
        "module_path": "/tmp/mod.py",
        "dependencies": [],
        "share_torch": True,
        "share_cuda_ipc": False,
        "apis": [],
        "venv_root_path": str(tmp_path),
    }
Avoid hard‑coded module paths in tests.
/tmp/mod.py is platform-specific and contradicts the PR objective to remove hard-coded paths.
🔧 Suggested fix
def base_config(tmp_path):
return {
"name": "demo",
- "module_path": "/tmp/mod.py",
+ "module_path": str(tmp_path / "mod.py"),
"dependencies": [],
"share_torch": True,
"share_cuda_ipc": False,
"apis": [],
"venv_root_path": str(tmp_path),
}🤖 Prompt for AI Agents
In `@tests/test_host_public.py` around lines 47 - 56, The test uses a hard-coded
platform-specific module path ("/tmp/mod.py") in base_config; change base_config
to compute "module_path" from the tmp_path fixture (e.g., create a file path
under tmp_path such as tmp_path/"mod.py" and set "module_path" to its string) so
tests are portable across platforms and CI; update the base_config function to
derive and return "module_path" from the tmp_path argument instead of the
literal "/tmp/mod.py".
def test_async_rpc_send_thread_sets_exception_on_send_failure():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)

    class FailingQueue:
        def put(self, _):
            raise RuntimeError("boom")

    recv_q: queue.Queue = queue.Queue()
    rpc = AsyncRPC(recv_queue=recv_q, send_queue=FailingQueue())

    pending = RPCPendingRequest(  # type: ignore[call-arg]
        kind="call",
        object_id="obj",
        parent_call_id=None,
        calling_loop=loop,
        future=loop.create_future(),
        method="ping",
        args=(),
        kwargs={},
    )
    rpc.outbox.put(pending)
    rpc.outbox.put(None)

    rpc._send_thread()
    loop.run_until_complete(asyncio.sleep(0))
    assert pending["future"].done() is True
    with pytest.raises(RuntimeError):
        pending["future"].result()
    loop.close()
Reset the event loop after closing.
Leaving a closed loop as the current loop can break later tests.
🔧 Suggested fix
-    loop = asyncio.new_event_loop()
-    asyncio.set_event_loop(loop)
+    loop = asyncio.new_event_loop()
+    asyncio.set_event_loop(loop)
@@
-    loop.close()
+    loop.close()
+    asyncio.set_event_loop(None)

-    loop = asyncio.new_event_loop()
-    asyncio.set_event_loop(loop)
+    loop = asyncio.new_event_loop()
+    asyncio.set_event_loop(loop)
@@
-    loop.close()
+    loop.close()
+    asyncio.set_event_loop(None)

Also applies to: 88-118
🤖 Prompt for AI Agents
In `@tests/test_shared_additional.py` around lines 56 - 86, The test creates and
sets a new event loop (asyncio.new_event_loop() + asyncio.set_event_loop(loop))
but doesn't restore the global event loop after closing it; update
test_async_rpc_send_thread_sets_exception_on_send_failure to call
asyncio.set_event_loop(None) (or reset to the previous loop) immediately after
loop.close() so a closed loop is not left as the current event loop, and apply
the same change to the other similar test (lines 88-118) that follows the same
pattern; reference the test function name and ensure the cleanup happens at the
end of each test after loop.close().
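The create/close/reset pattern in isolation; run_on_private_loop is an illustrative helper (a pytest fixture would package the same steps):

```python
import asyncio


def run_on_private_loop(coro_fn):
    """Run a coroutine on a fresh event loop, then restore global loop state."""
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        return loop.run_until_complete(coro_fn())
    finally:
        loop.close()
        # Do not leave a closed loop installed as the current loop,
        # or later tests that touch the event-loop policy may break.
        asyncio.set_event_loop(None)


async def sample():
    await asyncio.sleep(0)
    return "ok"


print(run_on_private_loop(sample))
```

Wrapping the body in try/finally also guarantees the reset runs even when the coroutine under test raises, which a plain sequential cleanup does not.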
Fix mypy error in torch_utils.py, enable mypy in CI
Actionable comments posted: 21
🤖 Fix all issues with AI agents
In @.github/workflows/pytorch.yml:
- Line 17: Update the documentation strings that still claim Python 3.9 support
to reflect the new minimum of Python 3.10: edit README.md and README_COMFYUI.md
to change occurrences of "Python 3.9+" to "Python 3.10+" and edit
BENCHMARK_INSTRUCTIONS.md to change "Python 3.9 or higher" to "Python 3.10 or
higher" (search for those exact phrases to locate the lines).
In @.pre-commit-config.yaml:
- Around line 14-19: The pre-commit hook for Ruff (repo
https://github.com/astral-sh/ruff-pre-commit, rev v0.14.0) currently passes the
--unsafe-fixes flag on the hook with id: ruff; confirm whether you intend to
apply unsafe fixes on every commit — if not, remove --unsafe-fixes from the args
for the ruff hook (leaving ruff-format as-is) or else move the unsafe option
into a separate, opt-in hook (e.g., a new ruff-unsafe entry) or CI job that runs
only on demand and document its usage so unsafe fixes are not auto-applied on
all commits.
In `@pyisolate/__init__.py`:
- Around line 16-28: The docstring example incorrectly treats
ExtensionManagerConfig and ExtensionConfig as constructors and calls the sync
API with await; change the example to use plain dict literals for
ExtensionManagerConfig and ExtensionConfig and call
ExtensionManager.load_extension without await (i.e., call
manager.load_extension(...) synchronously). Update the shown instantiation of
manager and the load_extension usage to reflect using dicts for the configs and
a normal function call to load_extension.
In `@pyisolate/_internal/client.py`:
- Around line 104-109: The adapter is being handed the class instead of the
injected remote instance because getattr(api, "instance", api) falls back to the
class; after calling api.use_remote(rpc) fetch the injected instance explicitly
and pass that to the adapter. Replace getattr(api, "instance", api) with
something like instance = getattr(api, "instance") (or getattr(api, "instance",
None) and error if None), cast it to ProxiedSingleton, and call
_adapter.handle_api_registration(instance, rpc) so _adapter receives the actual
proxied instance rather than the class; keep references to config["apis"],
api.use_remote, and _adapter.handle_api_registration when making the change.
In `@pyisolate/_internal/environment.py`:
- Around line 286-289: The loop building safe_deps currently appends empty or
whitespace-only entries from config["dependencies"] even though
validate_dependency returns early for empty strings; update the loop in
environment.py that iterates over config["dependencies"] to strip each dep, skip
if the stripped value is empty (or falsy), then call validate_dependency and
append the cleaned dep to safe_deps (referring to the variables safe_deps, dep,
validate_dependency, and config["dependencies"] to locate the change).
In `@pyisolate/_internal/rpc_transports.py`:
- Around line 352-367: The enum deserialization currently imports arbitrary
modules using importlib.import_module(module_name) when
dct["__pyisolate_enum__"] is set; replace that behavior with a safe allowlist or
registry lookup: maintain a registry mapping known module/type names to Enum
classes (e.g., a dict keyed by (module_name, type_name) or by a canonical enum
identifier) and use that registry to resolve enum_type instead of
importlib.import_module; if the registry lookup fails, fall back to returning
dct.get("value") as currently done; update the branch that handles
"__pyisolate_enum__" (the logic that sets module_name, type_name, enum_name and
computes enum_type/getattr) to consult the registry instead of importing
arbitrary modules.
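
A sketch of the registry-based lookup. The dict keys `"module"`, `"type"`, `"name"`, and `"value"` are assumed field names based on the description above; `Color` is a stand-in for a real registered enum:

```python
from enum import Enum

class Color(Enum):  # stand-in for a real registered enum
    RED = "red"

# Allowlist keyed by (module_name, type_name); only these enums can be resolved.
ENUM_REGISTRY = {("myapp.colors", "Color"): Color}

def deserialize_enum(dct):
    """Resolve enums via a registry instead of importlib.import_module."""
    enum_type = ENUM_REGISTRY.get((dct.get("module"), dct.get("type")))
    if enum_type is not None:
        try:
            return enum_type[dct["name"]]
        except KeyError:
            pass
    # Registry lookup failed: fall back to the raw value, as before.
    return dct.get("value")
```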
In `@pyisolate/_internal/sandbox_detect.py`:
- Around line 26-31: Remove the unused enum member SandboxMode.PREFERRED from
the SandboxMode class in sandbox_detect.py: edit the SandboxMode definition to
only include DISABLED and REQUIRED, and update any related docstring/comments to
match the public API in config.py so the internal enum mirrors the external
design; ensure no other code references SandboxMode.PREFERRED remain (remove or
refactor any such references if found).
In `@pyisolate/_internal/sandbox.py`:
- Around line 90-99: When restriction_model != RestrictionModel.NONE the code
silently falls back to degraded mode; add a log entry in that else branch to
warn operators. Update the else block (where restriction_model is compared to
RestrictionModel.NONE) to obtain a module logger (e.g.
logging.getLogger(__name__)) or use the existing logger if present, and emit a
warning/debug message that user/PID namespace isolation was skipped and the
process is running in degraded mode; include the value of restriction_model and
any relevant flags (cmd) in the message for auditability.
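
A minimal sketch of the suggested warning, using a string stand-in for `RestrictionModel.NONE` since the real enum lives in the project:

```python
import logging

logger = logging.getLogger(__name__)

def maybe_warn_degraded(restriction_model, cmd):
    """Emit an audit log when namespace isolation is skipped."""
    if restriction_model == "none":  # stand-in for RestrictionModel.NONE
        return
    logger.warning(
        "User/PID namespace isolation skipped; running in degraded mode "
        "(restriction_model=%s, cmd=%s)",
        restriction_model,
        cmd,
    )
```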
- Around line 182-196: The bare except around the comfy import swallows all
exceptions; change it to only catch ImportError and ModuleNotFoundError when
importing comfy (leave other exceptions to propagate so
KeyboardInterrupt/SystemExit aren't swallowed), and if the import fails emit a
debug-level log message rather than silent pass; update the try/except that
surrounds "import comfy" (affecting comfy, comfy_path and the cmd.extend call)
to except (ImportError, ModuleNotFoundError) as e: and call the module logger
(e.g. logger.debug or logging.getLogger(__name__).debug) with a short message
including the exception text.
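
The narrowed except clause might look like the following sketch; the `--ro-bind` arguments are an assumption based on the surrounding bubblewrap context, and `extend_cmd_with_comfy` is an illustrative name:

```python
import logging
import os

logger = logging.getLogger(__name__)

def extend_cmd_with_comfy(cmd):
    """Bind the comfy package path if importable; log (don't swallow) failures."""
    try:
        import comfy  # type: ignore
    except (ImportError, ModuleNotFoundError) as e:
        # Narrow except: KeyboardInterrupt/SystemExit still propagate.
        logger.debug("comfy not importable, skipping bind: %s", e)
        return cmd
    comfy_path = os.path.dirname(comfy.__file__)
    cmd.extend(["--ro-bind", comfy_path, comfy_path])
    return cmd
```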
In `@pyisolate/_internal/tensor_serializer.py`:
- Around line 234-241: The registry.register calls in register_tensor_serializer
currently pass None as the serializer for "TensorRef" and "TorchReduction",
which will break when the registry expects a callable; replace those None values
with an identity serializer (a callable that returns its input, e.g.,
identity(x) -> x) so registry.register("TensorRef", identity,
deserialize_tensor) and registry.register("TorchReduction", identity,
deserialize_tensor) are used; update the register_tensor_serializer function to
import/define and use that identity function instead of None.
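
A sketch of the identity serializer against a minimal stand-in registry (the real `SerializerRegistry` may differ; `deserialize_tensor` is a placeholder here):

```python
def identity(x):
    """Serializer that returns its input unchanged."""
    return x

class Registry:  # minimal stand-in for the real serializer registry
    def __init__(self):
        self._handlers = {}

    def register(self, name, serializer, deserializer):
        if not callable(serializer) or not callable(deserializer):
            raise TypeError("serializer and deserializer must be callable")
        self._handlers[name] = (serializer, deserializer)

def deserialize_tensor(payload):  # illustrative placeholder
    return payload

registry = Registry()
registry.register("TensorRef", identity, deserialize_tensor)
registry.register("TorchReduction", identity, deserialize_tensor)
```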
- Around line 63-96: In _serialize_cpu_tensor, guard against infinite recursion
when forcing the "file_system" sharing strategy: after detecting sfunc.__name__
== "rebuild_storage_fd" and calling
torch.multiprocessing.set_sharing_strategy("file_system") and t.share_memory_(),
call reductions.reduce_storage(storage) (or re-run the same storage reduction
step) once more and if it STILL returns "rebuild_storage_fd" then raise a clear
exception (e.g., RuntimeError) indicating the file_system sharing retry failed
instead of recursing; reference the symbols _serialize_cpu_tensor,
reductions.reduce_storage, and the "rebuild_storage_fd" branch to locate the
change.
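
The retry-once logic can be sketched with injected callables so it is testable without torch (`reduce_with_fs_retry` is an illustrative name, not the module's real function):

```python
def reduce_with_fs_retry(reduce_storage, set_sharing_strategy, storage):
    """Retry storage reduction once under file_system sharing, never recurse."""
    sfunc, args = reduce_storage(storage)
    if sfunc.__name__ == "rebuild_storage_fd":
        set_sharing_strategy("file_system")
        sfunc, args = reduce_storage(storage)  # single retry, not recursion
        if sfunc.__name__ == "rebuild_storage_fd":
            raise RuntimeError(
                "file_system sharing retry failed: storage still reduces to rebuild_storage_fd"
            )
    return sfunc, args
```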
In `@pyisolate/config.py`:
- Around line 10-20: The file defines a duplicate SandboxMode enum (class
SandboxMode in pyisolate/config.py) that conflicts with SandboxMode in
pyisolate/_internal/sandbox_detect.py (which has DISABLED, PREFERRED, REQUIRED);
remove the local enum and instead re-export the canonical enum from
sandbox_detect (e.g., import SandboxMode from pyisolate._internal.sandbox_detect
and expose it via the same name) so all modules use the single source of truth;
alternatively, if PREFERRED is truly unused, delete it from sandbox_detect and
keep a single enum—references: SandboxMode in pyisolate/config.py and
SandboxMode in pyisolate/_internal/sandbox_detect.py.
In `@pyisolate/host.py`:
- Around line 60-69: The proxy property currently inspects the private flag
_extension._process_initialized; change this to call a new public status method
on the Extension (e.g., is_process_started() or needs_restart()) instead of
reading private state directly. Add the public method to the Extension
implementation(s) so Host/HostExtension can query process state, then update the
proxy property to use that method before invalidating _proxy and before calling
ensure_process_started() and get_proxy(); keep the existing
ensure_process_started and get_proxy calls intact.
In `@pyisolate/path_helpers.py`:
- Around line 93-100: The add_path closure duplicates logic found elsewhere;
extract it into a shared helper function (e.g., a top-level function named
add_path or add_normalized_path) that accepts (path: str, seen: Set[str],
result: List[str], normalizer: Callable[[str], str] = _norm) and encapsulates
the checks and mutations currently in add_path, then replace the local closures
in both modules with calls to this shared helper; ensure you preserve behavior
by using the same _norm function (or inject it) and update imports in modules
that used the closure to import the new utility.
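
A sketch of the extracted helper with the suggested signature, using `os.path.normpath` in place of the project's `_norm`:

```python
import os
from typing import Callable, List, Set

def add_normalized_path(
    path: str,
    seen: Set[str],
    result: List[str],
    normalizer: Callable[[str], str] = os.path.normpath,
) -> None:
    """Append path to result unless its normalized form was already seen."""
    key = normalizer(path)
    if key not in seen:
        seen.add(key)
        result.append(path)
```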
In `@pyproject.toml`:
- Around line 28-31: The dependency constraint for the package "uv" in
pyproject.toml is too loose (currently "uv>=0.1.0"); tighten it to a safer
minimum (for example "uv>=0.4.0" or the lowest release that supports the
project's usage of uv venv and uv pip install) by updating the dependency string
for "uv" in pyproject.toml; verify the chosen minimum version actually supports
the uv venv and uv pip install commands used in the codebase before committing.
In `@README.md`:
- Around line 64-69: The fenced code block under "Expected output" is missing a
language identifier; update the triple backticks that surround the example table
so they read ```text instead of ``` to mark it as plain text (i.e., change the
opening fence for the block containing "Extension1 | ✓ PASSED ..." to ```text).
- Around line 19-36: Add blank lines before and after the Markdown headings
"Core Variables (Set by PyIsolate automatically)" and "Debug Variables (Set by
user)" and ensure there is an empty line both above and below each fenced code
block (the bash example and any other ``` blocks) so the README renders
correctly across parsers; edit the README.md to insert these blank lines around
the specified headings and around every fenced code block (e.g., the bash
example) to fix the spacing issues.
In `@tests/conftest.py`:
- Around line 17-22: The current fallback COMFYUI_ROOT =
os.environ.get("COMFYUI_ROOT") or str(Path.home() / "ComfyUI") uses a hard-coded
default; remove that fallback and make COMFYUI_ROOT strictly come from the
environment (os.environ.get("COMFYUI_ROOT")), and if it's missing raise a clear
error (RuntimeError or sys.exit with a descriptive message) instead of
defaulting; keep the sys.path insertion (sys.path.insert(0, COMFYUI_ROOT)) and
the environment export (os.environ.setdefault("COMFYUI_ROOT", COMFYUI_ROOT))
only after the presence check so child processes still see the variable.
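
A minimal sketch of the strict environment check (`require_comfyui_root` is an illustrative name):

```python
import os
import sys

def require_comfyui_root() -> str:
    """Fail fast when COMFYUI_ROOT is not set; no hard-coded fallback."""
    root = os.environ.get("COMFYUI_ROOT")
    if not root:
        raise RuntimeError(
            "COMFYUI_ROOT must be set in the environment to run these tests"
        )
    # Only after the presence check: path insertion and export for children.
    sys.path.insert(0, root)
    os.environ.setdefault("COMFYUI_ROOT", root)
    return root
```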
In `@tests/fixtures/test_adapter.py`:
- Around line 21-26: Tests and examples use hard-coded "/tmp" paths; update the
fixture to require an explicit, platform-agnostic root_path (or accept one from
the test harness) instead of defaulting to "/tmp/myapp" and change example docs
to "/path/to/...". Specifically, modify MockHostAdapter instantiation and any
calls to get_path_config (e.g., MockHostAdapter(root_path=...),
adapter.get_path_config(...)) so tests provide a supplied root_path (or use
tempfile.gettempdir()/pathlib.Path from the harness) and update the example
return/comment to use "/path/to/..." rather than "/tmp".
In `@tests/test_extension_lifecycle.py`:
- Around line 24-76: The tests build ExtensionConfig dicts (e.g., in
test_config_requires_name, test_config_requires_module_path,
test_config_with_dependencies, test_config_share_torch) but omit required
TypedDict fields (sandbox, sandbox_mode, env), causing type-check failures;
update each ExtensionConfig literal to include sandbox (e.g., False),
sandbox_mode (e.g., "none" or appropriate default), and env (e.g., {}), and
mirror the same additions for the other test blocks mentioned so all
ExtensionConfig dicts contain all 10 required keys.
In `@tests/test_extension_safety.py`:
- Around line 5-6: Replace the indirect import "from pyisolate._internal import
host" with direct imports from the defining module
"pyisolate._internal.environment": import the specific safety helper symbols
used in this test (the helpers referenced via host in
tests/test_extension_safety.py) from environment and update usages to call them
directly (remove host.<helper> indirection); this keeps the test coupled to the
helpers' real module rather than a re-export.
♻️ Duplicate comments (71)
benchmarks/benchmark.py (4)
39-42: Call the ProxiedSingleton initializer.
`DatabaseSingleton` should invoke the base initializer to ensure singleton setup runs.

🔧 Proposed fix

```diff
 class DatabaseSingleton(ProxiedSingleton):
     """Simple dictionary-based singleton for testing state."""

     def __init__(self):
+        super().__init__()
         self._db = {}
```
83-88: Child-side DatabaseSingleton also needs base init.

Keep child-side singleton initialization consistent with the host.

🔧 Proposed fix

```diff
 class DatabaseSingleton(ProxiedSingleton):
     def __init__(self):
+        super().__init__()
         self._db = {}

     async def set_value(self, key, value): pass
     async def get_value(self, key): pass
```
296-301: Fix closure capture of loop variable.

The closure currently binds `data_val` by reference, so all benchmarks use the last value.

🔧 Proposed fix

```diff
 for data_name, data_val in test_data:
     bench_name = f"{name}_{data_name}"
-    async def func():
-        return await ext.do_stuff(data_val)
+    async def func(val=data_val):
+        return await ext.do_stuff(val)
```
348-351: Return and propagate the benchmark exit code.
`main()` discards the run result, so the process always exits 0 on success/failure.

🔧 Proposed fix

```diff
-    asyncio.run(run_benchmarks(args.quick, args.no_torch, args.no_gpu, args.torch_mode))
+    return asyncio.run(run_benchmarks(args.quick, args.no_torch, args.no_gpu, args.torch_mode))

 if __name__ == "__main__":
-    main()
+    sys.exit(main() or 0)
```

benchmarks/benchmark_harness.py (1)
69-98: `load_extensions` is a no-op and silently returns nothing.

Benchmarks calling this will do no work. Implement real loading or raise a clear error.

🔧 Proposed implementation

```diff
 async def load_extensions(self, extension_configs: list[dict], extension_base_cls) -> list:
     """Load extensions defined in configs."""
     config = ExtensionManagerConfig(venv_root_path=str(self.test_root / "extension-venvs"))
     self.manager = ExtensionManager(extension_base_cls, config)

     loaded_extensions = []
     for cfg in extension_configs:
         name = cfg["name"]
-        # Config might be passed as simple dict
-
-        # Reconstruct dependencies if not passed mostly for existing pattern in benchmark.py
-        # But create_extension handles writing to disk. loading needs ExtensionConfig object.
-
-        # This is slightly tricky because creation and loading are split in benchmark.py
-        # I'll rely on the caller to pass correct params or infer them?
-        # Actually benchmark.py logic: create_extension then load_extensions loop.
-
-        # Since we know the path structure from create_extension:
-        module_path = str(self.test_root / "extensions" / name)
-
-        # NOTE: benchmark.py passed deps to create_extension but strangely not to load_extensions
-        # We must pass them here to ExtensionConfig.
-        # Ideally load_extensions accepts full config objects or we recreate them.
-        # I will adapt this to match what benchmark.py expects or refactor benchmark.py to iterate.
-
-        # Simpler approach: Allow caller to just use manager directly if they want,
-        # or provide a helper that does what benchmark.py did (but correctly).
-        pass
-
-    return loaded_extensions  # placeholder, I will implement explicit loading in the script
+        module_path = cfg.get("module_path") or str(self.test_root / "extensions" / name)
+        ext_cfg = ExtensionConfig(
+            name=name,
+            module_path=module_path,
+            isolated=cfg.get("isolated", True),
+            dependencies=cfg.get("dependencies", []),
+            apis=cfg.get("apis", []),
+            share_torch=cfg.get("share_torch", False),
+        )
+        loaded_extensions.append(self.manager.load_extension(ext_cfg))
+
+    return loaded_extensions
```

.github/workflows/windows.yml (1)
17-17: Documentation mismatch with CI matrix.

Python 3.9 has been removed from the CI test matrix, but README files may still reference "Python 3.9+" support. Ensure documentation reflects the actual minimum supported version (3.10).
tests/conftest.py (1)
34-47: Redundant manual restoration and unused attribute.

The `monkeypatch` fixture automatically reverts `setattr` calls on teardown, making the explicit restoration on line 47 unnecessary. Additionally, `_orig_launch` (line 35) is only used for this redundant restoration and could be removed along with it.

♻️ Suggested simplification

```diff
-    original_launch = host_internal.Extension._Extension__launch
-    host_internal.Extension._orig_launch = original_launch  # type: ignore[attr-defined]
-
     def dummy_launch(self):
         return SimpleNamespace(
             is_alive=lambda: False,
@@ -45,4 +42,3 @@
     monkeypatch.setattr(host_internal.Extension, "_Extension__launch", dummy_launch)
     yield
-    monkeypatch.setattr(host_internal.Extension, "_Extension__launch", original_launch)
```

tests/test_torch_utils_additional.py (2)
6-16: Add cache cleanup in teardown to prevent test pollution.

The cache is cleared before the test, but the mocked result remains cached after the test completes. This can cause test order-dependent failures if other tests rely on the real `get_torch_ecosystem_packages` behavior.

♻️ Suggested fix using try/finally

```diff
 def test_get_torch_ecosystem_packages_includes_distributions(monkeypatch):
     def fake_distributions():
         meta = SimpleNamespace(metadata={"Name": "nvidia-cublas"})
         meta2 = SimpleNamespace(metadata={"Name": "torch-hub"})
         return [meta, meta2]

     torch_utils.get_torch_ecosystem_packages.cache_clear()
     monkeypatch.setattr(torch_utils.importlib_metadata, "distributions", fake_distributions)
-    pkgs = torch_utils.get_torch_ecosystem_packages()
-    assert "nvidia-cublas" in pkgs
-    assert "torch-hub" in pkgs
+    try:
+        pkgs = torch_utils.get_torch_ecosystem_packages()
+        assert "nvidia-cublas" in pkgs
+        assert "torch-hub" in pkgs
+    finally:
+        torch_utils.get_torch_ecosystem_packages.cache_clear()
```
19-26: Same cache cleanup issue applies here.

Apply the same `try/finally` pattern to clear the cache after the test completes.

README_COMFYUI.md (6)
38-42: Fix typo: `git install .` should be `pip install .`

This typo will cause installation to fail for users following the documentation.

📝 Proposed fix

```diff
 Clone from pollockjj's repo:

 git clone https://github.com/pollockjj/pyisolate
 cd pyisolate
-git install .
+pip install .
```
141-143: Fix typo: `pysiolate.yaml` should be `pyisolate.yaml`

The manifest filename is misspelled.

📝 Proposed fix

```diff
 ✅ **Standard Python code execution:**
-- Any standard Python code inside node functions using Comfy standard imports and each custom_node's pysiolate.yaml's dependencies
+- Any standard Python code inside node functions using Comfy standard imports and each custom_node's pyisolate.yaml's dependencies
```
202-202: Broken internal link: the `#appendix-supported-apis` section does not exist.

Either add the referenced appendix section or remove/update this link.
244-244: Broken internal link: the `#advanced-web-routes` section does not exist.

The workaround references a section that isn't present in the document.
305-307: Fix broken link fragment and improve troubleshooting guidance.

The link `#what-doesnt-work-yet` should be `#what-doesnt-work` to match the actual heading on line 233.

📝 Proposed fix

```diff
 ### Node works non-isolated but fails isolated

-**Cause:** Likely using a pattern that doesn't work with isolation (see [What Doesn't Work](#what-doesnt-work-yet)).
+**Cause:** Likely using a pattern that doesn't work with isolation (see [What Doesn't Work](#what-doesnt-work)).

 **Fix:** Check logs for specific error, review the node's `__init__.py` for module-level side effects.
```
82-91: Add language specifiers to code blocks for proper syntax highlighting.

Several log output blocks lack language specifiers. Adding `text` or `log` improves readability.

📝 Example fix

````diff
 PyIsolate and internal functions that use it use a "][" as log prefix.

-```
+```text
 ][ ComfyUI-IsolationTest cache miss, spawning process for metadata  # First run or cache invalidation
````

Apply similar changes to the code blocks at lines 95 and 117.

tests/integration_v2/test_tensors.py (1)
tests/integration_v2/test_tensors.py (1)
1-9: Guard the torch import with `pytest.importorskip()` to prevent collection failures.

The unconditional `import torch` on line 2 will crash test collection in environments without torch, before the `skipif` decorators can take effect. Use `pytest.importorskip()` for consistency with how numpy is guarded.

📝 Proposed fix

```diff
 import pytest
-import torch
+torch = pytest.importorskip("torch")

 try:
     import numpy as np  # noqa: F401
```

tests/test_rpc_shutdown.py (2)
37-65: Remove unused `BlockingMockTransport` class.

This helper is not referenced anywhere in the test file or codebase and adds unnecessary code.

♻️ Proposed cleanup

```diff
-class BlockingMockTransport(RPCTransport):
-    """Transport that allows controlling recv blocking."""
-
-    def __init__(self):
-        self.recv_queue = asyncio.Queue()
-        self.closed = False
-
-    def send(self, obj):
-        pass
-
-    def recv(self):
-        # This will be called in a thread
-        if self.closed:
-            raise ConnectionError("Closed")
-        # Block until item available
-        # Since we can't easily block in a non-async way without
-        # actual threading primitives, we'll just simulate a quick
-        # loop check or similar.
-        # But actually, the RPC implementation calls transport.recv()
-        # which is synchronous.
-        import time
-
-        while not self.closed:
-            time.sleep(0.01)
-        raise ConnectionError("Closed during block")
-
-    def close(self):
-        self.closed = True
-
```
112-113: Remove duplicate assertion.

Lines 112 and 113 assert the identical condition. Keep only one.

♻️ Proposed cleanup

```diff
 assert len(debug_logs) > 0, f"Should have logged debug message. Got: {all_rpc_logs}"
 assert "Socket closed" in debug_logs[0].message
-assert "Socket closed" in debug_logs[0].message
```

pyisolate/path_helpers.py (1)
77-82: Hardcoded ComfyUI subdirectories still present.

Per PR objectives item #3, the ComfyUI root detection should be made configurable. These hardcoded subdirectory names couple this module to ComfyUI structure. This was flagged in a previous review.

README.md (2)
11-11: Python version requirement inconsistent with pyproject.toml.

README states Python 3.9+ but `pyproject.toml` requires `>=3.10`. This was flagged in a previous review.
79-80: Documentation links still have issues.

Line 79 has a bare URL that should be a markdown link, and line 80 has `mysolate/` which appears to be a typo for `pyisolate/`. These were flagged in a previous review.

pyisolate/config.py (1)
54-71: New ExtensionConfig keys should be optional.

The fields `apis`, `share_torch`, `share_cuda_ipc`, `sandbox`, `sandbox_mode`, and `env` are defined as required keys, but runtime code handles them as optional (e.g., using `.get()` with defaults). This causes type errors when constructing configs that omit these fields.

Mark these as `NotRequired` using `typing_extensions`. This was flagged in a previous review.

pyisolate/_internal/sandbox.py (2)
148-155: Confusing CUDA path skip logic.

The comment and condition are contradictory. It says "Skip if already covered by /usr bind" but then the condition explicitly excludes `/usr/local/` paths. This was flagged in a previous review.
268-269: Duplicate entry in CUDA environment variable list.

`PYISOLATE_ENABLE_CUDA_IPC` appears twice. This was flagged in a previous review.

tests/test_remote_handle.py (1)
9-31: LGTM: Basic contract tests are correct.

The tests verify the fundamental `RemoteObjectHandle` behavior. Consider adding a pickle round-trip test as suggested in previous reviews, since the class explicitly sets `__module__` for pickle compatibility.

Optional: Add pickle round-trip test

```python
import pickle

def test_remote_handle_pickle_roundtrip(self):
    """RemoteObjectHandle should survive pickle round-trip."""
    handle = RemoteObjectHandle("obj_456", "SomeType")
    pickled = pickle.dumps(handle)
    restored = pickle.loads(pickled)
    assert restored.object_id == handle.object_id
    assert restored.type_name == handle.type_name
```

pyisolate/_internal/rpc_transports.py (2)
93-94: `ConnectionTransport.recv()` still lacks thread-safety protection.

The `RPCTransport` protocol explicitly requires "thread-safe send/recv operations" (line 39), and `send()` is protected with `self._lock`, but `recv()` remains unprotected. Since `multiprocessing.Connection` is not thread-safe for concurrent reads, multiple threads calling `recv()` will race on the underlying socket.

Proposed fix

```diff
 class ConnectionTransport:
     def __init__(self, conn: Connection) -> None:
         self._conn = conn
         self._lock = threading.Lock()
+        self._recv_lock = threading.Lock()

     def recv(self) -> Any:
-        return self._conn.recv()
+        with self._recv_lock:
+            return self._conn.recv()
```
369-417: Arbitrary class instantiation from untrusted JSON creates RCE vulnerability.

The `__pyisolate_object__` handler performs `importlib.import_module(module_name)` and `getattr(module, type_name)` followed by `cls()` or `cls.__new__(cls)` using values from child process JSON. A malicious child can craft payloads to instantiate dangerous classes (e.g., classes with side effects in `__init__` or `__new__`).

This reintroduces the pickle-style RCE that `JSONSocketTransport` was designed to prevent (see docstring at lines 104-105).

Proposed hardening: Use registry-only deserialization

```diff
 if dct.get("__pyisolate_object__"):
-    import importlib
-
     data = dct.get("data", {})
     module_name = dct.get("module")
     type_name = dct.get("type")

-    # Try to reconstruct the original class
-    if module_name and type_name:
-        try:
-            module = importlib.import_module(module_name)
-            cls = getattr(module, type_name, None)
-            if cls is not None:
-                # ... dangerous instantiation logic ...
-        except Exception:
-            pass
+    # Only allow explicitly registered deserializers
+    from .serialization_registry import SerializerRegistry
+    registry = SerializerRegistry.get_instance()
+    deserializer = registry.get_deserializer(type_name) if type_name else None
+    if deserializer:
+        try:
+            return deserializer(dct)
+        except Exception as e:
+            logger.warning("Failed to deserialize %s: %s", type_name, e)

     # Fallback: return SimpleNamespace with metadata
     ns = SimpleNamespace(**data)
     ns.__pyisolate_type__ = type_name
     ns.__pyisolate_module__ = module_name
     return ns
```

pyisolate/_internal/host.py (3)
197-200: Unconditional `torch` import will crash when torch is not installed.

Lines 198-200 unconditionally import torch and call `torch.multiprocessing.set_sharing_strategy()`. When `share_torch=False` and torch isn't installed, this crashes the host process. The import should be guarded.

Proposed fix

```diff
-        # Ensure file_system strategy for CPU tensors
-        import torch
-
-        torch.multiprocessing.set_sharing_strategy("file_system")
+        # Ensure file_system strategy for CPU tensors (only when sharing torch)
+        if self.config.get("share_torch", False):
+            try:
+                import torch
+                torch.multiprocessing.set_sharing_strategy("file_system")
+            except ImportError:
+                raise RuntimeError("share_torch=True requires torch to be installed")
```
300-308: `tempfile.mktemp()` is deprecated and creates a TOCTOU vulnerability.

The use of `mktemp()` at line 302 creates a race condition where another process can create a file at the returned path before `socket.bind()`. This is flagged by static analysis (CWE-377).

Safer approach using secure directory

```diff
 if has_af_unix():
     run_dir = ensure_ipc_socket_dir()
-    uds_path = tempfile.mktemp(prefix="ext_", suffix=".sock", dir=str(run_dir))
+    # Create socket path in secure directory (ensure_ipc_socket_dir returns 0o700 dir)
+    import uuid
+    uds_path = os.path.join(str(run_dir), f"ext_{uuid.uuid4().hex}.sock")
     listener_sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
     listener_sock.bind(uds_path)
```
485-487: `self.proc.join()` uses the wrong API for `subprocess.Popen`.

`self.proc` is a `subprocess.Popen` instance (created at line 420), not a `multiprocessing.Process`. `Popen` doesn't have a `join()` method, so this will raise `AttributeError` at runtime. Use `wait()` instead.

Proposed fix

```diff
 def join(self) -> None:
     """Join the child process, blocking until it exits."""
-    self.proc.join()
+    self.proc.wait()
```

pyisolate/_internal/sandbox_detect.py (1)
136-174: Hard-coded paths in `_test_bwrap` may fail on non-standard Linux layouts.

The bwrap test assumes `/lib`, `/lib64`, `/bin`, `/usr`, and `/usr/bin/true` exist. Some minimal or merged-usr systems may have different layouts (e.g., no separate `/lib64`, or `true` at `/bin/true`).

Suggested improvement for broader compatibility

```diff
 def _test_bwrap(bwrap_path: str) -> tuple[bool, str]:
     try:
+        # Build bind paths dynamically based on what exists
+        bind_args = []
+        for path in ["/usr", "/bin", "/lib", "/lib64"]:
+            if os.path.exists(path):
+                bind_args.extend(["--ro-bind", path, path])
+
+        # Find a true binary that exists
+        true_binary = shutil.which("true") or "/usr/bin/true"
+
         result = subprocess.run(
             [
                 bwrap_path,
                 "--unshare-user-try",
                 "--dev", "/dev",
                 "--proc", "/proc",
-                "--ro-bind", "/usr", "/usr",
-                "--ro-bind", "/bin", "/bin",
-                "--ro-bind", "/lib", "/lib",
-                "--ro-bind", "/lib64", "/lib64",
-                "/usr/bin/true",
+                *bind_args,
+                true_binary,
             ],
```

tests/harness/test_package/__init__.py (1)
29-32: Test initialization marker using `sys.modules` is unconventional.

Using `sys.modules["_test_ext_initialized"] = True` as a test marker works but could be cleaner with a module-level variable. However, for test code this is acceptable.

tests/integration_v2/conftest.py (1)
6-13: Ensure cleanup runs even if setup fails.
`host.setup()` is outside the `try`, so a setup exception skips `host.cleanup()`. Move setup inside the `try` to guarantee cleanup.

🛠️ Proposed fix

```diff
 @pytest.fixture
 async def reference_host():
     """Provides a ReferenceHost instance."""
     host = ReferenceHost()
-    host.setup()
     try:
+        host.setup()
         yield host
     finally:
         await host.cleanup()
```

tests/integration_v2/debug_rpc.py (1)
6-8: Avoid module-level logging configuration.

Configuring logging at import time mutates global state for the entire test session. Move it into the test (or a fixture) so it only affects this test.

♻️ Suggested change

```diff
-# Configure logging to see what's happening
-logging.basicConfig(level=logging.DEBUG, stream=sys.stderr)
-logging.getLogger("pyisolate").setLevel(logging.DEBUG)
-
 @pytest.mark.asyncio
 async def test_debug_ping(reference_host):
     print("\n--- Starting Debug Ping ---")
+    logging.basicConfig(level=logging.DEBUG, stream=sys.stderr)
+    logging.getLogger("pyisolate").setLevel(logging.DEBUG)
     ext = reference_host.load_test_extension("debug_ping", isolated=True)
```

tests/test_serialization_registry.py (1)
1-12: Isolate singleton registry state across tests.
`SerializerRegistry` is global; without a fixture, handlers can leak between tests and cause order dependence. Add an autouse fixture that clears before/after each test.

🧹 Proposed fix

```diff
+import pytest
 from pyisolate._internal.serialization_registry import SerializerRegistry

+@pytest.fixture(autouse=True)
+def _clear_registry():
+    registry = SerializerRegistry.get_instance()
+    registry.clear()
+    yield
+    registry.clear()
+
 def test_singleton_identity():
     r1 = SerializerRegistry.get_instance()
     r2 = SerializerRegistry.get_instance()
     assert r1 is r2
```

tests/test_config_validation.py (1)
35-45: Add required ExtensionConfig fields (sandbox, sandbox_mode, env).
`ExtensionConfig` is a total `TypedDict` and includes `sandbox`, `sandbox_mode`, and `env`. Omitting them will fail type checks and diverges from the real config shape. Add these keys to every config in this file.

🛠️ Example fix (apply to all ExtensionConfig instances)

```diff
-from pyisolate import ExtensionConfig, ExtensionManagerConfig
+from pyisolate import ExtensionConfig, ExtensionManagerConfig, SandboxMode
@@
     config: ExtensionConfig = {
         "name": "",  # Invalid but TypedDict doesn't enforce
         "module_path": "/path/to/ext",
         "isolated": True,
         "dependencies": [],
         "apis": [],
         "share_torch": False,
         "share_cuda_ipc": False,
+        "sandbox": {},
+        "sandbox_mode": SandboxMode.REQUIRED,
+        "env": {},
     }
```

tests/test_serialization_contract.py (1)
210-225: Prefer a runtime protocol check over manual attribute checks.

Since the protocol is runtime-checkable, a single `isinstance` assertion is cleaner and more idiomatic.

Please verify this aligns with your typing/Protocol usage expectations.

♻️ Proposed simplification

```diff
+from pyisolate.interfaces import SerializerRegistryProtocol
+
 def test_registry_matches_protocol(self):
     """SerializerRegistry implements SerializerRegistryProtocol."""
     registry = SerializerRegistry.get_instance()
-
-    # Check protocol methods exist
-    assert hasattr(registry, "register")
-    assert hasattr(registry, "get_serializer")
-    assert hasattr(registry, "get_deserializer")
-    assert hasattr(registry, "has_handler")
-
-    # Check methods are callable
-    assert callable(registry.register)
-    assert callable(registry.get_serializer)
-    assert callable(registry.get_deserializer)
-    assert callable(registry.has_handler)
+    assert isinstance(registry, SerializerRegistryProtocol)
```

tests/test_host_integration.py (3)
4-14: Remove hard-coded `/tmp/ComfyUI` default.

Make `preferred_root` injectable so tests can pass a temp path instead of a fixed location.

🔧 Suggested change

```diff
 class FakeAdapter:
     identifier = "fake"

-    def __init__(self, preferred_root="/tmp/ComfyUI"):
+    def __init__(self, preferred_root: str):
         self.preferred_root = preferred_root

-    def get_path_config(self, module_path):
+    def get_path_config(self, module_path: str) -> dict[str, object]:
         return {
             "preferred_root": self.preferred_root,
             "additional_paths": [f"{self.preferred_root}/custom_nodes"],
         }
```
29-40: Use `tmp_path` instead of hard-coded `/tmp` paths in tests.

This remains a blocking item per PR objectives.

🔧 Proposed fix

```diff
-def test_build_extension_snapshot_includes_adapter(monkeypatch):
+def test_build_extension_snapshot_includes_adapter(monkeypatch, tmp_path):
     from pyisolate._internal.adapter_registry import AdapterRegistry

-    monkeypatch.setattr(AdapterRegistry, "get", lambda: FakeAdapter())
+    fake_root = tmp_path / "ComfyUI"
+    fake_root.mkdir()
+    monkeypatch.setattr(AdapterRegistry, "get", lambda: FakeAdapter(str(fake_root)))

-    snapshot = host.build_extension_snapshot("/tmp/ComfyUI/custom_nodes/demo")
+    module_path = fake_root / "custom_nodes" / "demo"
+    module_path.mkdir(parents=True)
+    snapshot = host.build_extension_snapshot(str(module_path))

     assert "sys_path" in snapshot
     assert snapshot["adapter_name"] == "fake"
     assert snapshot["preferred_root"].endswith("ComfyUI")
     assert snapshot.get("additional_paths")
-    assert snapshot.get("context_data", {}).get("module_path") == "/tmp/ComfyUI/custom_nodes/demo"
+    assert snapshot.get("context_data", {}).get("module_path") == str(module_path)
```
43-48: Replace hard-coded `/tmp/nowhere` with `tmp_path`.

Keeps tests isolated and cross-platform.

🔧 Proposed fix

```diff
-def test_build_extension_snapshot_no_adapter(monkeypatch):
+def test_build_extension_snapshot_no_adapter(monkeypatch, tmp_path):
     from pyisolate._internal.adapter_registry import AdapterRegistry

     monkeypatch.setattr(AdapterRegistry, "get", lambda: None)

-    snapshot = host.build_extension_snapshot("/tmp/nowhere")
+    snapshot = host.build_extension_snapshot(str(tmp_path / "nowhere"))
```

tests/integration_v2/test_isolation.py (3)
12-45: Sensitive file is created but never exercised.

Either test access to `sensitive_path` via the proxy or remove the setup.

🔧 Example: assert access is denied

```diff
     # 3. Attempt to read/write sensitive file from child
+    try:
+        await proxy.read_file(sensitive_path)
+        read_succeeded = True
+    except Exception:
+        read_succeeded = False
+
+    assert not read_succeeded, "Child should NOT be able to read host temp files"
```
22-38: Remove debug/thinking comments from the test.

These should be replaced with a short intent comment or removed entirely.
Also applies to: 47-51
39-43: Avoid catching bare `Exception` in filesystem isolation tests.

Catch only expected filesystem errors so unrelated failures surface.

Please verify the exact exception types raised by `proxy.write_file` to keep this targeted.

🔧 Suggested update

```diff
-    except Exception:
+    except (PermissionError, OSError):
         write_succeeded = False
```

```diff
-    except Exception:
+    except (PermissionError, OSError):
         write_success = False
```

Also applies to: 66-70
tests/test_path_helpers_contract.py (2)
54-65: Simplify host-path membership assertion. The current `any(os.path.normpath(path) in os.path.normpath(r) ...)` substring check is harder to read and can yield false positives.

♻️ Proposed simplification
```diff
-    for path in host:
-        # Paths may be normalized
-        assert any(os.path.normpath(path) in os.path.normpath(r) for r in result)
+    result_normalized = {os.path.normpath(r) for r in result}
+    for path in host:
+        assert os.path.normpath(path) in result_normalized, f"{path} not in result"
```
67-78: Assert extra paths are actually present. A length-based check can pass even if extra paths are missing.
🔧 Proposed fix
```diff
-    # Extra paths should appear somewhere
-    assert len(result) >= len(host)
+    result_normalized = {os.path.normpath(r) for r in result}
+    for path in extra:
+        assert os.path.normpath(path) in result_normalized, f"Extra path {path} not in result"
```

tests/test_bootstrap_additional.py (1)
9-20: Ensure `sys.path` is restored on failure. Wrap mutation/assertions in `try/finally` so cleanup always runs.

🛡️ Safer cleanup
```diff
     (tmp_path / "extra").mkdir(parents=True, exist_ok=True)

-    bootstrap._apply_sys_path(snapshot)
-    assert sys.path[0] == str(tmp_path)
-    assert sys.path[1] == str(tmp_path / "extra")
-    sys.path[:] = original
+    try:
+        bootstrap._apply_sys_path(snapshot)
+        assert sys.path[0] == str(tmp_path)
+        assert sys.path[1] == str(tmp_path / "extra")
+    finally:
+        sys.path[:] = original
```

tests/integration_v2/test_lifecycle.py (2)
6-10: Fix typo in docstring.
"actally" → "actually".

✅ Proposed fix
```diff
-    1. Extension can actally accept a connection.
+    1. Extension can actually accept a connection.
```
25-41: Test name contradicts behavior.
The test name/documentation implies non-isolated mode, but it runs with `isolated=True`.

✅ Proposed fix (rename to match behavior)
```diff
-async def test_non_isolated_lifecycle(reference_host):
+async def test_isolated_without_torch_sharing(reference_host):
     """
-    Verifies standard mode (host-loaded) works with same API.
+    Verifies isolated mode works without torch sharing.
     """
-    # Note: ReferenceHost.load_test_extension creates an Extension object which
-    # uses pyisolate's Extension class. For non-isolated, we need to ensure local
-    # execution path works if intended, BUT pyisolate's Extension class primarily
-    # facilitates the isolated path.
-    # If we pass isolated=False, we might need to check if ReferenceHost/Extension
-    # handles that logic (using pyisolate.host.Extension logic).
-
-    # In pyisolate.host.Extension usually assumes launching via _initialize_process.
-    # If standard mode is just loading mocking, we might not test it here.
-    # But let's test isolated=True with share_torch=False
-
     ext = reference_host.load_test_extension("no_torch_share", isolated=True, share_torch=False)
```
47-55: Avoid hard-coded module paths in tests.
Use the `tmp_path` fixture to build a portable path.

✅ Proposed fix
```diff
 def base_config(tmp_path):
     return {
         "name": "demo",
-        "module_path": "/tmp/mod.py",
+        "module_path": str(tmp_path / "mod.py"),
         "dependencies": [],
         "share_torch": True,
         "share_cuda_ipc": False,
         "apis": [],
         "venv_root_path": str(tmp_path),
     }
```
34-43: Harden permissions when the directory already exists.
`mkdir(..., mode=0o700)` won’t fix permissive pre-existing dirs.

🔒 Proposed fix
```diff
 def ensure_ipc_socket_dir() -> Path:
     """Create and return the IPC socket directory with appropriate permissions."""
     socket_dir = get_ipc_socket_dir()

     if os.name == "nt":
         # Windows: mkdir without mode (permissions handled by OS)
         socket_dir.mkdir(parents=True, exist_ok=True)
     else:
         # Linux/Unix: Secure permissions
         socket_dir.mkdir(parents=True, exist_ok=True, mode=0o700)
+        current_mode = socket_dir.stat().st_mode & 0o777
+        if current_mode != 0o700:
+            socket_dir.chmod(0o700)

     return socket_dir
```

tests/path_unification/test_path_helpers.py (2)
43-45: Specify UTF‑8 when reading snapshot JSON. The writer uses UTF‑8, so the test should read with the same encoding to avoid platform‑default differences.
Proposed fix
```diff
-    with open(output_path) as f:
+    with open(output_path, encoding="utf-8") as f:
         loaded = json.load(f)
```
96-106: Make COMFYUI_ROOT deterministic in tests. Using `Path.home() / "ComfyUI"` makes tests environment‑dependent and violates the "no hard‑coded paths" objective. Prefer a tmp directory (via `tmp_path` fixture) and set `COMFYUI_ROOT` explicitly so the path exists and is stable across CI/dev.

Example pattern (apply across the affected tests)
```diff
+import pytest
+
+@pytest.fixture
+def comfy_root(tmp_path, monkeypatch):
+    root = tmp_path / "ComfyUI"
+    root.mkdir()
+    monkeypatch.setenv("COMFYUI_ROOT", str(root))
+    return str(root)
@@
-    def test_inserts_comfy_root_first_when_missing(self):
+    def test_inserts_comfy_root_first_when_missing(self, comfy_root):
         host = ["/host/lib1", "/host/lib2"]
         extras = ["/venv/lib"]
-        comfy_root = os.environ.get("COMFYUI_ROOT") or str(Path.home() / "ComfyUI")
```

Also applies to: 107-149, 218-239
tests/test_host_internal_ext.py (3)
64-80: Use tmp_path for DummyExtension.module_path.
`/tmp/mod.py` is platform-specific and conflicts with the "no hard‑coded paths" objective.

Proposed fix
```diff
-        super().__init__(
-            module_path="/tmp/mod.py",
+        super().__init__(
+            module_path=str(tmp_path / "mod.py"),
             extension_type=SimpleNamespace,
             config=base_config,
             venv_root_path=str(tmp_path),
         )
```
88-90: Don’t short‑circuit `_install_dependencies` in tests. The no‑op override prevents `test_install_dependencies_*` from exercising real logic.

Proposed fix
```diff
     def _install_dependencies(self):
-        return
+        return super()._install_dependencies()
```
219-224: Derive venv layout from platform + Python version. Hard‑coding `python3.12` and `bin/python` breaks on other versions and Windows.

Proposed fix (apply to each venv layout block)
```diff
-        site_packages = venv_path / "lib" / "python3.12" / "site-packages"
+        pyver = f"python{sys.version_info.major}.{sys.version_info.minor}"
+        lib_dir = "Lib" if sys.platform == "win32" else "lib"
+        site_packages = venv_path / lib_dir / pyver / "site-packages"
@@
-        python_exe = venv_path / "bin" / "python"
+        bin_dir = "Scripts" if sys.platform == "win32" else "bin"
+        python_exe = venv_path / bin_dir / ("python.exe" if sys.platform == "win32" else "python")
```

Also applies to: 241-244, 260-263
tests/test_shared_additional.py (1)
56-86: Reset the event loop after closing. A closed loop left as the current loop can leak into later tests.
Proposed fix
```diff
-    loop.close()
+    loop.close()
+    asyncio.set_event_loop(None)
```

Also applies to: 88-118
pyisolate/_internal/model_serialization.py (1)
23-62: Compute CUDA IPC flag at call time. Reading `PYISOLATE_ENABLE_CUDA_IPC` at import time can silently ignore later env changes.

Proposed fix
```diff
-_cuda_ipc_enabled = sys.platform == "linux" and os.environ.get("PYISOLATE_ENABLE_CUDA_IPC") == "1"
+def _cuda_ipc_enabled() -> bool:
+    return sys.platform == "linux" and os.environ.get("PYISOLATE_ENABLE_CUDA_IPC") == "1"
@@
-    if _cuda_ipc_enabled:
+    if _cuda_ipc_enabled():
         return data
```

pyisolate/_internal/serialization_registry.py (2)
26-31: Make get_instance thread-safe (or document single-threaded assumption).

Thread-safe option
```diff
+import threading
@@
 class SerializerRegistry:
     ...
     _instance: SerializerRegistry | None = None
+    _lock = threading.Lock()
@@
     def get_instance(cls) -> SerializerRegistry:
         """Return the singleton instance, creating it if necessary."""
         if cls._instance is None:
-            cls._instance = cls()
+            with cls._lock:
+                if cls._instance is None:
+                    cls._instance = cls()
         return cls._instance
```
40-45: Clear stale deserializer when re-registering.

Proposed fix
```diff
         if type_name in self._serializers:
             logger.debug("Overwriting existing serializer for %s", type_name)
+            self._deserializers.pop(type_name, None)
```

tests/test_extension_lifecycle.py (2)
116-117: Add explicit UTF‑8 encoding when reading YAML. Prevents platform‑dependent decoding issues.
🔧 Proposed fix
```diff
-    with open(manifest_path) as f:
+    with open(manifest_path, encoding="utf-8") as f:
         manifest = yaml.safe_load(f)
@@
-    with open(manifest_path) as f:
+    with open(manifest_path, encoding="utf-8") as f:
         manifest = yaml.safe_load(f)
```

Also applies to: 133-134
171-202: Contract-only lifecycle tests should be backed by real assertions.These read as documentation but don’t validate behavior. Consider integration-style tests that exercise Extension/ExtensionManager lifecycle.
pyisolate/interfaces.py (1)
63-64: Clarify that the hook is required but may be a no‑op. The Protocol requires implementation; "optional" is misleading.
📝 Proposed fix
```diff
-    """Optional post-registration hook for API-specific setup."""
+    """Post-registration hook for API-specific setup (can be a no-op)."""
```

pyisolate/_internal/client.py (1)
110-118: Normalize module_path before deriving module name.
`os.path.basename()` returns an empty string for trailing slashes; normalize first and guard against empty names.

🧭 Proposed fix
```diff
-    sys_module_name = os.path.basename(module_path).replace("-", "_").replace(".", "_")
-    module_spec = importlib.util.spec_from_file_location(
-        sys_module_name, os.path.join(module_path, "__init__.py")
-    )
+    normalized_module_path = os.path.normpath(module_path)
+    sys_module_name = (
+        os.path.basename(normalized_module_path)
+        .replace("-", "_")
+        .replace(".", "_")
+    )
+    if not sys_module_name:
+        raise ValueError(f"Unable to derive module name from path: {module_path}")
+    module_spec = importlib.util.spec_from_file_location(
+        sys_module_name, os.path.join(normalized_module_path, "__init__.py")
+    )
```

pyisolate/_internal/uds_client.py (1)
145-169: Guard `torch` import and sharing-strategy setup. The unconditional `import torch` will crash in environments without torch even when `share_torch` is false. Guard the import and only fail when `share_torch` is enabled; reuse the same import for `inference_mode()`.

🔧 Proposed fix
```diff
-    # Ensure file_system strategy for CPU tensors
-    import torch
-
-    torch.multiprocessing.set_sharing_strategy("file_system")
+    # Ensure file_system strategy for CPU tensors
+    torch_mod = None
+    try:
+        import torch as _torch
+        _torch.multiprocessing.set_sharing_strategy("file_system")
+        torch_mod = _torch
+    except ImportError:
+        if config.get("share_torch", False):
+            raise
+        logger.debug("torch not available; skipping sharing strategy setup")
@@
-    if config.get("share_torch", False):
-        import torch
-
-        context = cast(ContextManager[Any], torch.inference_mode())
+    if config.get("share_torch", False):
+        if torch_mod is None:
+            raise RuntimeError("share_torch=true but torch is not available")
+        context = cast(ContextManager[Any], torch_mod.inference_mode())
```

pyisolate/_internal/rpc_serialization.py (1)
203-205: Parse `PYISOLATE_DEBUG_RPC` explicitly. `bool(os.environ.get(...))` treats any non-empty string as true (e.g., `"0"`, `"false"`), enabling debug logging unintentionally. Parse explicit truthy values instead.

🔧 Proposed fix
```diff
-debug_all_messages = bool(os.environ.get("PYISOLATE_DEBUG_RPC"))
+debug_all_messages = os.environ.get("PYISOLATE_DEBUG_RPC", "").strip().lower() in {
+    "1",
+    "true",
+    "t",
+    "yes",
+    "y",
+    "on",
+}
```

tests/test_adapter_contract.py (1)
71-104: Isolate `SerializerRegistry` per test via fixture. Manual `registry.clear()` calls can still race under parallel runs. Prefer an autouse fixture that clears before/after each test and remove per-test clears.

♻️ Suggested refactor
```diff
+import pytest
@@
+@pytest.fixture(autouse=True)
+def clear_registry():
+    registry = SerializerRegistry.get_instance()
+    registry.clear()
+    yield
+    registry.clear()
@@
-    registry = SerializerRegistry.get_instance()
-    registry.clear()  # Start fresh
+    registry = SerializerRegistry.get_instance()
@@
-    registry = SerializerRegistry.get_instance()
-    registry.clear()
+    registry = SerializerRegistry.get_instance()
@@
-    registry = SerializerRegistry.get_instance()
-    registry.clear()
+    registry = SerializerRegistry.get_instance()
```

tests/harness/host.py (1)
79-83: Restore TMPDIR and surface cleanup failures. TMPDIR is overwritten and never restored; cleanup errors are swallowed. This can leak env state across tests and hide failures. Save the original TMPDIR, restore it during cleanup (even on errors), and raise/log cleanup failures.
🔧 Proposed fix
```diff
-    # Force host process (and children via inherit) to use this TMPDIR
-    os.environ["TMPDIR"] = str(self.shared_tmp)
+    # Force host process (and children via inherit) to use this TMPDIR
+    self._orig_tmpdir = os.environ.get("TMPDIR")
+    os.environ["TMPDIR"] = str(self.shared_tmp)
@@
-    if cleanup_errors:
-        pass
+    # Restore TMPDIR to avoid leaking env state across tests
+    if self._orig_tmpdir is None:
+        os.environ.pop("TMPDIR", None)
+    else:
+        os.environ["TMPDIR"] = self._orig_tmpdir
+
+    if cleanup_errors:
+        raise RuntimeError("; ".join(cleanup_errors))
```

Also applies to: 172-193
pyisolate/_internal/rpc_protocol.py (4)
93-99: Constructor requirements should be validated before instantiation. The code still calls `cls.__init__` with no args; this raises a raw `TypeError` for classes that require parameters. (Same as previous review.)
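A minimal sketch of the suggested up-front check (illustrative only; `instantiate_default` is a hypothetical helper, not pyisolate's actual API): inspect the constructor signature first, so a class that needs arguments fails with an actionable message instead of a bare `TypeError` from `cls()`.

```python
import inspect


def instantiate_default(cls):
    """Hypothetical helper: instantiate cls() only if its constructor
    has no required arguments, otherwise raise a descriptive error."""
    sig = inspect.signature(cls.__init__)
    required = [
        name
        for name, p in sig.parameters.items()
        if name != "self"
        and p.default is inspect.Parameter.empty
        and p.kind in (p.POSITIONAL_OR_KEYWORD, p.KEYWORD_ONLY)
    ]
    if required:
        raise TypeError(
            f"{cls.__name__} cannot be auto-instantiated: "
            f"constructor requires arguments {required}"
        )
    return cls()
```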
134-177: asyncio.get_event_loop() can raise in non‑main threads.
`AsyncRPC.__init__` and `update_event_loop()` still use `get_event_loop()` without a fallback. (Same as previous review.)
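The usual fallback pattern looks like this (a sketch, not the library's code): prefer `asyncio.get_running_loop()` and only create a fresh loop when none is running, which also works from worker threads.

```python
import asyncio


def get_or_create_event_loop() -> asyncio.AbstractEventLoop:
    """Return the running loop if one exists, else create and set a new one.
    Unlike bare asyncio.get_event_loop(), this is safe in non-main threads."""
    try:
        return asyncio.get_running_loop()
    except RuntimeError:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        return loop
```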
265-280: Shutdown still doesn’t signal the send thread or guard stop() idempotency.
`shutdown()` never enqueues the `None` sentinel; `_send_thread` can block forever, and `stop()` can raise if called twice. (Same as previous review.)

Also applies to: 542-548
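For illustration, a sentinel-driven send thread with an idempotent shutdown can be sketched as follows (the `Sender` class and its names are hypothetical, not pyisolate's actual implementation):

```python
import queue
import threading

_SENTINEL = None  # enqueued once to tell the send thread to exit


class Sender:
    """Sketch: a send thread that blocks on a queue and exits on a sentinel."""

    def __init__(self):
        self._queue: queue.Queue = queue.Queue()
        self._stopped = False
        self._thread = threading.Thread(target=self._send_loop, daemon=True)
        self._thread.start()

    def send(self, item) -> None:
        if not self._stopped:
            self._queue.put(item)

    def _send_loop(self) -> None:
        while True:
            item = self._queue.get()
            if item is _SENTINEL:  # clean exit: no timeout polling needed
                break
            # ... serialize `item` and write it to the transport ...

    def shutdown(self) -> None:
        if self._stopped:  # idempotent: safe to call twice
            return
        self._stopped = True
        self._queue.put(_SENTINEL)  # wake the blocked send thread
        self._thread.join(timeout=5)
```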
641-646: Avoid assert for runtime guard in inject_instance.
`assert` is stripped with `-O`; use a real exception. (Same as previous review.)
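The shape of the fix is simply an explicit raise (a hedged sketch; the `registry` signature here is illustrative, not the actual `inject_instance` signature):

```python
def inject_instance(registry: dict, name: str, instance: object) -> None:
    """Sketch: guard with a real exception instead of `assert`,
    which is removed entirely under `python -O`."""
    if name in registry:
        raise RuntimeError(f"An instance is already registered for {name!r}")
    registry[name] = instance
```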
```diff
       fail-fast: false
       matrix:
-        python-version: ['3.9', '3.11']
+        python-version: ['3.10', '3.11']
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Check if Python version requirements are documented in pyproject.toml or setup.py
echo "=== Checking pyproject.toml for python-requires ==="
if fd -t f "pyproject.toml" --max-one-result; then
  echo "Found pyproject.toml, checking for python requirement..."
  fd -t f "pyproject.toml" --exec cat {} \; | grep -E "(requires-python|python)" || echo "No explicit python requirement found in pyproject.toml"
fi
echo ""
echo "=== Checking setup.py for python_requires ==="
if fd -t f "setup.py" --max-one-result; then
  echo "Found setup.py, checking for python_requires..."
  fd -t f "setup.py" --exec cat {} \; | grep -i "python_requires" || echo "No python_requires found in setup.py"
fi
echo ""
echo "=== Checking setup.cfg for python version constraints ==="
if fd -t f "setup.cfg" --max-one-result; then
  echo "Found setup.cfg, checking for python version constraints..."
  fd -t f "setup.cfg" --exec cat {} \; | grep -i "python" || echo "No python constraints found"
fi
echo ""
echo "=== Checking README for Python version mentions ==="
if fd -t f "README*" --max-one-result; then
  fd -t f "README*" --exec grep -i "python 3\." {} \; || echo "No Python version mentioned in README"
fi
echo ""
echo "=== Checking workflow file to confirm the change ==="
cat -n .github/workflows/pytorch.yml | sed -n '15,20p'
```

Repository: Comfy-Org/pyisolate
Length of output: 1474
🏁 Script executed:
```bash
#!/bin/bash
# Search for any Python version requirements across the repo
echo "=== Searching for Python version mentions in all text files ==="
rg -i "python.*3\.[0-9]" --type py --type toml --type yaml --type md --type txt -B 2 -A 2 || echo "No Python version mentions found"
echo ""
echo "=== Checking if code uses Python 3.10+ specific features ==="
echo "Checking for match statements (Python 3.10+ feature)..."
rg "^\s*match\s+" --type py -l || echo "No match statements found"
echo ""
echo "=== Searching for any .python-version or pyenv files ==="
fd -t f ".python-version" -o "pyenv" --hidden || echo "No python-version or pyenv files"
echo ""
echo "=== Checking git history for the workflow change ==="
git log --oneline --all -- ".github/workflows/pytorch.yml" | head -20 || echo "Cannot access git history"
echo ""
echo "=== Checking file structure for setup/config files ==="
ls -la | grep -E "setup|pyproject|requirements" || echo "No setup/config files found in root"
```

Repository: Comfy-Org/pyisolate
Length of output: 4165
Update documentation to reflect Python 3.10+ minimum requirement.
The Python version matrix change from ['3.9', '3.11'] to ['3.10', '3.11'] aligns with the project's documented minimum Python version of >=3.10 in pyproject.toml. However, the following documentation files still incorrectly claim Python 3.9+ support and need to be updated:
- README.md: Currently states "Python 3.9+"
- README_COMFYUI.md: Currently states "Python 3.9+"
- BENCHMARK_INSTRUCTIONS.md: Currently states "Python 3.9 or higher"
🤖 Prompt for AI Agents
In @.github/workflows/pytorch.yml at line 17, Update the documentation strings
that still claim Python 3.9 support to reflect the new minimum of Python 3.10:
edit README.md and README_COMFYUI.md to change occurrences of "Python 3.9+" to
"Python 3.10+" and edit BENCHMARK_INSTRUCTIONS.md to change "Python 3.9 or
higher" to "Python 3.10 or higher" (search for those exact phrases to locate the
lines).
```diff
 - repo: https://github.com/astral-sh/ruff-pre-commit
-  rev: v0.11.8
+  rev: v0.14.0
   hooks:
     - id: ruff
-      args: [--fix]
+      args: [--fix, --unsafe-fixes]
     - id: ruff-format
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🌐 Web query:
Ruff v0.14.0 release notes breaking changes
💡 Result:
Ruff 0.14.0 (released 2025-10-07) — breaking changes:
- Updated the default and "latest" Python versions to 3.14. [1][2]
Sources:
[1] Ruff changelog (0.14.0).
[2] Ruff GitHub release notes (0.14.0).
🌐 Web query:
Ruff --unsafe-fixes pre-commit hook implications semantic changes
💡 Result:
Short answer: enabling --unsafe-fixes in a pre-commit hook can change program semantics or remove comments and therefore can break behavior — treat it as non-automatic and require human review. See official docs: Ruff classifies fixes as safe vs unsafe; unsafe fixes "could lead to a change in runtime behavior, the removal of comments, or both." [1][2].
Key implications and practical recommendations:
- Risk: unsafe fixes may alter runtime behavior (e.g., rewrite constructs to faster but semantically-different forms) or remove information (comments) that matters for runtime/maintenance. Test suites may miss some regressions. [1][2][5]
- Visibility: by default Ruff hides/does not apply unsafe fixes; you must opt into them with --unsafe-fixes or unsafe-fixes = true. That design signals extra caution. [2][4]
- Pre-commit concerns: running --unsafe-fixes automatically on every commit can silently introduce behavioral changes into commits without explicit review — not recommended for unreviewed developer hooks. [2][4]
- Safer alternatives:
- Use only safe automatic fixes in pre-commit (omit --unsafe-fixes) and require manual/CI application of unsafe fixes. [1][7]
- If you must allow unsafe fixes in a hook, restrict them to a narrow set of rules (promote specific rules via lint.extend-safe-fixes or configure fixable/unfixable rules) so the hook only applies well-understood transforms. [1]
- Run fixes but require a staged/interactive review step (e.g., fail the hook if any unsafe-fix is available and surface the candidate changes for the developer to apply/review). [2][6]
- Apply unsafe fixes behind CI jobs or feature-branch maintenance runs (with tests) rather than on every developer commit. [2][3]
- Monitoring: enable Ruff's JSON output to see the applicability/safety metadata for each suggested fix, and keep a changelog or PR checklist when applying unsafe fixes. [1]
If you want, I can:
- Suggest a concrete pre-commit config snippet that enables only safe fixes and surfaces unsafe-fix hints, or
- Draft a policy for when and how to accept unsafe fixes (CI-only, rule whitelist, review checklist).
Sources:
- Ruff docs — Fixes / Fix safety. [astral.sh docs]. [1]
- Ruff release notes / blog explaining unsafe fixes are hidden by default and require --unsafe-fixes. [3][4]
- Ruff FAQ ("Ruff tried to fix something — but it broke my code"). [6]
- Settings docs (unsafe-fixes, fixable/unfixable, extend-safe-fixes). [7]
- Examples and explanation of unsafe transform (RUF015 example). [5]
- Calmcode explainer showing the CLI prompt and hidden unsafe fixes. [8]
Confirm Ruff v0.14.0 upgrade + unsafe-fix policy.
Auto-applying --unsafe-fixes can introduce semantic changes and remove comments; Ruff's documentation explicitly warns against this and hides unsafe fixes by default for good reason. Please confirm this is intentional for all commits. Ruff v0.14.0 is compatible with your hooks (main change: default Python version updated to 3.14).
If you want unsafe fixes only on demand, consider gating them behind a manual stage:
🔧 Gate unsafe fixes for manual review
```diff
 - repo: https://github.com/astral-sh/ruff-pre-commit
   rev: v0.14.0
   hooks:
     - id: ruff
-      args: [--fix, --unsafe-fixes]
+      args: [--fix]
+    - id: ruff
+      name: ruff-unsafe-fixes
+      args: [--fix, --unsafe-fixes]
+      stages: [manual]
     - id: ruff-format
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
  rev: v0.14.0
  hooks:
    - id: ruff
      args: [--fix]
    - id: ruff
      name: ruff-unsafe-fixes
      args: [--fix, --unsafe-fixes]
      stages: [manual]
    - id: ruff-format
```
🤖 Prompt for AI Agents
In @.pre-commit-config.yaml around lines 14 - 19, The pre-commit hook for Ruff
(repo https://github.com/astral-sh/ruff-pre-commit, rev v0.14.0) currently
passes the --unsafe-fixes flag on the hook with id: ruff; confirm whether you
intend to apply unsafe fixes on every commit — if not, remove --unsafe-fixes
from the args for the ruff hook (leaving ruff-format as-is) or else move the
unsafe option into a separate, opt-in hook (e.g., a new ruff-unsafe entry) or CI
job that runs only on demand and document its usage so unsafe fixes are not
auto-applied on all commits.
```diff
 Basic Usage:
     >>> import pyisolate
     >>> import asyncio
     >>>
     >>> async def main():
-    ...     config = pyisolate.ExtensionManagerConfig(
-    ...         venv_root_path="./venvs"
-    ...     )
+    ...     config = pyisolate.ExtensionManagerConfig(venv_root_path="./venvs")
     ...     manager = pyisolate.ExtensionManager(pyisolate.ExtensionBase, config)
     ...
     ...     extension = await manager.load_extension(
     ...         pyisolate.ExtensionConfig(
     ...             name="my_extension",
     ...             module_path="./extensions/my_extension",
     ...             isolated=True,
-    ...             dependencies=["numpy>=2.0.0"]
+    ...             dependencies=["numpy>=2.0.0"],
     ...         )
```
Docstring example uses TypedDicts like constructors and awaits a sync API.
ExtensionManagerConfig/ExtensionConfig are TypedDicts, and ExtensionManager.load_extension is sync; the example should use dict literals and drop await.
📝 Proposed docstring fix
- ... config = pyisolate.ExtensionManagerConfig(venv_root_path="./venvs")
+ ... config: pyisolate.ExtensionManagerConfig = {"venv_root_path": "./venvs"}
... manager = pyisolate.ExtensionManager(pyisolate.ExtensionBase, config)
- ... extension = await manager.load_extension(
- ... pyisolate.ExtensionConfig(
- ... name="my_extension",
- ... module_path="./extensions/my_extension",
- ... isolated=True,
- ... dependencies=["numpy>=2.0.0"],
- ... )
- ... )
+ ... extension = manager.load_extension(
+ ... {
+ ... "name": "my_extension",
+ ... "module_path": "./extensions/my_extension",
+ ... "isolated": True,
+ ... "dependencies": ["numpy>=2.0.0"],
+ ... }
+ ... )

🤖 Prompt for AI Agents
In `@pyisolate/__init__.py` around lines 16 - 28, The docstring example
incorrectly treats ExtensionManagerConfig and ExtensionConfig as constructors
and calls the sync API with await; change the example to use plain dict literals
for ExtensionManagerConfig and ExtensionConfig and call
ExtensionManager.load_extension without await (i.e., call
manager.load_extension(...) synchronously). Update the shown instantiation of
manager and the load_extension usage to reflect using dicts for the configs and
a normal function call to load_extension.
```python
for api in config["apis"]:
    api.use_remote(rpc)
    if _adapter:
        api_instance = cast(ProxiedSingleton, getattr(api, "instance", api))
        _adapter.handle_api_registration(api_instance, rpc)
```
Adapter registration likely receives a class, not a ProxiedSingleton instance.
config["apis"] is typed as list[type[ProxiedSingleton]], and getattr(api, "instance", api) won’t yield the injected remote instance. This likely breaks adapter hooks that expect an instance.
🛠️ Proposed fix
- if _adapter:
- api_instance = cast(ProxiedSingleton, getattr(api, "instance", api))
- _adapter.handle_api_registration(api_instance, rpc)
+ if _adapter:
+ api_instance = api.get_instance() if isinstance(api, type) else api
+ _adapter.handle_api_registration(cast(ProxiedSingleton, api_instance), rpc)

🤖 Prompt for AI Agents
In `@pyisolate/_internal/client.py` around lines 104 - 109, The adapter is being
handed the class instead of the injected remote instance because getattr(api,
"instance", api) falls back to the class; after calling api.use_remote(rpc)
fetch the injected instance explicitly and pass that to the adapter. Replace
getattr(api, "instance", api) with something like instance = getattr(api,
"instance") (or getattr(api, "instance", None") and error if None), cast it to
ProxiedSingleton, and call _adapter.handle_api_registration(instance, rpc) so
_adapter receives the actual proxied instance rather than the class; keep
references to config["apis"], api.use_remote, and
_adapter.handle_api_registration when making the change.
```python
safe_deps: list[str] = []
for dep in config["dependencies"]:
    validate_dependency(dep)
    safe_deps.append(dep)
```
Skip empty dependency entries before invoking uv/pip.
validate_dependency() returns early for empty strings, but the empty value is still appended and may cause install failures. Filter/strip before appending.
🔧 Proposed fix
- for dep in config["dependencies"]:
- validate_dependency(dep)
- safe_deps.append(dep)
+ for dep in config["dependencies"]:
+ if not dep or not dep.strip():
+ continue
+ dep = dep.strip()
+ validate_dependency(dep)
+ safe_deps.append(dep)

🤖 Prompt for AI Agents
In `@pyisolate/_internal/environment.py` around lines 286 - 289, The loop building
safe_deps currently appends empty or whitespace-only entries from
config["dependencies"] even though validate_dependency returns early for empty
strings; update the loop in environment.py that iterates over
config["dependencies"] to strip each dep, skip if the stripped value is empty
(or falsy), then call validate_dependency and append the cleaned dep to
safe_deps (referring to the variables safe_deps, dep, validate_dependency, and
config["dependencies"] to locate the change).
Expected output:
```
Extension1 | ✓ PASSED | Data processing with pandas/numpy 1.x
Extension2 | ✓ PASSED | Array processing with numpy 2.x
Extension3 | ✓ PASSED | HTML parsing with BeautifulSoup/scipy
```
Add language identifier to fenced code block.
The expected output code block at line 65 is missing a language identifier. Use a plain text identifier for non-code output.
📝 Suggested fix
- ```
+ ```text
Extension1 | ✓ PASSED | Data processing with pandas/numpy 1.x

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Expected output:

```text
Extension1 | ✓ PASSED | Data processing with pandas/numpy 1.x
Extension2 | ✓ PASSED | Array processing with numpy 2.x
Extension3 | ✓ PASSED | HTML parsing with BeautifulSoup/scipy
```
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
65-65: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
65-65: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
69-69: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🤖 Prompt for AI Agents
In `@README.md` around lines 64 - 69, The fenced code block under "Expected
output" is missing a language identifier; update the triple backticks that
surround the example table so they read ```text instead of ``` to mark it as
plain text (i.e., change the opening fence for the block containing "Extension1
| ✓ PASSED ..." to ```text).
```python
COMFYUI_ROOT = os.environ.get("COMFYUI_ROOT") or str(Path.home() / "ComfyUI")
if COMFYUI_ROOT not in sys.path:
    sys.path.insert(0, COMFYUI_ROOT)

# Set environment variable so child processes know ComfyUI location
os.environ.setdefault("COMFYUI_ROOT", COMFYUI_ROOT)
```
Hard-coded fallback path violates PR objective.
Per the PR objectives, removing hard-coded paths is a blocking item. The fallback Path.home() / "ComfyUI" assumes a specific directory structure that may not exist in CI environments or for contributors with different setups. Consider failing explicitly if COMFYUI_ROOT is not set, or documenting this assumption clearly.
💡 Suggested improvement
-COMFYUI_ROOT = os.environ.get("COMFYUI_ROOT") or str(Path.home() / "ComfyUI")
+COMFYUI_ROOT = os.environ.get("COMFYUI_ROOT")
+if not COMFYUI_ROOT:
+ raise EnvironmentError(
+ "COMFYUI_ROOT environment variable must be set for tests. "
+ "Set it to your ComfyUI installation path."
+ )

Alternatively, if a default is desired for local development convenience, ensure CI workflows explicitly set the variable.
🤖 Prompt for AI Agents
In `@tests/conftest.py` around lines 17 - 22, The current fallback COMFYUI_ROOT =
os.environ.get("COMFYUI_ROOT") or str(Path.home() / "ComfyUI") uses a hard-coded
default; remove that fallback and make COMFYUI_ROOT strictly come from the
environment (os.environ.get("COMFYUI_ROOT")), and if it's missing raise a clear
error (RuntimeError or sys.exit with a descriptive message) instead of
defaulting; keep the sys.path insertion (sys.path.insert(0, COMFYUI_ROOT)) and
the environment export (os.environ.setdefault("COMFYUI_ROOT", COMFYUI_ROOT))
only after the presence check so child processes still see the variable.
```python
# Create adapter
adapter = MockHostAdapter(root_path="/tmp/myapp")

# Get path configuration for an extension
config = adapter.get_path_config("/tmp/myapp/extensions/myext/__init__.py")
# Returns: {"preferred_root": "/tmp/myapp", "additional_paths": [...]}
```
Avoid hard‑coded /tmp defaults in test fixtures and docs.
These paths are not portable (e.g., Windows) and conflict with the PR objective to remove hard-coded paths from tests/examples. Prefer an explicit root_path or use an OS-agnostic temp path supplied by the test harness; also update doc examples to /path/to/....
🔧 Proposed adjustment (require explicit root_path)
- def __init__(self, root_path: str = "/tmp/testhost"):
+ def __init__(self, root_path: str):
self._root = root_path
self._extensions_dir = f"{root_path}/extensions"

Also applies to: 138-147
🤖 Prompt for AI Agents
In `@tests/fixtures/test_adapter.py` around lines 21 - 26, Tests and examples use
hard-coded "/tmp" paths; update the fixture to require an explicit,
platform-agnostic root_path (or accept one from the test harness) instead of
defaulting to "/tmp/myapp" and change example docs to "/path/to/...".
Specifically, modify MockHostAdapter instantiation and any calls to
get_path_config (e.g., MockHostAdapter(root_path=...),
adapter.get_path_config(...)) so tests provide a supplied root_path (or use
tempfile.gettempdir()/pathlib.Path from the harness) and update the example
return/comment to use "/path/to/..." rather than "/tmp".
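As a rough sketch of the OS-agnostic alternative this prompt asks for, the root can be derived from a temp directory supplied by the harness (in real tests, pytest's `tmp_path` fixture); `MockHostAdapter` here is a minimal stand-in, not the project's actual fixture class.

```python
import tempfile
from pathlib import Path

class MockHostAdapter:
    """Minimal stand-in for the test fixture; requires an explicit root_path."""
    def __init__(self, root_path: str):
        self._root = root_path
        self._extensions_dir = str(Path(root_path) / "extensions")

# In a real test this would come from the pytest `tmp_path` fixture.
root = Path(tempfile.mkdtemp()) / "myapp"
adapter = MockHostAdapter(root_path=str(root))
```

Because `Path` joins with the platform separator, the same test passes on Windows and POSIX without any `/tmp` assumption.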
```python
def test_config_requires_name(self):
    """ExtensionConfig must have a name."""
    config: ExtensionConfig = {
        "name": "test_ext",
        "module_path": "/path/to/ext",
        "isolated": True,
        "dependencies": [],
        "apis": [],
        "share_torch": False,
        "share_cuda_ipc": False,
    }
    assert config["name"] == "test_ext"

def test_config_requires_module_path(self):
    """ExtensionConfig must have a module_path."""
    config: ExtensionConfig = {
        "name": "test_ext",
        "module_path": "/path/to/ext",
        "isolated": True,
        "dependencies": [],
        "apis": [],
        "share_torch": False,
        "share_cuda_ipc": False,
    }
    assert config["module_path"] == "/path/to/ext"

def test_config_with_dependencies(self):
    """ExtensionConfig accepts dependencies list."""
    config: ExtensionConfig = {
        "name": "test_ext",
        "module_path": "/path/to/ext",
        "isolated": True,
        "dependencies": ["numpy>=1.20", "pillow"],
        "apis": [],
        "share_torch": False,
        "share_cuda_ipc": False,
    }
    assert "numpy>=1.20" in config["dependencies"]
    assert "pillow" in config["dependencies"]

def test_config_share_torch(self):
    """ExtensionConfig accepts share_torch flag."""
    config: ExtensionConfig = {
        "name": "test_ext",
        "module_path": "/path/to/ext",
        "isolated": True,
        "dependencies": [],
        "apis": [],
        "share_torch": True,
        "share_cuda_ipc": False,
    }
    assert config["share_torch"] is True
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find ExtensionConfig definition
fd -t f "config.py" | head -20

Repository: Comfy-Org/pyisolate
Length of output: 82
🏁 Script executed:
#!/bin/bash
# Search for ExtensionConfig in the codebase
rg -n "ExtensionConfig" --type py -A 5 -B 2 | head -100

Repository: Comfy-Org/pyisolate
Length of output: 5759
🏁 Script executed:
#!/bin/bash
# Get the full ExtensionConfig TypedDict definition
rg -n "class ExtensionConfig\|TypedDict" pyisolate/ -A 20 | head -150

Repository: Comfy-Org/pyisolate
Length of output: 45
🏁 Script executed:
#!/bin/bash
# Read the ExtensionConfig definition from config.py
cat -n pyisolate/config.py | sed -n '35,100p'

Repository: Comfy-Org/pyisolate
Length of output: 1674
🏁 Script executed:
#!/bin/bash
# Get the full TypedDict definition including any total parameter
cat -n pyisolate/config.py | sed -n '39,75p'

Repository: Comfy-Org/pyisolate
Length of output: 1551
🏁 Script executed:
#!/bin/bash
# Check for total parameter or NotRequired in ExtensionConfig definition
rg -n "total=" pyisolate/config.py
rg -n "NotRequired" pyisolate/config.py

Repository: Comfy-Org/pyisolate
Length of output: 110
🏁 Script executed:
#!/bin/bash
# Get broader context around ExtensionConfig, including any decorators or parameters
cat -n pyisolate/config.py | sed -n '35,75p'

Repository: Comfy-Org/pyisolate
Length of output: 1674
🏁 Script executed:
#!/bin/bash
# Check if mypy is configured and run on tests
fd -t f "pyproject.toml\|mypy.ini\|setup.cfg" | xargs grep -l "mypy" 2>/dev/null

Repository: Comfy-Org/pyisolate
Length of output: 47
🏁 Script executed:
#!/bin/bash
# Check the test file to see type annotations
cat -n tests/test_extension_lifecycle.py | head -80

Repository: Comfy-Org/pyisolate
Length of output: 3055
ExtensionConfig dicts omit required keys that will fail type checking.
ExtensionConfig is a total TypedDict with 10 required fields. The test configs on lines 24–76 (and other locations) omit sandbox, sandbox_mode, and env, which will cause mypy to fail. Add the required keys with appropriate defaults.
Suggested pattern (apply across this file)
+from pyisolate.config import SandboxMode
+
+def _base_config(**overrides):
+ return {
+ "name": "test_ext",
+ "module_path": "/path/to/ext",
+ "isolated": True,
+ "dependencies": [],
+ "apis": [],
+ "share_torch": False,
+ "share_cuda_ipc": False,
+ "sandbox": {},
+ "sandbox_mode": SandboxMode.REQUIRED,
+ "env": {},
+ **overrides,
+    }

Also applies to: 84–96, 137–147, 157–170, 217–228, 231–241
🤖 Prompt for AI Agents
In `@tests/test_extension_lifecycle.py` around lines 24 - 76, The tests build
ExtensionConfig dicts (e.g., in test_config_requires_name,
test_config_requires_module_path, test_config_with_dependencies,
test_config_share_torch) but omit required TypedDict fields (sandbox,
sandbox_mode, env), causing type-check failures; update each ExtensionConfig
literal to include sandbox (e.g., False), sandbox_mode (e.g., "none" or
appropriate default), and env (e.g., {}), and mirror the same additions for the
other test blocks mentioned so all ExtensionConfig dicts contain all 10 required
keys.
```python
from pyisolate._internal import host
```
🧹 Nitpick | 🔵 Trivial
Import safety helpers from their defining module.
These helpers live in pyisolate/_internal/environment.py; importing via pyisolate._internal.host couples tests to re-exports. Consider importing directly to keep module boundaries clear.
♻️ Suggested change
-from pyisolate._internal import host
+from pyisolate._internal.environment import (
+ normalize_extension_name,
+ validate_dependency,
+ validate_path_within_root,
+)

-    host.normalize_extension_name("")
+    normalize_extension_name("")

-    host.validate_dependency("-e")  # should not raise
+    validate_dependency("-e")  # should not raise

-    host.validate_path_within_root(inside, root)  # should not raise
+    validate_path_within_root(inside, root)  # should not raise

🤖 Prompt for AI Agents
In `@tests/test_extension_safety.py` around lines 5 - 6, Replace the indirect
import "from pyisolate._internal import host" with direct imports from the
defining module "pyisolate._internal.environment": import the specific safety
helper symbols used in this test (the helpers referenced via host in
tests/test_extension_safety.py) from environment and update usages to call them
directly (remove host.<helper> indirection); this keeps the test coupled to the
helpers' real module rather than a re-export.
Actionable comments posted: 17
🤖 Fix all issues with AI agents
In `@docs/debugging.md`:
- Around line 54-65: The markdown has fenced code blocks that are directly
adjacent to text, violating MD031; edit the affected sections in debugging.md
(around the shown snippet and similarly at the blocks noted for lines 76-81,
98-107, 113-128) and insert a blank line both immediately before and immediately
after each ``` fenced block so every fence is separated from surrounding
paragraphs; ensure all code fences in those areas follow this pattern to satisfy
MD031.
- Around line 256-259: Replace the bare URL on the list item "Check existing
issues at: https://github.com/anthropics/claude-code/issues" with an inline
Markdown link to satisfy MD034; update the line in the docs entry (the list item
containing that URL) to use link syntax such as "Check existing issues at:
[anthropics/claude-code
issues](https://github.com/anthropics/claude-code/issues)" so the raw URL is no
longer bare.
In `@docs/edge_cases.md`:
- Around line 38-42: The doc snippet shows direct modification of the internal
symbol _tensor_keeper.retention_seconds from
pyisolate._internal.tensor_serializer; update the docs to warn this is an
internal/unsupported workaround (mention the exact symbol
_tensor_keeper.retention_seconds and module
pyisolate._internal.tensor_serializer) and either add guidance that this should
not be relied upon long-term or add a note suggesting to expose a public API
(e.g., a configuration option on TensorSerializer or a top-level config) instead
of instructing users to mutate internal state directly.
- Around line 159-162: The phrasing in the docs uses "timeout" as a verb in the
line "**Behavior**: Call may fail or timeout." — change "timeout" to the verb
form "time out" so it reads "**Behavior**: Call may fail or time out."; search
the same document for other occurrences of the verb usage of "timeout" and
correct them to "time out" to keep grammar consistent.
In `@docs/platform_compatibility.md`:
- Around line 29-31: The fenced code blocks showing enum values (e.g.,
RestrictionModel.NONE) lack language specifiers; update those backtick fence
blocks to include a language tag such as text (or python) so markdown linters
render them consistently—locate the occurrences that contain
"RestrictionModel.NONE" and the similar block at the second location (lines
referenced in the review) and change their opening fences from ``` to ```text
(or ```python) accordingly.
- Around line 173-203: Remove the invalid code example that imports and calls
detect_restriction_model, RestrictionModel, and the non-public
_check_shm_availability; these APIs don’t exist (only a private
_check_shm_availability is present) so delete the whole snippet and replace it
with either a short sentence stating that sandbox/IPC detection utilities are
not yet implemented or with a correct example that uses only public, existing
APIs once available; ensure any mention of detect_restriction_model,
RestrictionModel, or _check_shm_availability is removed or replaced with
supported public API names.
In `@docs/rpc_protocol.md`:
- Around line 114-148: The ASCII flow diagrams under "Simple Call Flow" and
"Nested Call Flow (Callback)" are missing a fenced-code language, causing MD040;
update both code blocks by adding the language specifier (use "text") to the
opening triple-backticks for the diagrams in docs/rpc_protocol.md so the fences
read ```text, ensuring the diagrams are recognized as plain text.
In `@pyisolate/_internal/rpc_protocol.py`:
- Around line 623-625: The elif branch checking typed_item["kind"] == "response"
is dead because RPCPendingRequest only allows "call" or "callback"; remove that
branch and its body (the response_msg creation and self._transport.send call)
from the outbox processing loop so only valid "call"/"callback" handling
remains; ensure any references to RPCMessage/_prepare_for_rpc in that branch are
removed and run tests to confirm nothing else expects a "response" case.
In `@pyisolate/_internal/rpc_serialization.py`:
- Around line 140-141: The __iter__ method currently declares a return type of
Iterable[str] but actually returns an iterator from iter(self._data); update its
annotation to Iterator[str] and add Iterator to the imports from collections.abc
so the signature becomes def __iter__(self) -> Iterator[str]: and the import
list includes Iterator alongside any existing imports.
In `@pyisolate/_internal/sandbox.py`:
- Around line 70-79: The path validation contains a redundant equality test:
after the initial if normalized in FORBIDDEN_ADAPTER_PATHS: return False, the
loop in sandbox.py that iterates over FORBIDDEN_ADAPTER_PATHS should only check
parent relationships; remove the redundant "or normalized == forbidden" from the
condition inside that loop so it becomes solely forbidden.startswith(normalized
+ "/") while leaving the initial membership check intact.
In `@tests/test_rpc_contract.py`:
- Around line 33-40: The test relies on shared singleton state in MockRegistry
(its _counter and _store), causing order-dependent failures; modify the test
suite to reset that state before each test by adding an autouse pytest fixture
that calls a reset/clear on MockRegistry (or implement a clear() method on
MockRegistry if it doesn't exist) and use it from the fixture so
test_singleton_instance_persists can safely call MockRegistry().register(...)
and assert get("obj_0") consistently; reference MockRegistry, its register/get
methods, and the test_singleton_instance_persists test when making this change.
- Around line 172-183: The test test_type_error_propagates is misleading because
the TypeError is raised by the operation in TypedService.typed_method (string +
int) not by runtime type-checking; update the test docstring/comment to state it
verifies that exceptions raised inside RPC methods propagate (e.g., "Exceptions
raised during method execution should propagate" or mention the specific
operation causing the TypeError) so the intent is clear while leaving the test
logic (TypedService and typed_method) unchanged.
- Around line 110-154: Both tests set a new global asyncio event loop without
restoring the prior loop and rely on MockRegistry which appears to hold
global/singleton state; save the original loop at the start of each test (e.g.,
original_loop = asyncio.get_event_loop() or
asyncio.get_event_loop_policy().get_event_loop()), then after closing loop2 call
asyncio.set_event_loop(original_loop) to restore it; also ensure MockRegistry
global state is reset between tests by calling its clear/reset method (e.g.,
MockRegistry.clear() or MockRegistry.reset_instance()) or by constructing an
isolated registry instance if available; update
test_singleton_survives_loop_recreation and
test_singleton_data_persists_across_loops accordingly to restore the event loop
and clear MockRegistry state at the end.
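The save/restore discipline described in this prompt can be sketched as a small helper; `run_with_fresh_loop` is an illustrative name, not part of the test suite.

```python
import asyncio

def run_with_fresh_loop(fn):
    """Run fn(loop) on a brand-new event loop, then restore the previous loop."""
    try:
        original = asyncio.get_event_loop_policy().get_event_loop()
    except RuntimeError:
        original = None
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        return fn(loop)
    finally:
        loop.close()
        asyncio.set_event_loop(original)

result = run_with_fresh_loop(
    lambda loop: loop.run_until_complete(asyncio.sleep(0, result="ok"))
)
```

Wrapping both tests in this pattern (or an equivalent fixture) guarantees the original loop is restored even when an assertion fails mid-test.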
- Around line 1-21: Add an autouse pytest fixture in tests/test_rpc_contract.py
that resets singleton state between tests: in the fixture (autouse=True) clear
ProxiedSingleton's internal instance registry (e.g., the backing dict/attribute
holding instances) and also reset MockRegistry state (call its reset/clear
method if present or reinitialize it) so each test gets a fresh
singleton/registry state; place the fixture near imports so it runs for every
test in this module.
In `@tests/test_rpc_message_format.py`:
- Around line 136-158: Add tests that exercise AttributeContainer's dict-like
and serialization behaviors: verify __getitem__ returns values and raises
KeyError for missing keys, test copy() produces an equal but independent
container, test get() with and without default, assert keys()/items()/values()
produce the expected iterables and that __iter__ and __len__ reflect the
underlying data, check __contains__ for present/absent keys, and test pickle
round-trip using __getstate__/__setstate__ preserves contents; reference the
AttributeContainer class and its methods (__getitem__, copy, get, keys, items,
values, __iter__, __len__, __contains__, __getstate__, __setstate__) when adding
these assertions.
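A sketch of the dict-protocol assertions the prompt lists, written against a minimal stand-in for `AttributeContainer` (the real class lives in pyisolate; this stub only illustrates the behaviors under test).

```python
import pickle

class AttributeContainer:
    """Minimal stand-in exposing the dict-like surface the tests should cover."""
    def __init__(self, data):
        self._data = dict(data)  # defensive copy keeps instances independent
    def __getitem__(self, key):
        return self._data[key]  # raises KeyError for missing keys
    def get(self, key, default=None):
        return self._data.get(key, default)
    def keys(self):
        return self._data.keys()
    def items(self):
        return self._data.items()
    def values(self):
        return self._data.values()
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)
    def __contains__(self, key):
        return key in self._data
    def copy(self):
        return AttributeContainer(self._data)
    def __getstate__(self):
        return self._data
    def __setstate__(self, state):
        self._data = state

c = AttributeContainer({"a": 1, "b": 2})
assert c["a"] == 1 and "b" in c and len(c) == 2
assert c.get("missing", 0) == 0
clone = c.copy()
clone._data["a"] = 99
assert c["a"] == 1  # copy is independent of the original
restored = pickle.loads(pickle.dumps(c))
assert restored["b"] == 2  # pickle round-trip preserves contents
```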
- Around line 66-74: The test test_attribute_container_handled currently allows
either a dict or an AttributeContainer, which is too weak; decide on the
expected contract for _prepare_for_rpc (preferably that it returns a plain dict)
and make the test assert that exact behavior: call
_prepare_for_rpc(AttributeContainer({"a":1,"b":2})) and assert
isinstance(result, dict) and result == {"a":1,"b":2}; update the test docstring
to state the expected conversion to dict so future readers know this is
intentional (if the intended contract is to preserve AttributeContainer instead,
replace the assertions to assert hasattr(result, "_data") and that result._data
== {"a":1,"b":2} and update the docstring accordingly).
- Around line 59-64: The test test_attrdict_converted currently uses
isinstance(result, dict) which will be True for an AttrDict subclass; update the
assertion to ensure the result is a plain dict by asserting type(result) is dict
(or alternatively assert not isinstance(result, AttrDict)) after calling
_prepare_for_rpc(AttrDict(...)); modify the test body in test_attrdict_converted
to replace the isinstance check with a strict type check so it verifies that
_prepare_for_rpc returns a plain dict, and keep the value assertion
result["key"] == "value" unchanged.
♻️ Duplicate comments (17)
tests/conftest.py (2)
17-24: Remove hard-coded COMFYUI_ROOT fallback to keep tests portable.
The default path reintroduces a hard-coded assumption that the PR explicitly aimed to remove; prefer an explicit env var (and a clear error) to avoid CI and contributor mismatches.

🐛 Proposed fix

-COMFYUI_ROOT = os.environ.get("COMFYUI_ROOT") or str(Path.home() / "ComfyUI")
-if COMFYUI_ROOT not in sys.path:
-    sys.path.insert(0, COMFYUI_ROOT)
-
-# Set environment variable so child processes know ComfyUI location
-os.environ.setdefault("COMFYUI_ROOT", COMFYUI_ROOT)
+COMFYUI_ROOT = os.environ.get("COMFYUI_ROOT")
+if not COMFYUI_ROOT:
+    raise EnvironmentError(
+        "COMFYUI_ROOT environment variable must be set for tests. "
+        "Set it to your ComfyUI installation path."
+    )
+if COMFYUI_ROOT not in sys.path:
+    sys.path.insert(0, COMFYUI_ROOT)
+
+# Set environment variable so child processes know ComfyUI location
+os.environ.setdefault("COMFYUI_ROOT", COMFYUI_ROOT)
52-65: Let pytest monkeypatch handle restoration automatically.
Manual reset after `yield` is redundant and can be removed to keep the fixture minimal.

♻️ Proposed simplification

     monkeypatch.setattr(host_internal.Extension, "_Extension__launch", dummy_launch)
     yield
-    monkeypatch.setattr(host_internal.Extension, "_Extension__launch", original_launch)

pyisolate/_internal/sandbox.py (4)
329-331: Duplicate entry in CUDA environment variable list.

`PYISOLATE_ENABLE_CUDA_IPC` appears twice in the `cuda_env_vars` list. Remove the duplicate.

 cuda_env_vars = [
     "CUDA_HOME",
     "CUDA_PATH",
     "CUDA_VISIBLE_DEVICES",
     "NVIDIA_VISIBLE_DEVICES",
     "LD_LIBRARY_PATH",
     "PYTORCH_CUDA_ALLOC_CONF",
     "TORCH_CUDA_ARCH_LIST",
     "PYISOLATE_ENABLE_CUDA_IPC",
-    "PYISOLATE_ENABLE_CUDA_IPC",
 ]
243-257: Bare `except Exception` swallows unexpected errors.

The bare `except Exception` catches all exceptions including unexpected ones. Since the intent is to handle a missing comfy package, narrow this to `(ImportError, ModuleNotFoundError)`:

 try:
     import comfy  # type: ignore[import]
     # ... comfy path logic ...
-except Exception:
+except (ImportError, ModuleNotFoundError):
     pass
209-216: Confusing CUDA path skip logic.

The condition excludes `/usr/local/` paths from the skip logic, but the comment says `/usr/local` is in `/usr`. If `/usr` is already bound, `/usr/local` should be covered. The current logic creates redundant binds for `/usr/local/*` paths.

 for cuda_path in cuda_paths:
     if os.path.exists(cuda_path):
-        # Skip if already covered by /usr bind
-        if cuda_path.startswith("/usr/") and not cuda_path.startswith("/usr/local/"):
-            # Actually /usr/local is in /usr.
-            # Safe heuristic: if it starts with /usr, we assume covered.
-            continue
+        # Skip paths already covered by the /usr bind
+        if cuda_path.startswith("/usr/"):
+            continue
         cmd.extend(["--ro-bind", cuda_path, cuda_path])
128-137: Consider logging when running in degraded sandbox mode.

When `restriction_model != RestrictionModel.NONE`, the sandbox runs without user/PID namespace isolation. Consider emitting a debug/warning log for auditability:

 if restriction_model == RestrictionModel.NONE:
     # Full isolation available
     cmd.extend(["--unshare-user", "--unshare-pid"])
 else:
     # Run in degraded mode (no user/pid namespace)
+    logger.warning(
+        "Running sandbox in degraded mode (restriction=%s) - "
+        "user/PID namespace isolation disabled",
+        restriction_model.value,
+    )

pyisolate/_internal/rpc_serialization.py (1)
219-221: Parse `PYISOLATE_DEBUG_RPC` explicitly.

`bool(os.environ.get(...))` treats any non-empty string (including `"0"` and `"false"`) as truthy, enabling debug logging unintentionally.

-debug_all_messages = bool(os.environ.get("PYISOLATE_DEBUG_RPC"))
+debug_all_messages = os.environ.get("PYISOLATE_DEBUG_RPC", "").lower() in {"1", "true", "yes"}

pyisolate/_internal/rpc_protocol.py (5)
162-163: `asyncio.get_event_loop()` can crash in non-main threads.

In Python 3.10+, `asyncio.get_event_loop()` raises `RuntimeError` when called from a thread without an event loop. This makes `AsyncRPC.__init__` fragile when instantiated from worker threads.

-        self.default_loop = asyncio.get_event_loop()
+        try:
+            self.default_loop = asyncio.get_event_loop()
+        except RuntimeError:
+            self.default_loop = asyncio.new_event_loop()
+            asyncio.set_event_loop(self.default_loop)
271-287: Make `stop()` idempotent and ensure `shutdown()` terminates `_send_thread`.

`stop()` raises `InvalidStateError` if called twice since it doesn't check `done()`. `shutdown()` never enqueues `None` to `outbox`, so `_send_thread` blocks indefinitely on `outbox.get()`.

 async def stop(self) -> None:
     assert self.blocking_future is not None, (
         "Cannot stop RPC: blocking_future is None. "
         "RPC event loop was never started or already stopped."
     )
-    self.blocking_future.set_result(None)
+    if not self.blocking_future.done():
+        self.blocking_future.set_result(None)

 def shutdown(self) -> None:
     """Signal intent to stop RPC. Suppresses connection errors."""
-    self._stopping = True
+    if not self._stopping:
+        self._stopping = True
+        self.outbox.put(None)  # Signal _send_thread to exit
     # If we have a blocking future, we can try to set it to unblock run_until_stopped
647-654: Replace `assert` with explicit exception in `inject_instance`.

`assert` statements are stripped when Python runs with `-O` optimization. This guard is important for preventing silent behavior changes.

 def inject_instance(cls: type[T], instance: Any) -> None:
-    assert cls not in SingletonMetaclass._instances, (
-        f"Cannot inject instance for {cls.__name__}: singleton already exists. "
-        f"Instance was likely created before injection attempt. "
-        f"Ensure inject_instance() is called before any instantiation."
-    )
+    if cls in SingletonMetaclass._instances:
+        raise RuntimeError(
+            f"Cannot inject instance for {cls.__name__}: singleton already exists. "
+            f"Instance was likely created before injection attempt. "
+            f"Ensure inject_instance() is called before any instantiation."
+        )
     SingletonMetaclass._instances[cls] = instance
AssertionErrorwill need to be updated to expectRuntimeError.
95-103: Validate constructor signature before instantiating in `register_class`.

If a `ProxiedSingleton` subclass requires constructor arguments, `cls.__init__(local_instance)` raises a raw `TypeError`. Consider pre-validating:

 def register_class(self, cls: type) -> None:
+    # Validate constructor accepts no required args
+    init_sig = inspect.signature(cls.__init__)
+    required = [
+        p for name, p in init_sig.parameters.items()
+        if name != "self" and p.default is p.empty
+        and p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD)
+    ]
+    if required:
+        raise TypeError(
+            f"{cls.__name__}.__init__ requires args {[p.name for p in required]}; "
+            "ProxiedSingleton classes must have no-arg constructors for local execution"
+        )
     local_instance: Any = object.__new__(cls)
     cls.__init__(local_instance)
180-183: `update_event_loop` also needs protection for thread safety.

Same issue as `__init__`: `asyncio.get_event_loop()` can raise `RuntimeError` in non-main threads.

 def update_event_loop(self, loop: asyncio.AbstractEventLoop | None = None) -> None:
     with self._loop_lock:
         if loop is None:
-            loop = asyncio.get_event_loop()
+            try:
+                loop = asyncio.get_event_loop()
+            except RuntimeError:
+                loop = asyncio.new_event_loop()
+                asyncio.set_event_loop(loop)
         self.default_loop = loop

tests/test_shared_additional.py (1)
55-85: Reset the event loop after closing it.
Leaving a closed loop as the current loop can break later tests.

🔧 Proposed fix

@@
-    loop.close()
+    loop.close()
+    asyncio.set_event_loop(None)
@@
-    loop.close()
+    loop.close()
+    asyncio.set_event_loop(None)

Also applies to: 87-117
pyisolate/interfaces.py (1)
63-64: Clarify that the hook is required but can be a no‑op.

📝 Proposed docstring tweak

-    """Optional post-registration hook for API-specific setup."""
+    """Post-registration hook for API-specific setup (can be a no-op)."""

pyisolate/__init__.py (1)
16-32: Fix docstring example to use dicts and sync `load_extension`.

📝 Proposed docstring fix

@@
-    ... config = pyisolate.ExtensionManagerConfig(venv_root_path="./venvs")
+    ... config: pyisolate.ExtensionManagerConfig = {"venv_root_path": "./venvs"}
@@
-    ... extension = await manager.load_extension(
-    ...     pyisolate.ExtensionConfig(
+    ... extension = manager.load_extension(
+    ...     {
     ...         name="my_extension",
     ...         module_path="./extensions/my_extension",
     ...         isolated=True,
     ...         dependencies=["numpy>=2.0.0"],
-    ...     )
+    ...     }
     ... )

pyisolate/_internal/tensor_serializer.py (2)
129-185: Guard against infinite recursion when forcing `file_system` sharing.

If `reduce_storage` continues to return `rebuild_storage_fd` after the retry, the current recursion can loop forever.

🛠️ Suggested guard

-def _serialize_cpu_tensor(t: torch.Tensor) -> dict[str, Any]:
+def _serialize_cpu_tensor(t: torch.Tensor, _forced_file_system: bool = False) -> dict[str, Any]:
@@
-    elif sfunc.__name__ == "rebuild_storage_fd":
+    elif sfunc.__name__ == "rebuild_storage_fd":
+        if _forced_file_system:
+            raise RuntimeError(
+                "Unable to switch to file_system sharing strategy; "
+                "PyTorch still returned rebuild_storage_fd."
+            )
         # Force file_system strategy for JSON-RPC compatibility
         torch.multiprocessing.set_sharing_strategy("file_system")
         t.share_memory_()
-        return _serialize_cpu_tensor(t)
+        return _serialize_cpu_tensor(t, _forced_file_system=True)
321-359: Don't register `None` serializers in the registry.

`SerializerRegistry.register()` expects a callable; storing `None` can break re‑serialization or callers that assume a function.

🛠️ Suggested fix

 def register_tensor_serializer(registry: Any) -> None:
+    def _identity(obj: Any) -> Any:
+        return obj
@@
-    registry.register("TensorRef", None, deserialize_tensor)
+    registry.register("TensorRef", _identity, deserialize_tensor)
@@
-    registry.register("TorchReduction", None, deserialize_tensor)
+    registry.register("TorchReduction", _identity, deserialize_tensor)
**Diagnosis**:

```bash
# Check if bwrap is available
which bwrap

# Check user namespace restrictions
cat /proc/sys/kernel/unprivileged_userns_clone
# Should be 1 for full isolation

# Check AppArmor restrictions (Ubuntu)
aa-status | grep bwrap
```
Add blank lines around fenced code blocks (MD031).
Several code fences are adjacent to text without blank lines; add a blank line before and after each fence in these sections to satisfy markdownlint.
Also applies to: 76-81, 98-107, 113-128
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
55-55: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🤖 Prompt for AI Agents
In `@docs/debugging.md` around lines 54 - 65, The markdown has fenced code blocks
that are directly adjacent to text, violating MD031; edit the affected sections
in debugging.md (around the shown snippet and similarly at the blocks noted for
lines 76-81, 98-107, 113-128) and insert a blank line both immediately before
and immediately after each ``` fenced block so every fence is separated from
surrounding paragraphs; ensure all code fences in those areas follow this
pattern to satisfy MD031.
1. Enable debug logging and capture output
2. Include pyisolate version: `python -c "import pyisolate; print(pyisolate.__version__)"`
3. Include Python version and platform info
4. Check existing issues at: https://github.com/anthropics/claude-code/issues
Replace the bare URL with a Markdown link (MD034).
✅ Suggested fix
-4. Check existing issues at: https://github.com/anthropics/claude-code/issues
+4. Check existing issues at: [issue tracker](https://github.com/anthropics/claude-code/issues)

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
1. Enable debug logging and capture output
2. Include pyisolate version: `python -c "import pyisolate; print(pyisolate.__version__)"`
3. Include Python version and platform info
4. Check existing issues at: [issue tracker](https://github.com/anthropics/claude-code/issues)
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
259-259: Bare URL used
(MD034, no-bare-urls)
🤖 Prompt for AI Agents
In `@docs/debugging.md` around lines 256 - 259, Replace the bare URL on the list
item "Check existing issues at:
https://github.com/anthropics/claude-code/issues" with an inline Markdown link
to satisfy MD034; update the line in the docs entry (the list item containing
that URL) to use link syntax such as "Check existing issues at:
[anthropics/claude-code
issues](https://github.com/anthropics/claude-code/issues)" so the raw URL is no
longer bare.
**Mitigation**: Increase retention for slow environments:

```python
from pyisolate._internal.tensor_serializer import _tensor_keeper
_tensor_keeper.retention_seconds = 60.0  # 60 seconds
```
🧹 Nitpick | 🔵 Trivial
Document that this is an internal/unsupported configuration.
Directly modifying _tensor_keeper.retention_seconds relies on internal implementation details. Consider either exposing this as a public configuration option or noting that this is an advanced/unsupported workaround.
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
39-39: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🤖 Prompt for AI Agents
In `@docs/edge_cases.md` around lines 38 - 42, The doc snippet shows direct
modification of the internal symbol _tensor_keeper.retention_seconds from
pyisolate._internal.tensor_serializer; update the docs to warn this is an
internal/unsupported workaround (mention the exact symbol
_tensor_keeper.retention_seconds and module
pyisolate._internal.tensor_serializer) and either add guidance that this should
not be relied upon long-term or add a note suggesting to expose a public API
(e.g., a configuration option on TensorSerializer or a top-level config) instead
of instructing users to mutate internal state directly.
**Behavior**: Call may fail or timeout.

**Handling**: Check connection state before calls; handle gracefully.
Minor grammar fix: "time out" (verb) vs "timeout" (noun).
"Timeout" as a noun is one word, but as a verb it should be "time out."
-**Behavior**: Call may fail or timeout.
+**Behavior**: Call may fail or time out.

🧰 Tools
🪛 LanguageTool
[grammar] ~160-~160: Ensure spelling is correct
Context: ...losing. Behavior: Call may fail or timeout. Handling: Check connection state ...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🤖 Prompt for AI Agents
In `@docs/edge_cases.md` around lines 159 - 162, The phrasing in the docs uses
"timeout" as a verb in the line "**Behavior**: Call may fail or timeout." —
change "timeout" to the verb form "time out" so it reads "**Behavior**: Call may
fail or time out."; search the same document for other occurrences of the verb
usage of "timeout" and correct them to "time out" to keep grammar consistent.
```
RestrictionModel.NONE
```
🧹 Nitpick | 🔵 Trivial
Add language specifiers to fenced code blocks.
These code blocks display text/enum values but lack language specifiers. Consider using `text` or `python` for consistency and to satisfy markdown linting.
-```
-RestrictionModel.NONE
-```
+```text
+RestrictionModel.NONE
+```

Similarly for lines 71-73.
Also applies to: 71-73
🤖 Prompt for AI Agents
In `@docs/platform_compatibility.md` around lines 29 - 31, The fenced code blocks
showing enum values (e.g., RestrictionModel.NONE) lack language specifiers;
update those backtick fence blocks to include a language tag such as text (or
python) so markdown linters render them consistently—locate the occurrences that
contain "RestrictionModel.NONE" and the similar block at the second location
(lines referenced in the review) and change their opening fences from ``` to
```text (or ```python) accordingly.
```python
def test_singleton_survives_loop_recreation(self):
    """Singleton instance survives event loop recreation."""
    # Create initial loop
    loop1 = asyncio.new_event_loop()
    asyncio.set_event_loop(loop1)

    # Create singleton and store data
    registry = MockRegistry()
    obj_id = registry.register("loop1_object")

    # Close loop1
    loop1.close()

    # Create new loop
    loop2 = asyncio.new_event_loop()
    asyncio.set_event_loop(loop2)

    # Singleton should still work
    result = registry.get(obj_id)
    assert result == "loop1_object"

    # Cleanup
    loop2.close()

def test_singleton_data_persists_across_loops(self):
    """Data stored in singleton persists across event loops."""
    # First loop
    loop1 = asyncio.new_event_loop()
    asyncio.set_event_loop(loop1)

    registry = MockRegistry()
    id1 = registry.register("first")
    id2 = registry.register("second")

    loop1.close()

    # Second loop
    loop2 = asyncio.new_event_loop()
    asyncio.set_event_loop(loop2)

    # All data should still be accessible
    assert registry.get(id1) == "first"
    assert registry.get(id2) == "second"

    loop2.close()
```
Event loop tests don't restore original loop state.
Both tests manipulate the global event loop via asyncio.set_event_loop() but don't restore the original loop after completion. This can affect subsequent tests that rely on the event loop. Additionally, MockRegistry state accumulates across these tests.
🔧 Proposed fix: Save and restore event loop
```diff
 def test_singleton_survives_loop_recreation(self):
     """Singleton instance survives event loop recreation."""
+    original_loop = asyncio.get_event_loop_policy().get_event_loop()
+
     # Create initial loop
     loop1 = asyncio.new_event_loop()
     asyncio.set_event_loop(loop1)

     # Create singleton and store data
     registry = MockRegistry()
+    registry.clear()  # Reset for deterministic test
     obj_id = registry.register("loop1_object")

     # Close loop1
     loop1.close()

     # Create new loop
     loop2 = asyncio.new_event_loop()
     asyncio.set_event_loop(loop2)

     # Singleton should still work
     result = registry.get(obj_id)
     assert result == "loop1_object"

     # Cleanup
     loop2.close()
+    asyncio.set_event_loop(original_loop)
```
🤖 Prompt for AI Agents
In `@tests/test_rpc_contract.py` around lines 110 - 154, Both tests set a new
global asyncio event loop without restoring the prior loop and rely on
MockRegistry which appears to hold global/singleton state; save the original
loop at the start of each test (e.g., original_loop = asyncio.get_event_loop()
or asyncio.get_event_loop_policy().get_event_loop()), then after closing loop2
call asyncio.set_event_loop(original_loop) to restore it; also ensure
MockRegistry global state is reset between tests by calling its clear/reset
method (e.g., MockRegistry.clear() or MockRegistry.reset_instance()) or by
constructing an isolated registry instance if available; update
test_singleton_survives_loop_recreation and
test_singleton_data_persists_across_loops accordingly to restore the event loop
and clear MockRegistry state at the end.
```python
def test_type_error_propagates(self):
    """TypeError in RPC methods should propagate."""

    class TypedService(ProxiedSingleton):
        def typed_method(self, value: int) -> int:
            return value + 1

    service = TypedService()

    # Wrong type should raise TypeError
    with pytest.raises(TypeError):
        service.typed_method("not an int")
```
Misleading test: TypeError comes from operation, not type checking.
Python doesn't enforce type hints at runtime. The TypeError raised here comes from "not an int" + 1 (cannot add string and int), not from type validation. The comment "Wrong type should raise TypeError" is misleading.
If the intent is to test exception propagation, the test works but should clarify the actual error source:
📝 Suggested clarification
```diff
 def test_type_error_propagates(self):
-    """TypeError in RPC methods should propagate."""
+    """TypeError from operations should propagate."""

     class TypedService(ProxiedSingleton):
         def typed_method(self, value: int) -> int:
             return value + 1

     service = TypedService()

-    # Wrong type should raise TypeError
+    # Operation fails with TypeError (string + int)
     with pytest.raises(TypeError):
         service.typed_method("not an int")
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def test_type_error_propagates(self):
    """TypeError from operations should propagate."""

    class TypedService(ProxiedSingleton):
        def typed_method(self, value: int) -> int:
            return value + 1

    service = TypedService()

    # Operation fails with TypeError (string + int)
    with pytest.raises(TypeError):
        service.typed_method("not an int")
```
🤖 Prompt for AI Agents
In `@tests/test_rpc_contract.py` around lines 172 - 183, The test
test_type_error_propagates is misleading because the TypeError is raised by the
operation in TypedService.typed_method (string + int) not by runtime
type-checking; update the test docstring/comment to state it verifies that
exceptions raised inside RPC methods propagate (e.g., "Exceptions raised during
method execution should propagate" or mention the specific operation causing the
TypeError) so the intent is clear while leaving the test logic (TypedService and
typed_method) unchanged.
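The point generalizes beyond the RPC layer; a standalone snippet (no ProxiedSingleton involved) showing that annotations are ignored at call time and the TypeError originates in the `+` operation:

```python
def typed_method(value: int) -> int:
    # The `int` annotation is metadata only; CPython never validates it at runtime.
    return value + 1


# A float passes straight through the `int` hint with no complaint:
assert typed_method(2.5) == 3.5

# A string also passes the "type check"; the failure happens inside the body,
# when str and int cannot be added together.
try:
    typed_method("not an int")
except TypeError as exc:
    assert "str" in str(exc)  # the message describes the operand types
```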
```python
def test_attrdict_converted(self):
    """AttrDict is converted to plain dict."""
    data = AttrDict({"key": "value"})
    result = _prepare_for_rpc(data)
    assert isinstance(result, dict)
    assert result["key"] == "value"
```
🧹 Nitpick | 🔵 Trivial
Clarify AttrDict conversion test assertion.
AttrDict is a dict subclass, so isinstance(result, dict) will return True even if the result is still an AttrDict. If the intent is to verify conversion to a plain dict, consider checking type(result) is dict instead.
💡 Suggested refinement
```diff
 def test_attrdict_converted(self):
     """AttrDict is converted to plain dict."""
     data = AttrDict({"key": "value"})
     result = _prepare_for_rpc(data)
-    assert isinstance(result, dict)
+    # AttrDict is a dict subclass; only a strict type check proves conversion
+    assert type(result) is dict
     assert result["key"] == "value"
```
🤖 Prompt for AI Agents
In `@tests/test_rpc_message_format.py` around lines 59 - 64, The test
test_attrdict_converted currently uses isinstance(result, dict) which will be
True for an AttrDict subclass; update the assertion to ensure the result is a
plain dict by asserting type(result) is dict (or alternatively assert not
isinstance(result, AttrDict)) after calling _prepare_for_rpc(AttrDict(...));
modify the test body in test_attrdict_converted to replace the isinstance check
with a strict type check so it verifies that _prepare_for_rpc returns a plain
dict, and keep the value assertion result["key"] == "value" unchanged.
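The subclass pitfall is easy to demonstrate without the project code; a minimal stand-in for AttrDict (illustrative only, not the PR's implementation):

```python
class AttrDict(dict):
    # Minimal stand-in: a dict subclass with attribute-style access.
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name) from None


original = AttrDict({"key": "value"})

# isinstance() cannot detect conversion, because the subclass IS a dict:
assert isinstance(original, dict)
assert type(original) is not dict

# After an explicit dict() conversion, the strict check tells them apart:
converted = dict(original)
assert type(converted) is dict
assert converted == {"key": "value"}
```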
```python
def test_attribute_container_handled(self):
    """AttributeContainer is handled appropriately."""
    data = AttributeContainer({"a": 1, "b": 2})
    result = _prepare_for_rpc(data)
    # May be dict or container depending on implementation
    if isinstance(result, dict):
        assert result["a"] == 1
    else:
        assert hasattr(result, "_data")
```
🧹 Nitpick | 🔵 Trivial
Weak assertion in AttributeContainer test.
The test accepts both dict and container outcomes, which doesn't verify specific contract behavior. If _prepare_for_rpc has a defined behavior for AttributeContainer, the test should assert that specific outcome. If behavior is intentionally flexible, consider documenting why in the docstring.
🤖 Prompt for AI Agents
In `@tests/test_rpc_message_format.py` around lines 66 - 74, The test
test_attribute_container_handled currently allows either a dict or an
AttributeContainer, which is too weak; decide on the expected contract for
_prepare_for_rpc (preferably that it returns a plain dict) and make the test
assert that exact behavior: call
_prepare_for_rpc(AttributeContainer({"a":1,"b":2})) and assert
isinstance(result, dict) and result == {"a":1,"b":2}; update the test docstring
to state the expected conversion to dict so future readers know this is
intentional (if the intended contract is to preserve AttributeContainer instead,
replace the assertions to assert hasattr(result, "_data") and that result._data
== {"a":1,"b":2} and update the docstring accordingly).
```python
class TestAttributeContainerBehavior:
    """Tests for AttributeContainer helper class behavior."""

    def test_attribute_access(self):
        """AttributeContainer wraps dict with attribute access."""
        container = AttributeContainer({"x": 10, "y": 20})

        assert container.x == 10
        assert container.y == 20

    def test_data_property(self):
        """_data property returns underlying dict."""
        data = {"key": "value"}
        container = AttributeContainer(data)

        assert container._data == data

    def test_missing_attribute_raises(self):
        """Missing attributes raise AttributeError."""
        container = AttributeContainer({"existing": True})

        with pytest.raises(AttributeError):
            _ = container.missing
```
🧹 Nitpick | 🔵 Trivial
Consider expanding AttributeContainer test coverage.
The AttributeContainer class (per the relevant snippets) implements __getitem__, copy(), get(), keys(), items(), values(), __iter__, __len__, __contains__, and pickle support (__getstate__/__setstate__). Current tests cover only attribute access and missing attributes. Consider adding tests for dict-like access methods if they're part of the public contract.
🤖 Prompt for AI Agents
In `@tests/test_rpc_message_format.py` around lines 136 - 158, Add tests that
exercise AttributeContainer's dict-like and serialization behaviors: verify
__getitem__ returns values and raises KeyError for missing keys, test copy()
produces an equal but independent container, test get() with and without
default, assert keys()/items()/values() produce the expected iterables and that
__iter__ and __len__ reflect the underlying data, check __contains__ for
present/absent keys, and test pickle round-trip using __getstate__/__setstate__
preserves contents; reference the AttributeContainer class and its methods
(__getitem__, copy, get, keys, items, values, __iter__, __len__, __contains__,
__getstate__, __setstate__) when adding these assertions.
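A sketch of the suggested extra coverage, written against a minimal dict-wrapping stand-in (the real AttributeContainer lives in the PR; only the method names here are taken from the review comment):

```python
import pickle


class MiniContainer:
    """Minimal stand-in used only to illustrate the suggested tests."""

    def __init__(self, data):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]  # KeyError propagates for missing keys

    def get(self, key, default=None):
        return self._data.get(key, default)

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

    def __contains__(self, key):
        return key in self._data

    def __getstate__(self):
        return self._data

    def __setstate__(self, state):
        self._data = state


def test_dict_like_access():
    c = MiniContainer({"a": 1, "b": 2})
    assert c["a"] == 1 and "b" in c and len(c) == 2
    assert c.get("missing", 0) == 0
    assert sorted(c) == ["a", "b"]


def test_pickle_round_trip():
    c = MiniContainer({"a": 1})
    restored = pickle.loads(pickle.dumps(c))
    assert restored._data == {"a": 1}
```

The same assertions transfer directly to the real class once the intended contract for each method is pinned down.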
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In @.github/workflows/build-wheels.yml:
- Around line 14-16: The concurrency block currently sets cancel-in-progress:
false which queues redundant runs; change it to cancel only for PR builds by
making cancel-in-progress conditional on the event (e.g. use github.event_name
or github.ref to detect pull_request) so PR pushes cancel superseded runs while
main/releases keep non-canceling behavior—update the concurrency attributes
(group, cancel-in-progress) in the build-wheels workflow using the conditional
expression involving github.event_name (or github.ref) to toggle
cancel-in-progress appropriately.
- Around line 122-148: The piped find | while IFS= read -r -d '' src; do ...;
done relies on implicit shell options for error propagation and can be fragile;
replace the pipeline with process substitution so the loop reads from < <(find
downloaded -type f \( -name "*.whl" -o -name "*.tar.gz" \) -print0) (the step
named "Collect distributions" and the while loop reading "src") to ensure any
exit 1 inside the loop properly terminates the job and to make the intent
explicit and portable across shells.
- Around line 122-148: The failure inside the piped loop in the "Collect
distributions" step (the find ... | while IFS= read -r -d '' src; do ... exit 1
... done) only exits the subshell so the job can continue; change the pipeline
so the loop runs in the main shell (e.g. replace the pipe with process
substitution: while IFS= read -r -d '' src; do ... done < <(find downloaded
-type f \( -name "*.whl" -o -name "*.tar.gz" \) -print0)) or enable pipefail at
the top of the run (set -o pipefail) so the outer step fails when that exit 1 is
hit; ensure the existing exit 1 remains to stop the step on conflicts.
- Around line 86-94: The workflow currently finds an sdist into sdist_path but
never installs or validates it; update the job to actually test the sdist by
creating and activating an isolated venv in temp_dir, installing the found
tarball ($sdist_path) into that venv (e.g., python -m pip install
"$sdist_path"), then run an import/test command (replace the current python -c
"import pyisolate; print(pyisolate.__version__") inside that venv to verify the
package imports; ensure you still fail the step on install/import errors and
remove the dead-only check if you prefer the alternate option.
- Around line 23-27: The workflow is building redundant platform-identical
wheels: `strategy.matrix.os` currently lists ubuntu/macos/windows but `python -m
build` produces `py3-none-any` wheels for this pure-Python package, so reduce
the build matrix to a single OS (e.g., keep `ubuntu-latest`) in the
`strategy.matrix.os` for the wheel-building job and run platform-specific test
jobs separately instead of rebuilding artifacts; also extend
`strategy.matrix.python-version` to include "3.13" (in the same array as
"3.10","3.11","3.12") so the build/test matrix covers the newer Python release,
and ensure the `publish` job still handles deduplication as before.
- Around line 149-152: The workflow currently uses the mutable ref
"pypa/gh-action-pypi-publish@release/v1" for the Publish to PyPI via OIDC step;
replace that ref with the specific commit SHA for the current release to pin the
action (use the commit SHA from the action's repository release you intend to
use) and update any other high-risk actions (actions/checkout@v4,
actions/setup-python@v5, actions/upload-artifact@v4,
actions/download-artifact@v4) to their respective commit SHAs as well so the
publish step (and other artifact steps) run a fixed, auditable version instead
of a branch/tag ref.
```yaml
concurrency:
  group: build-wheels-${{ github.workflow }}-${{ github.ref || github.run_id }}
  cancel-in-progress: false
```
🧹 Nitpick | 🔵 Trivial
Consider enabling cancel-in-progress for PR builds.
With cancel-in-progress: false, multiple pushes to the same PR branch will queue builds instead of canceling stale runs. This can waste CI minutes during active development. Consider splitting concurrency or using a conditional:
♻️ Suggested improvement
```diff
 concurrency:
   group: build-wheels-${{ github.workflow }}-${{ github.ref || github.run_id }}
-  cancel-in-progress: false
+  cancel-in-progress: ${{ github.event_name == 'pull_request' }}
```
This preserves non-canceling behavior for main pushes and releases while canceling superseded PR runs.
📝 Committable suggestion
```yaml
concurrency:
  group: build-wheels-${{ github.workflow }}-${{ github.ref || github.run_id }}
  cancel-in-progress: ${{ github.event_name == 'pull_request' }}
```
🤖 Prompt for AI Agents
In @.github/workflows/build-wheels.yml around lines 14 - 16, The concurrency
block currently sets cancel-in-progress: false which queues redundant runs;
change it to cancel only for PR builds by making cancel-in-progress conditional
on the event (e.g. use github.event_name or github.ref to detect pull_request)
so PR pushes cancel superseded runs while main/releases keep non-canceling
behavior—update the concurrency attributes (group, cancel-in-progress) in the
build-wheels workflow using the conditional expression involving
github.event_name (or github.ref) to toggle cancel-in-progress appropriately.
```yaml
strategy:
  fail-fast: false
  matrix:
    os: [ubuntu-latest, macos-latest, windows-latest]
    python-version: ["3.10", "3.11", "3.12"]
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Get the repository structure overview
echo "=== Repository structure ==="
git ls-files | head -30
echo -e "\n=== Looking for setup.py, setup.cfg, pyproject.toml ==="
fd -t f -e py -g 'setup.py' -o -g 'setup.cfg' -o -g 'pyproject.toml'
echo -e "\n=== Checking for C/C++ source files ==="
fd -e c -e cpp -e h -e pyx | head -20
```
Repository: Comfy-Org/pyisolate
🏁 Script executed:
#!/bin/bash
echo "=== Looking for pyproject.toml ==="
find . -maxdepth 2 -name 'pyproject.toml' -type f
echo -e "\n=== Looking for setup.py or setup.cfg ==="
find . -maxdepth 2 -name 'setup.py' -o -name 'setup.cfg' -type f
echo -e "\n=== Checking for C/C++ files ==="
find . -type f \( -name '*.c' -o -name '*.cpp' -o -name '*.h' -o -name '*.pyx' -o -name '*.pxd' \)
echo -e "\n=== Looking for build-related files ==="
find . -maxdepth 2 -name 'build' -o -name 'CMakeLists.txt' -o -name 'Makefile'
echo -e "\n=== Checking MANIFEST.in ==="
cat MANIFEST.inRepository: Comfy-Org/pyisolate
Length of output: 469
🏁 Script executed:
#!/bin/bash
echo "=== pyproject.toml ==="
cat pyproject.toml
echo -e "\n=== setup.py ==="
cat setup.pyRepository: Comfy-Org/pyisolate
Length of output: 4279
🏁 Script executed:
#!/bin/bash
echo "=== build-wheels.yml workflow ==="
cat .github/workflows/build-wheels.ymlRepository: Comfy-Org/pyisolate
Length of output: 4378
Redundant cross-platform builds for a pure Python package.
Pyisolate has no C extensions and uses a standard setuptools build backend, which means python -m build produces identical py3-none-any wheels on every platform. Running 9 matrix legs that generate identical artifacts wastes CI minutes. The publish job even acknowledges this by deduplicating "byte-identical files produced in multiple matrix legs." Consider building on a single OS and running platform-specific test jobs separately if cross-platform validation is needed.
Additionally, Python 3.13 (released Oct 2024) should be added to the test matrix if the project supports it, as the requires-python = ">=3.10" constraint permits it.
🤖 Prompt for AI Agents
In @.github/workflows/build-wheels.yml around lines 23 - 27, The workflow is
building redundant platform-identical wheels: `strategy.matrix.os` currently
lists ubuntu/macos/windows but `python -m build` produces `py3-none-any` wheels
for this pure-Python package, so reduce the build matrix to a single OS (e.g.,
keep `ubuntu-latest`) in the `strategy.matrix.os` for the wheel-building job and
run platform-specific test jobs separately instead of rebuilding artifacts; also
extend `strategy.matrix.python-version` to include "3.13" (in the same array as
"3.10","3.11","3.12") so the build/test matrix covers the newer Python release,
and ensure the `publish` job still handles deduplication as before.
```bash
sdist_path="$(find artifacts -type f -name '*.tar.gz' | head -n 1)"
if [ -z "$sdist_path" ]; then
  echo "No sdist artifact found."
  exit 1
fi

temp_dir="$(mktemp -d)"
cd "$temp_dir"
python -c "import pyisolate; print(pyisolate.__version__)"
```
sdist is located but never tested.
Lines 86-90 find the sdist and fail if absent, but the path is never used — the sdist is neither installed nor verified. Either install it in a separate venv to confirm it builds/imports correctly, or remove this dead check (the publish job already validates sdist presence).
♻️ Option A: Remove dead check
```diff
 python -m pip install "$wheel_path"

-sdist_path="$(find artifacts -type f -name '*.tar.gz' | head -n 1)"
-if [ -z "$sdist_path" ]; then
-  echo "No sdist artifact found."
-  exit 1
-fi
-
 temp_dir="$(mktemp -d)"
 cd "$temp_dir"
 python -c "import pyisolate; print(pyisolate.__version__)"
```
♻️ Option B: Actually test the sdist
```diff
 sdist_path="$(find artifacts -type f -name '*.tar.gz' | head -n 1)"
 if [ -z "$sdist_path" ]; then
   echo "No sdist artifact found."
   exit 1
 fi

 temp_dir="$(mktemp -d)"
 cd "$temp_dir"
 python -c "import pyisolate; print(pyisolate.__version__)"
+
+# Verify sdist builds and installs correctly
+python -m venv .venv-sdist
+source .venv-sdist/bin/activate
+python -m pip install --upgrade pip
+python -m pip install "$OLDPWD/$sdist_path"
+python -c "import pyisolate; print(pyisolate.__version__)"
```
📝 Committable suggestion
Option A applied:
```bash
python -m pip install "$wheel_path"

temp_dir="$(mktemp -d)"
cd "$temp_dir"
python -c "import pyisolate; print(pyisolate.__version__)"
```
Option B applied:
```bash
sdist_path="$(find artifacts -type f -name '*.tar.gz' | head -n 1)"
if [ -z "$sdist_path" ]; then
  echo "No sdist artifact found."
  exit 1
fi

temp_dir="$(mktemp -d)"
cd "$temp_dir"
python -c "import pyisolate; print(pyisolate.__version__)"

# Verify sdist builds and installs correctly
python -m venv .venv-sdist
source .venv-sdist/bin/activate
python -m pip install --upgrade pip
python -m pip install "$OLDPWD/$sdist_path"
python -c "import pyisolate; print(pyisolate.__version__)"
```
🤖 Prompt for AI Agents
In @.github/workflows/build-wheels.yml around lines 86 - 94, The workflow
currently finds an sdist into sdist_path but never installs or validates it;
update the job to actually test the sdist by creating and activating an isolated
venv in temp_dir, installing the found tarball ($sdist_path) into that venv
(e.g., python -m pip install "$sdist_path"), then run an import/test command
(replace the current python -c "import pyisolate; print(pyisolate.__version__")
inside that venv to verify the package imports; ensure you still fail the step
on install/import errors and remove the dead-only check if you prefer the
alternate option.
```yaml
- name: Collect distributions
  shell: bash
  run: |
    mkdir -p dist
    find downloaded -type f \( -name "*.whl" -o -name "*.tar.gz" \) -print0 | while IFS= read -r -d '' src; do
      base="$(basename "$src")"
      dest="dist/$base"
      if [ -e "$dest" ]; then
        # Deduplicate byte-identical files produced in multiple matrix legs.
        if cmp -s "$src" "$dest"; then
          continue
        fi
        echo "Conflicting distribution filename with different content: $base"
        exit 1
      fi
      cp "$src" "$dest"
    done

    wheel_count="$(find dist -maxdepth 1 -type f -name '*.whl' | wc -l)"
    sdist_count="$(find dist -maxdepth 1 -type f -name '*.tar.gz' | wc -l)"
    if [ "$wheel_count" -eq 0 ] || [ "$sdist_count" -eq 0 ]; then
      echo "Expected at least one wheel and one sdist for publish."
      exit 1
    fi

    ls -l dist
```
🧹 Nitpick | 🔵 Trivial
Correction to previous comment: GitHub Actions' bash shell prepends set -eo pipefail by default, so the exit 1 inside the piped while loop will propagate in this context. However, this reliance on implicit shell options is fragile — using process substitution makes the intent explicit and portable:
♻️ More robust alternative using process substitution
```diff
-    find downloaded -type f \( -name "*.whl" -o -name "*.tar.gz" \) -print0 | while IFS= read -r -d '' src; do
+    while IFS= read -r -d '' src; do
       base="$(basename "$src")"
       dest="dist/$base"
       if [ -e "$dest" ]; then
         if cmp -s "$src" "$dest"; then
           continue
         fi
         echo "Conflicting distribution filename with different content: $base"
         exit 1
       fi
       cp "$src" "$dest"
-    done
+    done < <(find downloaded -type f \( -name "*.whl" -o -name "*.tar.gz" \) -print0)
```
📝 Committable suggestion
```yaml
- name: Collect distributions
  shell: bash
  run: |
    mkdir -p dist
    while IFS= read -r -d '' src; do
      base="$(basename "$src")"
      dest="dist/$base"
      if [ -e "$dest" ]; then
        # Deduplicate byte-identical files produced in multiple matrix legs.
        if cmp -s "$src" "$dest"; then
          continue
        fi
        echo "Conflicting distribution filename with different content: $base"
        exit 1
      fi
      cp "$src" "$dest"
    done < <(find downloaded -type f \( -name "*.whl" -o -name "*.tar.gz" \) -print0)

    wheel_count="$(find dist -maxdepth 1 -type f -name '*.whl' | wc -l)"
    sdist_count="$(find dist -maxdepth 1 -type f -name '*.tar.gz' | wc -l)"
    if [ "$wheel_count" -eq 0 ] || [ "$sdist_count" -eq 0 ]; then
      echo "Expected at least one wheel and one sdist for publish."
      exit 1
    fi

    ls -l dist
```
🤖 Prompt for AI Agents
In @.github/workflows/build-wheels.yml around lines 122 - 148, The piped find |
while IFS= read -r -d '' src; do ...; done relies on implicit shell options for
error propagation and can be fragile; replace the pipeline with process
substitution so the loop reads from < <(find downloaded -type f \( -name "*.whl"
-o -name "*.tar.gz" \) -print0) (the step named "Collect distributions" and the
while loop reading "src") to ensure any exit 1 inside the loop properly
terminates the job and to make the intent explicit and portable across shells.
exit 1 inside piped while loop won't fail the step.
The find ... | while read ... construct runs the while body in a subshell (in bash). The exit 1 on line 135 exits only that subshell — the outer script continues, potentially publishing conflicting distributions. Use process substitution or set -o pipefail to propagate the failure:
🐛 Proposed fix using process substitution
```diff
 - name: Collect distributions
   shell: bash
   run: |
+    set -euo pipefail
     mkdir -p dist
-    find downloaded -type f \( -name "*.whl" -o -name "*.tar.gz" \) -print0 | while IFS= read -r -d '' src; do
+    while IFS= read -r -d '' src; do
       base="$(basename "$src")"
       dest="dist/$base"
       if [ -e "$dest" ]; then
         # Deduplicate byte-identical files produced in multiple matrix legs.
         if cmp -s "$src" "$dest"; then
           continue
         fi
         echo "Conflicting distribution filename with different content: $base"
         exit 1
       fi
       cp "$src" "$dest"
-    done
+    done < <(find downloaded -type f \( -name "*.whl" -o -name "*.tar.gz" \) -print0)
     wheel_count="$(find dist -maxdepth 1 -type f -name '*.whl' | wc -l)"
```
🤖 Prompt for AI Agents
In @.github/workflows/build-wheels.yml around lines 122 - 148, The failure
inside the piped loop in the "Collect distributions" step (the find ... | while
IFS= read -r -d '' src; do ... exit 1 ... done) only exits the subshell so the
job can continue; change the pipeline so the loop runs in the main shell (e.g.
replace the pipe with process substitution: while IFS= read -r -d '' src; do ...
done < <(find downloaded -type f \( -name "*.whl" -o -name "*.tar.gz" \)
-print0)) or enable pipefail at the top of the run (set -o pipefail) so the
outer step fails when that exit 1 is hit; ensure the existing exit 1 remains to
stop the step on conflicts.
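Both claims above (the `exit 1` ending only the pipeline's subshell, and `set -eo pipefail` rescuing it) can be checked outside CI; a small Python harness that runs the same pipeline shape under bash, assuming a `bash` binary is on PATH:

```python
import subprocess

# Same shape as the workflow step: a pipeline feeding a while loop that exits.
SCRIPT = 'printf "one\\n" | while read -r line; do exit 1; done\necho reached-after-loop'

# Plain bash: `exit 1` only ends the pipeline's subshell, so the script
# continues past the failed pipeline and finishes successfully.
plain = subprocess.run(["bash", "-c", SCRIPT], capture_output=True, text=True)
assert plain.returncode == 0
assert "reached-after-loop" in plain.stdout

# With GitHub Actions' default `set -eo pipefail` prepended, the failing
# pipeline aborts the script before the echo runs.
strict = subprocess.run(
    ["bash", "-c", "set -eo pipefail\n" + SCRIPT], capture_output=True, text=True
)
assert strict.returncode != 0
assert "reached-after-loop" not in strict.stdout
```

Process substitution (`done < <(find …)`) keeps the loop in the parent shell, so the `exit 1` ends the whole script even without relying on injected shell options.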
```yaml
- name: Publish to PyPI via OIDC
  uses: pypa/gh-action-pypi-publish@release/v1
  with:
    packages-dir: dist
```
🧩 Analysis chain
🌐 Web query:
pypa gh-action-pypi-publish release/v1 commit SHA latest 2025
💡 Result:
For pypa/gh-action-pypi-publish, the latest 2025 release was v1.13.0 (released Sep 4, 2025) [1]. That release corresponds to commit ed0c53931b1dc9bd32cbe73a98c7f6766f8a527e [2].
So, pypa/gh-action-pypi-publish@release/v1 (latest in 2025) ⇒ ed0c53931b1dc9bd32cbe73a98c7f6766f8a527e. [1][2]
Sources: [1] [2]
🌐 Web query:
GitHub Actions best practices pinning commit SHA branch reference security
💡 Result:
Why not use branches/tags (e.g., @main, @v3) for uses:
- Branches and movable tags can be updated to point to new code—so your workflow may start running different (potentially malicious) code without any change in your repo. GitHub recommends pinning to a full-length commit SHA as the most reliable immutable reference. [1][2]
Best practice: pin every third‑party action (and reusable workflow) to a full commit SHA
Use:
```yaml
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332  # v4.1.7
```
Key points:
- Use the full 40-char SHA (not a short SHA). [1][2]
- Prefer a SHA that corresponds to an upstream tagged release, and note the human-readable version in a comment (helps audits and upgrades). [2]
- Verify the SHA is from the action’s official repository (not a fork). [1]
Keep pinned SHAs updated safely
- Enable Dependabot version updates for GitHub Actions so you still get automated upgrade PRs even when pinned to SHAs (common pattern is “pin to SHA + Dependabot updates”). [2]
Enforce it so PRs can’t add unpinned uses:
- GitHub Actions policies can require SHA pinning, causing workflows that use unpinned actions to fail. This can be enforced at enterprise/org/repo level via the “allowed actions and reusable workflows” policy. [2][3]
Optional helpers (repo-level checks)
- Use a “SHA pinning” checker action in CI to detect any uses: entries that aren’t pinned to full SHAs. [4]
Sources: GitHub docs on security hardening/pinning [1], GitHub Well-Architected guidance [2], GitHub Changelog on SHA-pinning enforcement policy [3], “Ensure SHA Pinned Actions” marketplace tool [4].
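The "pin to SHA + Dependabot updates" pattern mentioned above can be enabled with a small repository config; this is an illustrative fragment (adjust the schedule to taste):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "github-actions"  # scans workflow files for action refs
    directory: "/"                       # location of .github/workflows
    schedule:
      interval: "weekly"                 # open upgrade PRs weekly
```

With this in place, Dependabot opens PRs that bump the pinned SHAs (and the version comments) whenever upstream actions publish new releases.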
🏁 Script executed:
# Check if we can find any documentation or references within the repo about action pinning
git ls-files | head -20
Repository: Comfy-Org/pyisolate
Length of output: 515
Pin gh-action-pypi-publish to a commit SHA for supply-chain safety.
Using @release/v1 (a branch ref) means the action's contents can change without any update to this workflow. For a publishing step with id-token: write permissions, this is a security-sensitive path. Pin to the current release commit SHA:
🔒 Suggested fix
- name: Publish to PyPI via OIDC
- uses: pypa/gh-action-pypi-publish@release/v1
+ uses: pypa/gh-action-pypi-publish@ed0c53931b1dc9bd32cbe73a98c7f6766f8a527e # v1.13.0
with:
    packages-dir: dist
The same recommendation applies to actions/checkout@v4, actions/setup-python@v5, actions/upload-artifact@v4, and actions/download-artifact@v4 — though the publish action is most critical given its elevated permissions.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
      - name: Publish to PyPI via OIDC
        uses: pypa/gh-action-pypi-publish@ed0c53931b1dc9bd32cbe73a98c7f6766f8a527e # v1.13.0
        with:
          packages-dir: dist
🤖 Prompt for AI Agents
In @.github/workflows/build-wheels.yml around lines 149 - 152, The workflow
currently uses the mutable ref "pypa/gh-action-pypi-publish@release/v1" for the
Publish to PyPI via OIDC step; replace that ref with the specific commit SHA for
the current release to pin the action (use the commit SHA from the action's
repository release you intend to use) and update any other high-risk actions
(actions/checkout@v4, actions/setup-python@v5, actions/upload-artifact@v4,
actions/download-artifact@v4) to their respective commit SHAs as well so the
publish step (and other artifact steps) run a fixed, auditable version instead
of a branch/tag ref.
Actionable comments posted: 22
🤖 Fix all issues with AI agents
In `@pyisolate/_internal/environment.py`:
- Line 349: The code uses an assert to check proc.stdout (`assert proc.stdout is
not None`) which is unsafe in production since asserts are removed with -O;
replace this with an explicit runtime check such as `if proc.stdout is None:
raise RuntimeError("expected subprocess stdout but got None")` (or a more
specific exception) before the `for line in proc.stdout` loop so the code fails
deterministically and with a clear message; update any surrounding code in the
same function that assumes proc.stdout is non-None (reference symbol: proc and
proc.stdout).
- Line 174: The import "from packaging.requirements import Requirement" in
environment.py introduces an undeclared runtime dependency; either add
"packaging" to pyproject.toml dependencies or remove the import and replace its
usage with a stdlib or vendored implementation. If you choose to declare the
dependency, add packaging to the project dependencies and run lock/install. If
you choose to avoid the dependency, implement a minimal Requirement
parser/validator (or vendor packaging.requirements.Requirement behavior) used by
functions in environment.py (search for references to Requirement and any code
that parses requirement strings) and remove the external import. Ensure all call
sites that expect Requirement objects are updated to use the new local
implementation or the declared packaging package.
In `@pyisolate/_internal/host.py`:
- Line 401: The inline comment on the parameter allow_gpu=True should not
mention ComfyUI; update the comment to a neutral description (e.g., "Default to
allowing GPU" or remove the comment entirely) where the parameter allow_gpu is
defined or passed so the code follows the guideline of avoiding ComfyUI
references.
- Around line 43-65: The deduplication currently computes hashlib.sha256 for
every log record in _DeduplicationFilter.filter which is expensive; replace the
heavy digest with Python's built-in hash of the message (use msg_hash =
hash(msg_content)) and adjust last_seen to map ints to floats (last_seen:
dict[int, float]); remove the hashlib usage/import and keep the same timeout
logic, cleanup and suppression behavior in filter and use the integer hash as
the deduplication key.
- Around line 424-426: Remove the commented-out debug logging line left above
the subprocess launch (the commented line starting with "#
logger.error(f\"[BWRAP-DEBUG] Final subprocess.Popen args: {cmd}\")") in
pyisolate._internal.host.py; either delete it entirely or replace it with an
appropriate logger.debug(...) call so there is no commented-out code left in the
launch/process section (the block marked "# Launch process") and logging uses
the proper logging level.
- Around line 81-86: The TypedDict ExtensionConfig currently declares
share_cuda_ipc as required but the runtime in host.py treats it as optional;
update the type to NotRequired[bool] on ExtensionConfig so the type contract
matches the code (use typing_extensions.NotRequired or typing.NotRequired
depending on supported Python versions and add the import), or alternatively
remove the runtime presence check around "share_cuda_ipc" in host.py if you
intend it to be required — reference the ExtensionConfig TypedDict and the
share_cuda_ipc key when making the change.
In `@pyisolate/_internal/model_serialization.py`:
- Around line 24-25: Remove the dead typing-only block that does nothing: delete
the "if TYPE_CHECKING: # pragma: no cover - typing aids" block (including the
subsequent "pass # type: ignore[import-not-found]") from
pyisolate/_internal/model_serialization.py so the module no longer contains a
no-op TYPE_CHECKING guard; ensure no other TYPE_CHECKING uses are affected and
run tests/linters.
- Around line 88-97: The current RemoteObjectHandle resolution swallows
exceptions (contextlib.suppress and bare except) so failures are invisible;
update the block in the RemoteObjectHandle branch to (1) remove
contextlib.suppress and instead catch exceptions around setting
resolved._pyisolate_remote_handle so any attribute-assignment errors are logged
at warning level, and (2) catch exceptions from extension.get_remote_object but
log the exception (logger.warning or logging.getLogger(__name__).warning with
exception info) before returning the original handle; keep the existing return
behavior but ensure exceptions are logged so resolution failures are observable.
In `@pyisolate/_internal/tensor_serializer.py`:
- Line 93: The class attribute dest = ("TensorKeeper",) on TensorSerializer is
undocumented—either add an inline comment explaining its purpose (e.g., used for
RPC routing/registration target for TensorKeeper) and what shape/value is
expected, or remove the attribute if it's unused; search for references to dest
or attribute access on TensorSerializer/TensorKeeper to confirm usage and then
either annotate the line with a concise comment or delete the attribute and
update any registration/routing code that relied on it (look for usages of
"dest" and the class name TensorSerializer to locate dependent code).
- Around line 104-106: Replace the f-string in the logger.debug call inside
TensorKeeper with lazy %-style logging so the interpolation only happens when
the debug level is enabled; specifically change the call to use a format string
with %s placeholders and pass t.shape, len(self._keeper), and id(t) as separate
arguments to logger.debug (reference the existing logger.debug invocation and
the TensorKeeper context).
- Around line 200-202: Remove the redundant "import logging" and the local
"logger = logging.getLogger(__name__)" inside the function in
pyisolate._internal.tensor_serializer; instead use the module-level logger
already defined (logger) to avoid shadowing and duplicated imports, so delete
those two lines and reference the existing logger variable where needed.
In `@pyisolate/_internal/torch_gate.py`:
- Around line 13-18: Replace the broad except in the torch import block so only
import errors are swallowed: catch ImportError (and optionally
ModuleNotFoundError) for the importlib.import_module calls for "torch" and
"torch.multiprocessing.reductions" and return (None, None) in that case, but do
not hide other exceptions—either let other exceptions propagate (re-raise) or
log and re-raise so unexpected errors inside torch's initialization aren’t
silently swallowed.
In `@pyisolate/_internal/uds_client.py`:
- Around line 127-132: The _async_uds_entrypoint signature uses Any for
transport and extension_type; replace those with concrete types: change
transport: Any to transport: JSONSocketTransport (or a JSONSocketTransport
protocol/interface) and change extension_type: type[Any] to extension_type:
type[ExtensionBase]; import or reference JSONSocketTransport and ExtensionBase
(or their protocol names) from their modules and ensure ExtensionConfig remains
as-is; update any related runtime checks/typing imports so the function
signature uses these concrete types instead of Any.
- Around line 231-245: The code starts the RPC loop with rpc.run() but if module
loading (module_spec / importlib.util.module_from_spec /
module_spec.loader.exec_module) or extension.on_module_loaded raises, rpc is
never stopped; add a finally block after the try/except that always calls the
RPC shutdown/cleanup API (e.g., rpc.stop(), rpc.shutdown(), or
rpc.run_until_stopped cancellation depending on the RPC implementation) to
ensure rpc.run() is terminated and resources/transport are cleaned up; keep the
existing asyncio.CancelledError handling and re-raise the original exception
after ensuring rpc has been cleaned up.
- Around line 56-71: The code opens a socket (client_sock) and calls
client_sock.connect(...) without a timeout and without closing the socket on
failure, which leaks file descriptors and can hang; modify the connect logic in
the TCP branch (where uds_address.startswith("tcp://")) and the Unix Domain
Socket branch to set a short connect timeout (e.g.,
client_sock.settimeout(CONNECT_TIMEOUT)) before calling connect, and wrap the
connect call in try/except to close client_sock on any exception (re-raising or
converting to a RuntimeError as appropriate); update any constants or function
parameters to provide CONNECT_TIMEOUT and ensure client_sock.settimeout(None) or
socket.setdefaulttimeout restored after successful connect if needed.
- Around line 224-229: Replace the two assert checks after calling
importlib.util.spec_from_file_location with explicit runtime errors: check that
module_spec is not None and that module_spec.loader is not None, and if either
is false raise a descriptive ImportError (or RuntimeError) mentioning
sys_module_name and module_path; this ensures code in uds_client.py (the
module_spec from importlib.util.spec_from_file_location and subsequent call to
importlib.util.module_from_spec) fails with a clear message instead of being
stripped by -O and causing a confusing AttributeError.
- Around line 168-172: The code uses an assert to check torch is present before
using torch.inference_mode() (context: ContextManager[Any] = nullcontext(), if
config.get("share_torch", False): assert torch is not None), which is unsafe
because asserts can be stripped; replace the assert with an explicit runtime
check that raises a clear exception (e.g., RuntimeError or ImportError) if torch
is None, and only then set context = cast(ContextManager[Any],
torch.inference_mode()) so the failure is deterministic at runtime.
- Around line 99-114: The code currently swallows resolution errors for
extension_type_ref and silently falls back to ExtensionBase; change the error
path so that only the explicit default string "pyisolate.shared.ExtensionBase"
is allowed to resolve to ExtensionBase silently—if
bootstrap_data["extension_type_ref"] (ext_type_ref) is anything else and
resolution fails (either because parts.rsplit yields unexpected form or
import/getattr raises), log an error and raise an exception instead of assigning
ExtensionBase; keep the existing behavior that when ext_type_ref equals the
default it resolves to ExtensionBase, but for any other explicit ref
propagate/raise the original ImportError/AttributeError (or raise a new
ValueError with context) rather than silently overriding, and update the except
block currently calling logger.warning to perform that raise.
In `@README.md`:
- Around line 315-366: The README example imports ProxiedSingleton from an
internal module; change the import to use the public API instead (replace "from
pyisolate._internal.shared import ProxiedSingleton" with the public export,
e.g., "from pyisolate import ProxiedSingleton" or "from pyisolate.shared import
ProxiedSingleton") so the example in the MockHostAdapter/IsolationAdapter
snippet and related references (ProxiedSingleton, MockHostAdapter,
IsolationAdapter, get_path_config, provide_rpc_services) only use public-facing
symbols.
- Around line 24-25: Update the README entry for the environment variable
PYISOLATE_MODULE_PATH to remove the ComfyUI-specific language: replace "Used to
detect ComfyUI root directory." with a generic description such as "Used by
adapters to detect the extension/module root directory" or "Used to help
adapters detect the extension's root directory." Ensure the variable name
`PYISOLATE_MODULE_PATH` remains and the wording is generic and applicable to any
adapter, removing any mention of ComfyUI or "comfy."
- Line 5: Replace the ComfyUI-specific wording in the "Fail Loud Policy"
sentence: find the paragraph headed "Fail Loud Policy" that currently reads
"pyisolate assumes the rest of ComfyUI core is correct" and change it to a
generic statement (e.g., "pyisolate assumes the broader runtime environment and
any integrating systems are correctly configured") so the README no longer
references ComfyUI or "comfy" outside of motivational context; ensure the
revised sentence preserves the intent about raising descriptive exceptions on
missing prerequisites or runtime failures.
In `@tests/harness/host.py`:
- Around line 98-105: The setup() method mutates os.environ["PATH"] (using
venv_bin) but the original value is never restored in cleanup(), causing
environment leakage; modify setup() to save the original PATH (and any other env
vars you may modify, e.g., TMPDIR) into an instance attribute like
self._saved_env or self._orig_path, and update cleanup() to restore those saved
values (remove keys that were added and reset values that were changed) so that
PATH and other modified environment variables are returned to their prior state
when cleanup() runs.
    Reinstalling could resolve to different versions, breaking the share_torch
    contract. This is a correctness requirement, not a performance optimization.
    """
    from packaging.requirements import Requirement
🧩 Analysis chain
🏁 Script executed:
# Check line 174 in the environment.py file
head -n 180 pyisolate/_internal/environment.py | tail -n 20
Length of output: 143
🏁 Script executed:
# Check if packaging is declared as a dependency in pyproject.toml
cat pyproject.toml | grep -A 20 "dependencies"
Length of output: 870
🏁 Script executed:
# Look at the full context around line 174 to understand usage
sed -n '160,210p' pyisolate/_internal/environment.py
Length of output: 2132
🏁 Script executed:
# Search for all imports from packaging in the codebase
rg "from packaging|import packaging" pyisolate/
Length of output: 148
🏁 Script executed:
# Check if packaging is a known transitive dependency of uv
# First, let's see what uv's dependencies are by checking if there's any lock file
fd -e lock.json -e lock.txt -e requirements.lock
# Also check if there's a uv.lock file
ls -la uv.lock 2>/dev/null || echo "No uv.lock found"
# Try to find dependency information
fd -e pyproject.toml -e setup.py | head -5
Length of output: 79
🏁 Script executed:
# Let's check if there are any existing runtime tests or CI that would catch missing dependencies
rg "pip install|requirements" .github/ 2>/dev/null | head -20
Length of output: 1577
🏁 Script executed:
# Check if packaging is mentioned anywhere in tests or documentation
rg "packaging" -g "*.md" -g "*.rst" .
Length of output: 504
🏁 Script executed:
# Check if packaging is referenced in documentation or comments
rg "packaging" -t py pyisolate/
Length of output: 93
🏁 Script executed:
# Let's see if there's a way to understand what uv depends on
# Check the actual runtime behavior - look for any documentation about dependencies
cat README.md | head -100
Length of output: 3844
🏁 Script executed:
# Check if there are any imports or version checks related to packaging
rg -n "import" pyisolate/_internal/environment.py | head -20
Length of output: 764
🏁 Script executed:
# Check what packaging.requirements.Requirement does and if stdlib has alternatives
# Look at the actual usage in the code
sed -n '195,210p' pyisolate/_internal/environment.py
Length of output: 657
🏁 Script executed:
# See if there are any comments or documentation about this dependency choice
rg -B5 -A5 "packaging.requirements" pyisolate/
Length of output: 1102
🏁 Script executed:
# Check if packaging is commonly installed with uv or if it's expected to be present
# Let's look at setup.py or any other dependency specs
find . -name "*.txt" -o -name "setup.py" | head -10
Length of output: 73
🏁 Script executed:
# Verify what the actual issue is: whether packaging is available at runtime
# by checking if there are integration tests that would catch a missing dependency
rg "Requirement" pyisolate/
Length of output: 222
🏁 Script executed:
# Check setup.py to see if dependencies are managed there
cat setup.py
Length of output: 302
🏁 Script executed:
# Let's verify if packaging is actually a transitive dependency of uv
# by checking if there's documentation or we can infer from the build
python3 << 'EOF'
import sys
try:
import packaging
print(f"packaging is available at: {packaging.__file__}")
print(f"packaging version: {packaging.__version__}")
except ImportError:
print("packaging is NOT available in the current environment")
EOF
Length of output: 175
🏁 Script executed:
# Check if uv declares packaging as an exposed dependency
# by looking at what would be installed if we install pyisolate
rg "packaging" pyproject.toml
Length of output: 285
🏁 Script executed:
# Let's verify if this would actually fail for a user
# by checking if there's a try-except around the import or if it's protected
sed -n '163,180p' pyisolate/_internal/environment.py
Length of output: 285
🏁 Script executed:
# Check the full function context to see if there's any error handling
sed -n '163,220p' pyisolate/_internal/environment.py
Length of output: 285
packaging is an undeclared runtime dependency — violates zero-runtime-dependencies guideline.
from packaging.requirements import Requirement (line 174) introduces a dependency on the packaging library, which is not declared in pyproject.toml. Per the coding guidelines, this must be "a pure Python library" with "zero runtime dependencies." Either declare packaging as an explicit dependency, replace with a stdlib-based requirement parser, or vendor the necessary functionality.
🤖 Prompt for AI Agents
In `@pyisolate/_internal/environment.py` at line 174, The import "from
packaging.requirements import Requirement" in environment.py introduces an
undeclared runtime dependency; either add "packaging" to pyproject.toml
dependencies or remove the import and replace its usage with a stdlib or
vendored implementation. If you choose to declare the dependency, add packaging
to the project dependencies and run lock/install. If you choose to avoid the
dependency, implement a minimal Requirement parser/validator (or vendor
packaging.requirements.Requirement behavior) used by functions in environment.py
(search for references to Requirement and any code that parses requirement
strings) and remove the external import. Ensure all call sites that expect
Requirement objects are updated to use the new local implementation or the
declared packaging package.
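If the project opts to avoid declaring the dependency, a deliberately minimal stdlib-only parser might look like the sketch below. This is NOT a full PEP 508 implementation (no extras, environment markers, or URL requirements) and `parse_requirement` is a hypothetical helper name, not existing project code:

```python
import re

# Matches a bare distribution name optionally followed by a version specifier,
# e.g. "torch>=2.1" or "numpy". Covers only the simple cases environment.py needs.
_REQ_RE = re.compile(
    r"^\s*(?P<name>[A-Za-z0-9][A-Za-z0-9._-]*)\s*(?P<spec>(?:[<>=!~]=?|===)[^;]*)?"
)


def parse_requirement(dep: str) -> tuple:
    """Split 'torch>=2.1' into ('torch', '>=2.1'); raise on unparseable input."""
    m = _REQ_RE.match(dep)
    if m is None:
        raise ValueError(f"unparseable requirement: {dep!r}")
    return m.group("name"), (m.group("spec") or "").strip()
```

Whether this subset suffices depends on how the call sites in environment.py actually use Requirement objects; vendoring packaging's parser is the safer path if markers or extras appear.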
    text=True,
    bufsize=1,
) as proc:
    assert proc.stdout is not None
assert in production code — will be stripped with -O.
assert proc.stdout is not None on line 349 is used to guard a value that is critical for correctness. If Python runs with optimizations (-O flag), asserts are removed, and the subsequent for line in proc.stdout would fail with a TypeError. Use an explicit if/raise instead.
Proposed fix
- assert proc.stdout is not None
+ if proc.stdout is None:
+     raise RuntimeError("subprocess stdout is unexpectedly None")
🤖 Prompt for AI Agents
In `@pyisolate/_internal/environment.py` at line 349, The code uses an assert to
check proc.stdout (`assert proc.stdout is not None`) which is unsafe in
production since asserts are removed with -O; replace this with an explicit
runtime check such as `if proc.stdout is None: raise RuntimeError("expected
subprocess stdout but got None")` (or a more specific exception) before the `for
line in proc.stdout` loop so the code fails deterministically and with a clear
message; update any surrounding code in the same function that assumes
proc.stdout is non-None (reference symbol: proc and proc.stdout).
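The suggested pattern can be sketched end to end; `stream_lines` is an illustrative helper, not the real function in environment.py:

```python
import subprocess


def stream_lines(cmd: list):
    """Yield stdout lines from a subprocess, failing loudly if the pipe is missing."""
    with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True, bufsize=1) as proc:
        # Explicit check survives `python -O`, unlike a bare assert.
        if proc.stdout is None:
            raise RuntimeError("expected subprocess stdout but got None")
        yield from proc.stdout
```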
class _DeduplicationFilter(logging.Filter):
    def __init__(self, timeout_seconds: int = 10):
        super().__init__()
        self.timeout = timeout_seconds
        self.last_seen: dict[str, float] = {}

    def filter(self, record: logging.LogRecord) -> bool:
        import time

        msg_content = record.getMessage()
        msg_hash = hashlib.sha256(msg_content.encode("utf-8")).hexdigest()
        now = time.time()

        if msg_hash in self.last_seen and now - self.last_seen[msg_hash] < self.timeout:
            return False  # Suppress duplicate

        self.last_seen[msg_hash] = now

        if len(self.last_seen) > 1000:
            cutoff = now - self.timeout
            self.last_seen = {k: v for k, v in self.last_seen.items() if v > cutoff}

        return True

# Basic validation for common injection patterns
# Note: We allow < and > as they're used in version specifiers
dangerous_patterns = ["&&", "||", ";", "|", "`", "$", "\n", "\r", "\0"]
for pattern in dangerous_patterns:
    if pattern in dep:
        raise ValueError(
            f"Invalid dependency '{dep}'. Contains potentially dangerous character: '{pattern}'"
        )

def validate_path_within_root(path: Path, root: Path) -> None:
    """Ensure a path is within the expected root directory."""
    try:
        # Resolve both paths to absolute paths
        resolved_path = path.resolve()
        resolved_root = root.resolve()

        # Check if the path is within the root
        resolved_path.relative_to(resolved_root)
    except ValueError as err:
        raise ValueError(f"Path '{path}' is not within the expected root directory '{root}'") from err

@contextmanager
def environment(**env_vars):
    """Context manager for temporarily setting environment variables"""
    original = {}

    # Save original values and set new ones
    for key, value in env_vars.items():
        original[key] = os.environ.get(key)
        os.environ[key] = str(value)

    try:
        yield
    finally:
        # Restore original values
        for key, value in original.items():
            if value is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = value
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
SHA256 for log dedup is unnecessarily expensive.
Using hashlib.sha256 on every log record for deduplication is heavy. A simpler hash(msg_content) (Python's built-in hash) would be sufficient for dedup within a short time window and significantly cheaper on a hot logging path.
Proposed fix
- msg_content = record.getMessage()
- msg_hash = hashlib.sha256(msg_content.encode("utf-8")).hexdigest()
+ msg_content = record.getMessage()
+ msg_hash = hash(msg_content)
🤖 Prompt for AI Agents
In `@pyisolate/_internal/host.py` around lines 43 - 65, The deduplication
currently computes hashlib.sha256 for every log record in
_DeduplicationFilter.filter which is expensive; replace the heavy digest with
Python's built-in hash of the message (use msg_hash = hash(msg_content)) and
adjust last_seen to map ints to floats (last_seen: dict[int, float]); remove the
hashlib usage/import and keep the same timeout logic, cleanup and suppression
behavior in filter and use the integer hash as the deduplication key.
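Assembled, the cheaper filter might look like this; it keeps the same 10-second window and 1000-entry cap as the original, and accepts that built-in hash() collisions can at worst cause a rare extra suppression:

```python
import logging
import time


class DeduplicationFilter(logging.Filter):
    """Suppress repeats of an identical message within timeout_seconds."""

    def __init__(self, timeout_seconds: float = 10.0):
        super().__init__()
        self.timeout = timeout_seconds
        self.last_seen: dict = {}  # int message hash -> last-seen timestamp

    def filter(self, record: logging.LogRecord) -> bool:
        msg_hash = hash(record.getMessage())  # built-in hash, no crypto digest
        now = time.time()
        prev = self.last_seen.get(msg_hash)
        if prev is not None and now - prev < self.timeout:
            return False  # duplicate within window: suppress
        self.last_seen[msg_hash] = now
        if len(self.last_seen) > 1000:  # bound memory on hot logging paths
            cutoff = now - self.timeout
            self.last_seen = {k: v for k, v in self.last_seen.items() if v > cutoff}
        return True
```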
if "share_cuda_ipc" not in config:
    # Default to True ONLY if supported and sharing torch
    ipc_supported, _ = probe_cuda_ipc_support()
    config["share_cuda_ipc"] = force_ipc or (config.get("share_torch", False) and ipc_supported)
elif force_ipc:
    config["share_cuda_ipc"] = True
🧩 Analysis chain
🏁 Script executed:
rg -nA 30 "class ExtensionConfig" pyisolate/config.py
Length of output: 1325
🏁 Script executed:
rg -nA 50 "class ExtensionConfig" pyisolate/config.py | head -60
Length of output: 1419
🏁 Script executed:
sed -n '39,/^class /p' pyisolate/config.py | head -100
Length of output: 1320
🏁 Script executed:
cat -n pyisolate/config.py | grep -A 50 "^ 39"
Length of output: 1551
🏁 Script executed:
cat -n pyisolate/config.py | sed -n '70,100p'
Length of output: 164
🏁 Script executed:
cat -n pyisolate/config.py | sed -n '71,80p'
Length of output: 133
🏁 Script executed:
cat -n pyisolate/config.py | sed -n '70,85p'
Length of output: 164
🏁 Script executed:
wc -l pyisolate/config.py && cat -n pyisolate/config.py | sed -n '71,90p'
Length of output: 156
🏁 Script executed:
cat -n pyisolate/_internal/host.py | sed -n '75,95p'
Length of output: 1022
🏁 Script executed:
rg -n "ExtensionProxy\(" pyisolate/ -B 5 -A 3 | head -80
Length of output: 45
🏁 Script executed:
rg -n "share_cuda_ipc" pyisolate/ -B 3 -A 3
Length of output: 2206
🏁 Script executed:
cat -n pyisolate/_internal/host.py | sed -n '70,80p'
Length of output: 408
🏁 Script executed:
rg -n "config\s*=" pyisolate/_internal/host.py | head -20
Length of output: 353
🏁 Script executed:
rg -n "Extension\(" pyisolate/ -B 3 -A 3 | grep -v test
Length of output: 1648
🏁 Script executed:
cat -n pyisolate/host.py | sed -n '35,55p'
Length of output: 1040
🏁 Script executed:
rg -n "ExtensionConfig\s*=" pyisolate/ -A 2 | head -40
Length of output: 285
Align share_cuda_ipc field with runtime behavior by using NotRequired in ExtensionConfig.
ExtensionConfig defines share_cuda_ipc: bool as a required field (no total=False, no NotRequired). However, the code at lines 81-86 checks if "share_cuda_ipc" not in config, treating it as optional. This creates a mismatch between the type contract and runtime behavior. Either mark share_cuda_ipc as NotRequired[bool] in the ExtensionConfig TypedDict to match the runtime check, or remove the check entirely if all callers are expected to provide this field.
🤖 Prompt for AI Agents
In `@pyisolate/_internal/host.py` around lines 81 - 86, The TypedDict
ExtensionConfig currently declares share_cuda_ipc as required but the runtime in
host.py treats it as optional; update the type to NotRequired[bool] on
ExtensionConfig so the type contract matches the code (use
typing_extensions.NotRequired or typing.NotRequired depending on supported
Python versions and add the import), or alternatively remove the runtime
presence check around "share_cuda_ipc" in host.py if you intend it to be
required — reference the ExtensionConfig TypedDict and the share_cuda_ipc key
when making the change.
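The NotRequired fix can be sketched like this. The field set is an illustrative subset of the real ExtensionConfig, and `resolve_cuda_ipc` is a hypothetical helper mirroring the runtime defaulting logic (the IPC-support probe is omitted):

```python
from typing import TypedDict

try:
    from typing import NotRequired  # Python 3.11+
except ImportError:  # pragma: no cover
    from typing_extensions import NotRequired  # older interpreters


class ExtensionConfig(TypedDict):
    share_torch: bool
    share_cuda_ipc: NotRequired[bool]  # may be filled in by the host at runtime


def resolve_cuda_ipc(config: ExtensionConfig, force_ipc: bool = False) -> ExtensionConfig:
    """Default share_cuda_ipc the same way host.py does, now type-consistent."""
    if "share_cuda_ipc" not in config:
        config["share_cuda_ipc"] = force_ipc or config.get("share_torch", False)
    elif force_ipc:
        config["share_cuda_ipc"] = True
    return config
```

With NotRequired, type checkers accept callers that omit the key, matching the `if "share_cuda_ipc" not in config` runtime check.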
    venv_path=str(self.venv_path),
    uds_address=ipc_address,
    sandbox_config=cast(dict[str, Any], sandbox_config),
    allow_gpu=True,  # Default to allowing GPU for ComfyUI nodes
Remove ComfyUI-specific reference from code comment.
allow_gpu=True, # Default to allowing GPU for ComfyUI nodes — per coding guidelines, pyisolate must not reference ComfyUI except when documenting motivation. This is a runtime code comment, not a design motivation note.
Proposed fix
- allow_gpu=True, # Default to allowing GPU for ComfyUI nodes
+ allow_gpu=True,  # Default to allowing GPU access for extensions
As per coding guidelines, "ComfyUI and 'comfy' should ONLY be referenced in the context of documenting the motivation behind certain design decisions."
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
    allow_gpu=True,  # Default to allowing GPU access for extensions
🤖 Prompt for AI Agents
In `@pyisolate/_internal/host.py` at line 401, The inline comment on the parameter
allow_gpu=True should not mention ComfyUI; update the comment to a neutral
description (e.g., "Default to allowing GPU" or remove the comment entirely)
where the parameter allow_gpu is defined or passed so the code follows the
guideline of avoiding ComfyUI references.
rpc.run()

try:
    module = importlib.util.module_from_spec(module_spec)
    sys.modules[sys_module_name] = module
    module_spec.loader.exec_module(module)
    await extension.on_module_loaded(module)
    await rpc.run_until_stopped()
except asyncio.CancelledError:
    pass
except Exception as exc:
    logger.error(
        "Extension module loading/execution failed for %s: %s", module_path, exc, exc_info=True
    )
    raise
No RPC shutdown on error paths.
rpc.run() is called on line 231, but if module_from_spec / exec_module / on_module_loaded raises on lines 234–237, the exception propagates without stopping the RPC. This could leave background tasks or the transport in a dirty state. Consider a finally block that calls an RPC shutdown/cleanup method.
🤖 Prompt for AI Agents
In `@pyisolate/_internal/uds_client.py` around lines 231–245, the code starts the RPC loop with `rpc.run()`, but if module loading (`importlib.util.module_from_spec` / `module_spec.loader.exec_module`) or `extension.on_module_loaded` raises, the RPC is never stopped. Add a `finally` block after the try/except that always calls the RPC shutdown/cleanup API (e.g., `rpc.stop()`, `rpc.shutdown()`, or cancellation of `rpc.run_until_stopped()`, depending on the RPC implementation) so that `rpc.run()` is terminated and resources/transport are cleaned up. Keep the existing `asyncio.CancelledError` handling and re-raise the original exception after the RPC has been cleaned up.
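A minimal sketch of the suggested fix, assuming the RPC object exposes a `stop()` coroutine (the exact shutdown API depends on the RPC implementation, so `rpc.stop()` here is an assumption):

```python
import asyncio
import importlib.util
import logging
import sys

logger = logging.getLogger(__name__)


async def load_and_run(rpc, extension, module_spec, sys_module_name, module_path):
    """Run the module-loading sequence, always shutting the RPC down afterwards."""
    rpc.run()  # start the RPC background loop
    try:
        module = importlib.util.module_from_spec(module_spec)
        sys.modules[sys_module_name] = module
        module_spec.loader.exec_module(module)
        await extension.on_module_loaded(module)
        await rpc.run_until_stopped()
    except asyncio.CancelledError:
        pass
    except Exception as exc:
        logger.error(
            "Extension module loading/execution failed for %s: %s",
            module_path, exc, exc_info=True,
        )
        raise
    finally:
        # Hypothetical cleanup call: whatever shutdown API the RPC layer provides.
        await rpc.stop()
```

With this shape, the RPC is torn down on both the success path and every error path, while the original exception still propagates to the caller.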
**Run Python extensions in isolated virtual environments with seamless inter-process communication.**

> ⚠️ **Warning**: This library is currently in active development and the API may change. While the core functionality is working, it should not be considered stable for production use yet.
> 🚨 **Fail Loud Policy**: pyisolate assumes the rest of ComfyUI core is correct. Missing prerequisites or runtime failures immediately raise descriptive exceptions instead of being silently ignored.
Remove ComfyUI-specific language from the library README.
The fail-loud policy statement says "pyisolate assumes the rest of ComfyUI core is correct." This couples the library's documentation to a specific downstream consumer. The policy should be stated generically.
Proposed fix

```diff
-> 🚨 **Fail Loud Policy**: pyisolate assumes the rest of ComfyUI core is correct. Missing prerequisites or runtime failures immediately raise descriptive exceptions instead of being silently ignored.
+> 🚨 **Fail Loud Policy**: Missing prerequisites or runtime failures immediately raise descriptive exceptions instead of being silently ignored.
```

As per coding guidelines, "While this library is primarily created for use with ComfyUI, it MUST NOT depend on ComfyUI or any of its packages" and "ComfyUI and 'comfy' should ONLY be referenced in the context of documenting the motivation behind certain design decisions."
📝 Committable suggestion

```diff
-> 🚨 **Fail Loud Policy**: pyisolate assumes the rest of ComfyUI core is correct. Missing prerequisites or runtime failures immediately raise descriptive exceptions instead of being silently ignored.
+> 🚨 **Fail Loud Policy**: Missing prerequisites or runtime failures immediately raise descriptive exceptions instead of being silently ignored.
```
🤖 Prompt for AI Agents
In `@README.md` at line 5, replace the ComfyUI-specific wording in the "Fail Loud Policy" sentence: find the paragraph headed "Fail Loud Policy" that currently reads "pyisolate assumes the rest of ComfyUI core is correct" and change it to a generic statement (e.g., "pyisolate assumes the broader runtime environment and any integrating systems are correctly configured") so the README no longer references ComfyUI or "comfy" outside of motivational context. Ensure the revised sentence preserves the intent about raising descriptive exceptions on missing prerequisites or runtime failures.
- **`PYISOLATE_HOST_SNAPSHOT`**: Path to JSON file containing the host's `sys.path` and environment variables. Used during child process initialization.
- **`PYISOLATE_MODULE_PATH`**: Path to the extension module being loaded. Used to detect ComfyUI root directory.
Remove ComfyUI-specific description from environment variable documentation.
Line 25 describes PYISOLATE_MODULE_PATH as "Used to detect ComfyUI root directory." This is a ComfyUI-specific concern that should not appear in the generic library documentation. Describe the variable's purpose in generic terms (e.g., "Path to the extension module being loaded. Used by adapters for root directory detection.").
As per coding guidelines, "ComfyUI and 'comfy' should ONLY be referenced in the context of documenting the motivation behind certain design decisions."
🤖 Prompt for AI Agents
In `@README.md` around lines 24 - 25, Update the README entry for the environment
variable PYISOLATE_MODULE_PATH to remove the ComfyUI-specific language: replace
"Used to detect ComfyUI root directory." with a generic description such as
"Used by adapters to detect the extension/module root directory" or "Used to
help adapters detect the extension's root directory." Ensure the variable name
`PYISOLATE_MODULE_PATH` remains and the wording is generic and applicable to any
adapter, removing any mention of ComfyUI or "comfy."
## Implementing a Host Adapter (IsolationAdapter)

When integrating pyisolate with your application (like ComfyUI), you implement the `IsolationAdapter` protocol. This tells pyisolate how to configure isolated processes for your environment.

### Reference Implementation

The canonical example is in `tests/fixtures/test_adapter.py`:

```python
from pyisolate.interfaces import IsolationAdapter
from pyisolate._internal.shared import ProxiedSingleton


class MockHostAdapter(IsolationAdapter):
    """Reference adapter showing all protocol methods."""

    @property
    def identifier(self) -> str:
        """Return unique adapter identifier (e.g., 'comfyui')."""
        return "myapp"

    def get_path_config(self, module_path: str) -> dict:
        """Configure sys.path for isolated extensions.

        Returns:
            - preferred_root: Your app's root directory
            - additional_paths: Extra paths for imports
        """
        return {
            "preferred_root": "/path/to/myapp",
            "additional_paths": ["/path/to/myapp/extensions"],
        }

    def setup_child_environment(self, snapshot: dict) -> None:
        """Configure child process after sys.path reconstruction."""
        pass  # Set up logging, environment, etc.

    def register_serializers(self, registry) -> None:
        """Register custom type serializers for RPC transport."""
        registry.register(
            "MyCustomType",
            serializer=lambda obj: {"data": obj.data},
            deserializer=lambda d: MyCustomType(d["data"]),
        )

    def provide_rpc_services(self) -> list:
        """Return ProxiedSingleton classes to expose via RPC."""
        return [MyRegistry, MyProgressReporter]

    def handle_api_registration(self, api, rpc) -> None:
        """Post-registration hook for API-specific setup."""
        pass
```
Documentation references internal module pyisolate._internal.shared.
Line 325 shows `from pyisolate._internal.shared import ProxiedSingleton` in a public-facing README example. Internal modules (`_internal/`) should not be exposed in documentation. Use the public import path instead (e.g., `from pyisolate import ProxiedSingleton` or `from pyisolate.shared import ProxiedSingleton`).
As per coding guidelines, "Documentation should NEVER include references to internal implementation details unless absolutely necessary for understanding the public API."
🤖 Prompt for AI Agents
In `@README.md` around lines 315 - 366, The README example imports
ProxiedSingleton from an internal module; change the import to use the public
API instead (replace "from pyisolate._internal.shared import ProxiedSingleton"
with the public export, e.g., "from pyisolate import ProxiedSingleton" or "from
pyisolate.shared import ProxiedSingleton") so the example in the
MockHostAdapter/IsolationAdapter snippet and related references
(ProxiedSingleton, MockHostAdapter, IsolationAdapter, get_path_config,
provide_rpc_services) only use public-facing symbols.
```python
def setup(self):
    """Initialize the host environment."""
    # Ensure uv is in PATH
    # Since we run tests with the venv python, uv should be in the same bin dir
    venv_bin = os.path.dirname(sys.executable)
    path = os.environ.get("PATH", "")
    if venv_bin not in path.split(os.pathsep):
        os.environ["PATH"] = f"{venv_bin}{os.pathsep}{path}"
```
setup() modifies PATH without restoring it during cleanup.
os.environ["PATH"] is modified at line 105 but never restored in cleanup(). Combined with the TMPDIR issue, this harness leaks multiple environment variables across tests. Consider saving and restoring all modified env vars in cleanup().
Proposed fix sketch

```diff
+    def __init__(self, use_temp_dir: bool = True):
+        self._saved_env: dict[str, str | None] = {}
     ...
     def setup(self):
         ...
         venv_bin = os.path.dirname(sys.executable)
         path = os.environ.get("PATH", "")
         if venv_bin not in path.split(os.pathsep):
+            self._saved_env.setdefault("PATH", path)
             os.environ["PATH"] = f"{venv_bin}{os.pathsep}{path}"
     ...
     async def cleanup(self):
         ...
+        # Restore environment
+        for key, val in self._saved_env.items():
+            if val is None:
+                os.environ.pop(key, None)
+            else:
+                os.environ[key] = val
```

🤖 Prompt for AI Agents
In `@tests/harness/host.py` around lines 98 - 105, The setup() method mutates
os.environ["PATH"] (using venv_bin) but the original value is never restored in
cleanup(), causing environment leakage; modify setup() to save the original PATH
(and any other env vars you may modify, e.g., TMPDIR) into an instance attribute
like self._saved_env or self._orig_path, and update cleanup() to restore those
saved values (remove keys that were added and reset values that were changed) so
that PATH and other modified environment variables are returned to their prior
state when cleanup() runs.
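The save-and-restore pattern described above can be factored into a small standalone helper, sketched here (the `EnvGuard` name is illustrative, not part of the harness's actual API):

```python
import os


class EnvGuard:
    """Record original values of environment variables before mutating them,
    then restore them (deleting any that did not previously exist)."""

    def __init__(self):
        self._saved: dict[str, str | None] = {}

    def set(self, key: str, value: str) -> None:
        # Remember the pre-existing value only the first time we touch a key,
        # so repeated set() calls still restore the true original.
        if key not in self._saved:
            self._saved[key] = os.environ.get(key)
        os.environ[key] = value

    def restore(self) -> None:
        for key, original in self._saved.items():
            if original is None:
                os.environ.pop(key, None)  # variable did not exist before
            else:
                os.environ[key] = original
        self._saved.clear()
```

In the harness, `setup()` would route its mutation through something like `guard.set("PATH", ...)` and `cleanup()` would call `guard.restore()`, covering PATH, TMPDIR, and any other variables in one place.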
Final call before merge on pyisolate PR3 (#3). These latest additions include:

Checks are green on my side and this appears merge-ready. Also: requesting reviewer/merge access on Comfy-Org/pyisolate (happy to continue via fork PRs until then).
## PyIsolate Update: Production-Ready Process Isolation for ComfyUI

### Major Features

**1. Full ComfyUI Integration with Path Unification**
- Commits: `aee01de`, `458d11a`, `846e4cc`, `8699bfa`, `029ae2c`
- ComfyUI-side: `951a6ba7`, `a62150a3`, `230544ac`, `852251a0`, `04485daa`
- `utils.json_util` accessible in all children

**2. Lazy Loading & Caching System**
- Commits: `b4cdd9a`
- ComfyUI-side: `1a20f4ac`, `fe3b402a`

**3. Windows Compatibility**
- Commits: `dbca9bb`
- ComfyUI-side: `8587e20b`

### Expansions

**1. Torch Ecosystem Support**
- Commits: `99e742e`, `8d83a4b`, `4baa626`, `4027e0c`
- `share_torch=True`
- `--index-strategy unsafe-best-match` (standard `uv` pattern for non-release version resolution where strict matching fails)

**2. Stateless RPC Implementation**
- Commits: `cd07469`, `846e4cc`, `d785fa3`
- ComfyUI-side: `6603f42c`, `d9a5c36c`, `54784f33`

**3. Tensor Serialization Safety**
- Commits: `28e7018`, `a54aa19`

### Polish

**1. Logging Deduplication**
- Commits: `fcb2489`, `e0540f8`
- ComfyUI-side: `2d1bb755`, `fe3b402a`, `c80c1dbb`

**2. Documentation**
- Commits: `029ae2c`, `f045210`

**3. Unit Tests**
- Commits: `465c72b`, `58ba87d`

### Testing

- Baseline test: 3 isolated nodes loading in 0.0s (cached), workflow execution in 0.15s
- Command: `/home/johnj/mysolate/debug/run_debug.sh`
- Result: ✅ PASS

### Statistics

- Commit range: `16ddb98..029ae2c` (27 commits)
- Lines changed: 59 files, +9194/-194 lines
- Authors: John Pollock (pyisolate), ComfyUI-side integration