Merged
81 changes: 0 additions & 81 deletions .coderabbit.yaml

This file was deleted.

10 changes: 1 addition & 9 deletions .github/workflows/ci.yml
@@ -57,10 +57,6 @@ jobs:
fail-fast: false
matrix:
include:
- container: debian:11
python-install: |
apt-get update && apt-get install -y python3 python3-pip python3-venv git curl bubblewrap
extras: "dev,test"
- container: debian:12
python-install: |
apt-get update && apt-get install -y python3 python3-pip python3-venv git curl bubblewrap
@@ -73,10 +69,6 @@ jobs:
python-install: |
dnf install -y python3 python3-pip git curl bubblewrap
extras: "dev,test"
- container: rockylinux:9
python-install: |
dnf install -y python3 python3-pip git bubblewrap
extras: "dev,test"

container: ${{ matrix.container }}

@@ -92,7 +84,7 @@ jobs:

- name: Install package
run: |
$HOME/.local/bin/uv venv
$HOME/.local/bin/uv venv --python python3
. .venv/bin/activate
$HOME/.local/bin/uv pip install -e ".[${{ matrix.extras }}]"

5 changes: 4 additions & 1 deletion .github/workflows/docs.yml
@@ -48,7 +48,10 @@ jobs:
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
if: >-
github.repository == 'Comfy-Org/pyisolate' &&
github.event_name == 'push' &&
github.ref == 'refs/heads/main'
steps:
- name: Deploy to GitHub Pages
id: deployment
4 changes: 2 additions & 2 deletions .github/workflows/pytorch.yml
@@ -41,7 +41,7 @@ jobs:
- name: Run tests
run: |
source .venv/bin/activate
pytest tests/test_integration.py -v -k "torch"
pytest tests/integration_v2/test_tensors.py tests/test_torch_optional_contract.py tests/test_torch_utils_additional.py -v

- name: Test example with PyTorch
run: |
@@ -100,7 +100,7 @@ jobs:
- name: Run tests
run: |
source .venv/bin/activate
pytest tests/test_integration.py -v -k "torch"
pytest tests/integration_v2/test_tensors.py tests/test_torch_optional_contract.py tests/test_torch_utils_additional.py -v

- name: Test example with PyTorch
run: |
8 changes: 2 additions & 6 deletions .github/workflows/windows.yml
@@ -50,7 +50,7 @@ jobs:
strategy:
fail-fast: false
matrix:
pytorch-version: ['2.1.0', '2.3.0']
pytorch-version: ['2.1.0']

⚠️ Potential issue | 🟡 Minor

PyTorch 2.3.0 is no longer tested on Windows.

Reducing the matrix to ['2.1.0'] means Windows-specific regressions in PyTorch 2.3.x (the current release line at the time of this PR) go undetected. If CI time is a concern, consider keeping 2.3.0 and dropping 2.1.0 instead, since 2.1.x is now EOL.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/windows.yml at line 53, The matrix entry for the CI key
pytorch-version was changed to only ['2.1.0'], which removes Windows coverage
for the current PyTorch release line; restore Windows testing for PyTorch 2.3.x
by updating the pytorch-version matrix to include '2.3.0' (either replace
'2.1.0' with '2.3.0' or make it ['2.3.0', '2.1.0'] if you want both), ensuring
the pytorch-version matrix value is adjusted in the Windows workflow definition.


steps:
- uses: actions/checkout@v4
@@ -78,8 +78,4 @@ jobs:
- name: Run PyTorch tests
run: |
.venv\Scripts\activate
python tests/test_integration.py -v
python tests/test_edge_cases.py -v
python tests/test_normalization_integration.py -v
python tests/test_security.py -v
python tests/test_torch_tensor_integration.py -v
pytest tests/integration_v2/test_tensors.py tests/test_torch_optional_contract.py tests/test_torch_utils_additional.py -v
1 change: 1 addition & 0 deletions MANIFEST.in
@@ -3,5 +3,6 @@ include README.md
include pyproject.toml
recursive-include pyisolate *.py
recursive-include tests *.py
prune tests/.test_temps
recursive-exclude * __pycache__
recursive-exclude * *.py[co]
4 changes: 2 additions & 2 deletions docs/conf.py
@@ -15,8 +15,8 @@
copyright = "2026, Jacob Segal"
author = "Jacob Segal"

version = "0.9.0"
release = "0.9.0"
version = "0.9.1"
release = "0.9.1"

# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
24 changes: 17 additions & 7 deletions example/host.py
@@ -1,5 +1,6 @@
import argparse
import asyncio
import inspect
import logging
import os
import sys
@@ -9,6 +10,7 @@
from shared import DatabaseSingleton, ExampleExtensionBase

import pyisolate
from pyisolate._internal.sandbox_detect import detect_sandbox_capability

⚠️ Potential issue | 🟠 Major

Example imports from internal _internal module — use public API instead.

detect_sandbox_capability is imported from pyisolate._internal.sandbox_detect, but examples serve as user-facing documentation. As per coding guidelines: "Documentation should NEVER include references to internal implementation details." Either expose detect_sandbox_capability in pyisolate/__init__ and __all__, or handle sandbox mode differently in the example.

Option: export it publicly

In pyisolate/__init__.py:

 from ._internal.tensor_serializer import flush_tensor_keeper, purge_orphan_sender_shm_files
+from ._internal.sandbox_detect import detect_sandbox_capability
 from .config import ExtensionConfig, ExtensionManagerConfig, SandboxMode
 __all__ = [
     ...
+    "detect_sandbox_capability",
     "register_adapter",

Then in example/host.py:

-from pyisolate._internal.sandbox_detect import detect_sandbox_capability
+from pyisolate import detect_sandbox_capability
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@example/host.py` at line 13, The example imports the internal function
detect_sandbox_capability from pyisolate._internal.sandbox_detect; update the
example to use the public API instead by either (A) exporting
detect_sandbox_capability from the package public surface (add it to
pyisolate/__init__.py and include it in __all__) so example/host.py can import
it as from pyisolate import detect_sandbox_capability, or (B) modify
example/host.py to detect or handle sandbox mode using an existing public
function/setting rather than importing from _internal; reference the symbol
detect_sandbox_capability and ensure the example only imports from pyisolate
(not pyisolate._internal).



# ANSI color codes for terminal output (using 256-color mode for better compatibility)
@@ -47,6 +49,16 @@ async def async_main():
config = pyisolate.ExtensionManagerConfig(venv_root_path=os.path.join(base_path, "extension-venvs"))
manager = pyisolate.ExtensionManager(ExampleExtensionBase, config)

sandbox_mode = pyisolate.SandboxMode.REQUIRED
if sys.platform == "linux":
cap = detect_sandbox_capability()
if not cap.available:
sandbox_mode = pyisolate.SandboxMode.DISABLED
logger.warning(
"Sandbox unavailable in example environment (%s); using sandbox_mode=disabled",
cap.restriction_model,
)

extensions: list[ExampleExtensionBase] = []
extension_dir = os.path.join(base_path, "extensions")
for extension in os.listdir(extension_dir):
@@ -85,6 +97,7 @@ class CustomConfig(TypedDict):
dependencies=manifest["dependencies"] + pyisolate_install,
apis=[DatabaseSingleton],
share_torch=manifest["share_torch"],
sandbox_mode=sandbox_mode,
)

extension = manager.load_extension(config)
@@ -118,12 +131,7 @@ class CustomConfig(TypedDict):

# Test Extension 2
ext2_result = await db.get_value("extension2_result")
if (
ext2_result
and ext2_result.get("extension") == "extension2"
and ext2_result.get("array_sum") == 17.5
and ext2_result.get("numpy_version").startswith("2.")
):
if ext2_result and ext2_result.get("extension") == "extension2" and ext2_result.get("array_sum") == 17.5:

Copilot AI Feb 25, 2026


Removing the numpy version check (and ext2_result.get("numpy_version").startswith("2.")) from the Extension2 assertion lets the test pass even when the wrong numpy version is installed, dropping the validation that Extension2 actually uses numpy 2.x as expected. If the check was removed because it is unreliable or unnecessary, consider adding a comment explaining why; otherwise keep it to preserve test coverage of the dependency isolation.

Suggested change
if ext2_result and ext2_result.get("extension") == "extension2" and ext2_result.get("array_sum") == 17.5:
if (
ext2_result
and ext2_result.get("extension") == "extension2"
and ext2_result.get("array_sum") == 17.5
and ext2_result.get("numpy_version").startswith("2.")
):

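One caveat if the check is restored: the original form calls `.startswith()` directly on `ext2_result.get("numpy_version")`, which raises AttributeError when the key is absent or None. A None-safe variant of the assertion logic, as a standalone sketch rather than the project's code, looks like this:

```python
def extension2_passed(ext2_result):
    # Treat a missing/None numpy_version as a failure instead of letting
    # .startswith() raise AttributeError on None.
    if not ext2_result:
        return False
    numpy_version = ext2_result.get("numpy_version") or ""
    return (
        ext2_result.get("extension") == "extension2"
        and ext2_result.get("array_sum") == 17.5
        and numpy_version.startswith("2.")
    )

print(extension2_passed({"extension": "extension2", "array_sum": 17.5,
                         "numpy_version": "2.1.0"}))
# → True
print(extension2_passed({"extension": "extension2", "array_sum": 17.5}))
# → False
```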
test_results.append(("Extension2", "PASSED", "Array processing with numpy 2.x"))
logger.debug(f"Extension2 result: {ext2_result}")
else:
@@ -169,7 +177,9 @@ class CustomConfig(TypedDict):
# Shutdown extensions
logger.debug("Shutting down extensions...")
for extension in extensions:
await extension.stop()
stop_result = extension.stop()
if inspect.isawaitable(stop_result):
await stop_result
Comment on lines +180 to +182

🧹 Nitpick | 🔵 Trivial

inspect.isawaitable check suggests API uncertainty — example should demonstrate the canonical call pattern.

If extension.stop() is synchronous, just call it. If it's async, await it. The isawaitable guard obscures the intended API usage for readers of the example. Examples are "tested in CI" and should "demonstrate real use cases," per guidelines.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@example/host.py` around lines 180 - 182, The example currently uses
inspect.isawaitable around extension.stop(), which hides the intended API;
change the example to use the canonical async pattern by calling await
extension.stop() directly (and update any example extension implementations to
make stop an async def) so the example consistently demonstrates the async API
for extension.stop(); alternatively, if the intended API is synchronous, remove
the await and call extension.stop() directly—pick one canonical contract and
make extension.stop() implementations and the example match it.
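For reference, the guard pattern the example currently uses does handle both contracts in one loop; a minimal self-contained sketch (class names here are illustrative, not pyisolate's API):

```python
import asyncio
import inspect

class SyncExtension:
    def stop(self):
        return "sync-stopped"

class AsyncExtension:
    async def stop(self):
        return "async-stopped"

async def shutdown_all(extensions):
    results = []
    for ext in extensions:
        result = ext.stop()
        # Await only when stop() returned an awaitable, so one shutdown
        # loop tolerates both sync and async implementations.
        if inspect.isawaitable(result):
            result = await result
        results.append(result)
    return results

print(asyncio.run(shutdown_all([SyncExtension(), AsyncExtension()])))
# → ['sync-stopped', 'async-stopped']
```

The review's point stands: this is a compatibility shim, and an example meant as documentation reads better with a single canonical contract.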


# Exit with appropriate code
if failed_tests > 0:
5 changes: 4 additions & 1 deletion pyisolate/__init__.py
@@ -36,13 +36,14 @@

from ._internal.rpc_protocol import ProxiedSingleton, local_execution
from ._internal.singleton_context import singleton_scope
from ._internal.tensor_serializer import flush_tensor_keeper, purge_orphan_sender_shm_files
from .config import ExtensionConfig, ExtensionManagerConfig, SandboxMode
from .host import ExtensionBase, ExtensionManager

if TYPE_CHECKING:
from .interfaces import IsolationAdapter

__version__ = "0.9.0"
__version__ = "0.9.1"

__all__ = [
"ExtensionBase",
@@ -53,6 +54,8 @@
"ProxiedSingleton",
"local_execution",
"singleton_scope",
"flush_tensor_keeper",
"purge_orphan_sender_shm_files",
"register_adapter",
"get_adapter",
]
47 changes: 43 additions & 4 deletions pyisolate/_internal/environment.py
@@ -173,9 +173,20 @@ def exclude_satisfied_requirements(
"""
from packaging.requirements import Requirement

result = subprocess.run( # noqa: S603 # Trusted: system pip executable
[str(python_exe), "-m", "pip", "list", "--format", "json"], capture_output=True, text=True, check=True
)
try:
result = subprocess.run( # noqa: S603 # Trusted: system pip executable
[str(python_exe), "-m", "pip", "list", "--format", "json"],
capture_output=True,
text=True,
check=True,
)
except subprocess.CalledProcessError as exc:
# Newer uv versions can create venvs without pip unless seeded.
# If pip is unavailable, skip filtering and install requested deps.
if "No module named pip" in (exc.stderr or ""):
logger.debug("pip unavailable in %s; skipping satisfied-requirement filter", python_exe)
return requirements
raise
installed = {pkg["name"].lower(): pkg["version"] for pkg in json.loads(result.stdout)}
torch_ecosystem = get_torch_ecosystem_packages()

@@ -227,6 +238,7 @@ def create_venv(venv_path: Path, config: ExtensionConfig) -> None:
uv_path,
"venv",
str(venv_path),
"--seed",
"--python",
sys.executable,
]
@@ -337,7 +349,34 @@ def install_dependencies(venv_path: Path, config: ExtensionConfig, name: str) ->
except Exception as exc:
logger.debug("Dependency cache read failed: %s", exc)

cmd = cmd_prefix + safe_deps + common_args
install_targets: list[str] = []
i = 0
while i < len(safe_deps):
dep = safe_deps[i]
dep_stripped = dep.strip()

# Support split editable args from existing callers:
# ["-e", "/path/to/pkg"].
if dep_stripped == "-e":
if i + 1 >= len(safe_deps):
raise ValueError("Editable dependency '-e' must include a path or URL")
editable_target = safe_deps[i + 1].strip()
if not editable_target:
raise ValueError("Editable dependency '-e' must include a path or URL")
install_targets.extend(["-e", editable_target])
i += 2
continue

if dep_stripped.startswith("-e "):
editable_target = dep_stripped[3:].strip()
if not editable_target:
raise ValueError("Editable dependency must include a path or URL after '-e'")
install_targets.extend(["-e", editable_target])
else:
Comment on lines +360 to +375

⚠️ Potential issue | 🟡 Minor

Reject option-like editable targets in split/combined -e parsing.

A target that still starts with - should be rejected early; otherwise malformed inputs fail later with less clear install errors.

🛠️ Suggested validation hardening
         if dep_stripped == "-e":
             if i + 1 >= len(safe_deps):
                 raise ValueError("Editable dependency '-e' must include a path or URL")
             editable_target = safe_deps[i + 1].strip()
             if not editable_target:
                 raise ValueError("Editable dependency '-e' must include a path or URL")
+            if editable_target.startswith("-"):
+                raise ValueError(
+                    "Editable dependency target after '-e' cannot start with '-'"
+                )
             install_targets.extend(["-e", editable_target])
             i += 2
             continue

         if dep_stripped.startswith("-e "):
             editable_target = dep_stripped[3:].strip()
             if not editable_target:
                 raise ValueError("Editable dependency must include a path or URL after '-e'")
+            if editable_target.startswith("-"):
+                raise ValueError(
+                    "Editable dependency target after '-e' cannot start with '-'"
+                )
             install_targets.extend(["-e", editable_target])
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyisolate/_internal/environment.py` around lines 360 - 375, The parser
currently accepts editable targets that begin with "-" which later produce
confusing installer errors; update the '-e' handling in the block that processes
safe_deps so that after computing editable_target (both in the dep_stripped ==
"-e" branch where editable_target = safe_deps[i + 1].strip() and in the
dep_stripped.startswith("-e ") branch where editable_target =
dep_stripped[3:].strip()) you immediately validate that editable_target is
non-empty and does not start with "-" and raise a ValueError (with a clear
message like "Editable dependency '-e' must include a valid path or URL, not an
option") if it does; keep existing logic to extend install_targets and advance i
unchanged otherwise.
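A standalone sketch of the parsing loop with the suggested hardening applied (this mirrors, but is not, the library code):

```python
def expand_editable_deps(deps: list[str]) -> list[str]:
    """Normalize split ('-e', 'target') and combined ('-e target') forms,
    rejecting empty or option-like editable targets early."""
    targets: list[str] = []
    i = 0
    while i < len(deps):
        dep = deps[i].strip()
        if dep == "-e":
            if i + 1 >= len(deps):
                raise ValueError("Editable dependency '-e' must include a path or URL")
            target = deps[i + 1].strip()
            if not target or target.startswith("-"):
                raise ValueError("Editable target after '-e' must be a path or URL, not an option")
            targets.extend(["-e", target])
            i += 2
            continue
        if dep.startswith("-e "):
            target = dep[3:].strip()
            if not target or target.startswith("-"):
                raise ValueError("Editable target after '-e' must be a path or URL, not an option")
            targets.extend(["-e", target])
        else:
            targets.append(dep)
        i += 1
    return targets

print(expand_editable_deps(["numpy>=2", "-e", "./pkg", "-e ../other"]))
# → ['numpy>=2', '-e', './pkg', '-e', '../other']
```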

install_targets.append(dep)
i += 1

cmd = cmd_prefix + install_targets + common_args

with subprocess.Popen( # noqa: S603 # Trusted: validated pip/uv install cmd
cmd,
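The new fallback in `exclude_satisfied_requirements` keys off pip's error text, and that detection can be exercised in isolation (a sketch; pyisolate's real function also runs the subprocess and filters requirements):

```python
import subprocess

def pip_is_missing(exc: subprocess.CalledProcessError) -> bool:
    # uv-created venvs without --seed ship no pip, so `python -m pip`
    # fails with "No module named pip" on stderr. stderr can be None
    # when output was not captured, hence the `or ""` guard.
    return "No module named pip" in (exc.stderr or "")

missing = subprocess.CalledProcessError(
    1, ["python", "-m", "pip", "list"],
    stderr="/usr/bin/python: No module named pip\n",
)
other = subprocess.CalledProcessError(1, ["python", "-m", "pip", "list"], stderr=None)
print(pip_is_missing(missing), pip_is_missing(other))
# → True False
```

Matching on error text is brittle if pip's message ever changes; pairing it with the `--seed` change in `create_venv` (so new venvs do get pip) keeps this path as a rarely hit fallback.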
21 changes: 10 additions & 11 deletions pyisolate/_internal/model_serialization.py
@@ -36,24 +36,23 @@ def serialize_for_isolation(data: Any) -> Any:
"""
type_name = type(data).__name__

# If this object originated as a RemoteObjectHandle, prefer to send the
# handle back to the isolated process rather than attempting to pickle the
# concrete instance. This preserves identity (and avoids pickling large or
# unpicklable objects) while still allowing host-side consumers to interact
# with the resolved object.
from .remote_handle import RemoteObjectHandle

handle = getattr(data, "_pyisolate_remote_handle", None)
if isinstance(handle, RemoteObjectHandle):
return handle

# Adapter-registered serializers take precedence over built-in handlers
registry = SerializerRegistry.get_instance()
if registry.has_handler(type_name):
serializer = registry.get_serializer(type_name)
if serializer:
return serializer(data)

# If this object originated as a RemoteObjectHandle, send the original
# handle only when no adapter serializer is available for this type.
# This avoids cross-extension stale handle reuse for serializer-backed
# objects (e.g. CLIP/ModelPatcher/VAE refs).
from .remote_handle import RemoteObjectHandle

handle = getattr(data, "_pyisolate_remote_handle", None)
if isinstance(handle, RemoteObjectHandle):
return handle

torch, _ = get_torch_optional()
if torch is not None and isinstance(data, torch.Tensor):
if data.is_cuda: