…m skills

Introduces a 377-line abstract base class that standardizes the stdin/stdout JSONL protocol, device selection, config loading (AEGIS_SKILL_PARAMS + CLI + file), graceful signal handling, and performance telemetry for all transform skills. New skills subclass TransformSkillBase and implement only load_model() and transform_frame().
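The subclassing contract described above can be sketched as follows. This is a minimal stand-in, not the real 377-line base class: the method names (load_model, transform_frame) come from the commit text, while the run() loop and message fields are simplified assumptions.

```python
import json
import sys
from abc import ABC, abstractmethod

class TransformSkillBase(ABC):
    """Simplified stand-in for the real base class: reads JSONL frames
    on stdin, writes JSONL results on stdout."""

    @abstractmethod
    def load_model(self):
        ...

    @abstractmethod
    def transform_frame(self, frame: dict) -> dict:
        ...

    def run(self, stream=sys.stdin, out=sys.stdout):
        # The real base class also handles device selection, config
        # loading, signals, and telemetry around this loop.
        self.load_model()
        for line in stream:
            msg = json.loads(line)
            out.write(json.dumps(self.transform_frame(msg)) + "\n")

class EchoSkill(TransformSkillBase):
    """Toy skill: a real skill would run model inference here."""

    def load_model(self):
        self.loaded = True

    def transform_frame(self, frame):
        return {"frame_id": frame.get("frame_id"), "status": "ok"}
```

A skill author implements only the two abstract methods; everything else is inherited.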
…t defaults

Refactors the depth-estimation skill to subclass TransformSkillBase, reducing transform.py from ~160 lines of boilerplate to ~100 lines of pure skill logic. Key changes:
- Default blend_mode changed to 'depth_only' for privacy anonymization
- Version bumped to 1.1.0, category set to 'privacy'
- SKILL.md documents the TransformSkillBase interface for new skill authors
- Protocol updated: frame_id tracking, config-update command, base64 output
- Adds on_config_update() for live parameter changes from Aegis
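The protocol changes above (frame_id tracking, base64 output, config-update) can be illustrated with a short sketch. Field names like "event" and "params" are assumptions, not the exact Aegis wire format.

```python
import base64
import json

def encode_result(frame_id: int, raw_bytes: bytes) -> str:
    """Build an output event carrying the transformed frame as base64.
    frame_id is echoed back so Aegis can match request to response."""
    return json.dumps({
        "event": "frame",
        "frame_id": frame_id,
        "data": base64.b64encode(raw_bytes).decode("ascii"),
    })

def apply_config_update(msg: dict, params: dict) -> dict:
    """Handle the config-update command: merge new parameters live,
    the way on_config_update() would, without restarting the skill."""
    if msg.get("command") == "config-update":
        params.update(msg.get("params", {}))
    return params
```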
Adds 'privacy' as a new skill category in skills.json for transforms that anonymize camera feeds (depth maps, blur, blind mode). Registers the depth-estimation skill (v1.1.0) with privacy-specific capabilities (live_transform, privacy_overlay) and UI unlock flags (blind_mode).
…t + pin torch/torchvision versions

- Replace basic _select_device() with full HardwareEnv.detect() from skills/lib/env_config.py
- Supports: NVIDIA CUDA, AMD ROCm, Apple MPS/Neural Engine, Intel OpenVINO/NPU, CPU
- Pin torch~=2.7.0 and torchvision~=0.22.0 to prevent pip resolver conflicts
- Move torch/torchvision above depth-anything-v2 in requirements.txt for install order
- Expose self.env (HardwareEnv) to subclasses for GPU name, memory, backend info
- Include backend and gpu_name in ready event for Aegis UI display
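The detection order HardwareEnv.detect() implies (CUDA → ROCm → MPS → CPU) can be sketched with torch's public availability checks. This simplified version omits the Intel OpenVINO/NPU probe and degrades to "cpu" when torch is not installed; it is illustrative, not the real env_config.py.

```python
def detect_backend() -> str:
    """Return a best-guess compute backend name for this machine."""
    try:
        import torch
    except ImportError:
        return "cpu"
    if torch.cuda.is_available():
        # torch.version.hip is set on ROCm builds and None on CUDA builds,
        # so it distinguishes AMD from NVIDIA behind the same cuda API.
        return "rocm" if getattr(torch.version, "hip", None) else "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"
```

The returned string is the kind of value the ready event could surface to the Aegis UI alongside gpu_name.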
…package
torch.hub.load('LiheYoung/Depth-Anything-V2', ...) returns 404.
Switch to direct DepthAnythingV2 class from depth_anything_v2 pip package
with weights downloaded via huggingface_hub.hf_hub_download (cached).
Tested: model loads successfully on MPS (Apple Silicon).
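The replacement load path can be sketched as below. hf_hub_download and the DepthAnythingV2 constructor arguments follow the public Depth-Anything-V2 release; the repo ids, checkpoint filenames, and ViT-S config are assumptions drawn from that upstream project, not from this PR's code.

```python
# Encoder size -> released checkpoint filename (assumed from upstream).
CHECKPOINTS = {
    "vits": "depth_anything_v2_vits.pth",
    "vitb": "depth_anything_v2_vitb.pth",
    "vitl": "depth_anything_v2_vitl.pth",
}

# Encoder size -> Hugging Face model repo (assumed from upstream).
REPOS = {
    "vits": "depth-anything/Depth-Anything-V2-Small",
    "vitb": "depth-anything/Depth-Anything-V2-Base",
    "vitl": "depth-anything/Depth-Anything-V2-Large",
}

def checkpoint_filename(encoder: str) -> str:
    return CHECKPOINTS[encoder]

def load_depth_model(encoder: str = "vits", device: str = "cpu"):
    """Load DepthAnythingV2 directly, replacing the 404ing torch.hub path.
    Imports are local so the mappings above work without torch installed."""
    import torch
    from depth_anything_v2.dpt import DepthAnythingV2
    from huggingface_hub import hf_hub_download

    # Cached download (~/.cache/huggingface by default).
    ckpt = hf_hub_download(REPOS[encoder], checkpoint_filename(encoder))
    # ViT-S config per the upstream README; other encoders use larger values.
    model = DepthAnythingV2(encoder="vits", features=64,
                            out_channels=[48, 96, 192, 384])
    model.load_state_dict(torch.load(ckpt, map_location="cpu"))
    return model.to(device).eval()
```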
…LLM tests

- Topic Classification: remove '3-6 words' / 'short phrase' from prompts, now just 'Respond with ONLY the topic title'
- Remove word count assertion (wc <= 8) and upper char bounds
- Chat & JSON: remove upper-bound char limits (<2000, <500, <3000)
- Narrative Synthesis: remove <4000 char limit
- Contradictory Instructions: 'under 50 words' -> 'succinct'
- Context Preprocessing: 'brief 1-line summary' -> 'summary'

LLMs perform poorly on fixed word count targets. Validation assertions for minimum response length and JSON structure preserved.
Was only a transitive dep via gradio/depth-anything-v2, getting dropped by pip's resolver. Now explicitly required for hf_hub_download.
- Change depth-estimation category from Transformation to Privacy
- Mark depth-estimation as ✅ Ready (was 📐 Planned)
- Add dedicated '🔒 Privacy — Depth Map Anonymization' section
- Link to TransformSkillBase for building new privacy skills
fix(depth-estimation): use --ignore-requires-python for Python 3.11 compat

The depth-anything-v2 PyPI wheel (0.1.0) declares python_requires>=3.12 but is pure Python (py3-none-any) and works on 3.11+. Updated SKILL.md setup instructions and added a comment in requirements.txt so the deployment agent uses the correct pip flags.
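The requirements.txt comment mentioned above presumably looks something like the sketch below; the exact wording and pin are assumptions (the 0.1.0 version comes from the commit text, and --ignore-requires-python is a standard pip flag).

```
# depth-anything-v2 0.1.0 declares python_requires>=3.12 but the wheel is
# pure Python (py3-none-any) and runs on 3.11+. The deployment agent must
# install it with: pip install --ignore-requires-python depth-anything-v2
depth-anything-v2==0.1.0
```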
On macOS, loads CoreML .mlpackage from ~/.aegis-ai/models/feature-extraction/ using coremltools (Neural Engine). Auto-downloads from apple/coreml-depth-anything-v2-small on HuggingFace if not present. On other platforms, falls back to PyTorch DepthAnythingV2 + hf_hub_download. Verified: CoreML inference at 65.7ms/frame (~15 FPS) on Apple Silicon.
- requirements.txt: add coremltools>=8.0 (darwin-only platform marker)
- SKILL.md: v1.2.0, hardware backend table, CoreML variant parameter
macOS: installs coremltools + common deps only (fast, ~10s) and auto-downloads DepthAnythingV2SmallF16.mlpackage from HF. Other platforms: full PyTorch stack via requirements.txt.
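The platform split described above reduces to a small selection step: CoreML on macOS, PyTorch everywhere else. The paths and repo id below come from the commit text; the helper functions themselves are an illustrative sketch, not the skill's real code.

```python
import sys
from pathlib import Path

# From the commit text: CoreML model repo and local cache location.
COREML_REPO = "apple/coreml-depth-anything-v2-small"
MODEL_DIR = Path.home() / ".aegis-ai" / "models" / "feature-extraction"

def choose_backend(platform: str = sys.platform) -> str:
    """Neural Engine via CoreML on macOS, PyTorch elsewhere."""
    return "coreml" if platform == "darwin" else "pytorch"

def model_source(platform: str = sys.platform):
    if choose_backend(platform) == "coreml":
        # Loaded with coremltools; auto-downloaded from COREML_REPO
        # when the .mlpackage is missing locally.
        return MODEL_DIR / "DepthAnythingV2SmallF16.mlpackage"
    # Fallback path: PyTorch DepthAnythingV2 weights via hf_hub_download.
    return "hf_hub_download"
```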