
Develop #152

Open
solderzzc wants to merge 13 commits into master from develop

Conversation

@solderzzc
Member

No description provided.

solderzzc and others added 13 commits March 14, 2026 16:29
…m skills

Introduces a 377-line abstract base class that standardizes the stdin/stdout
JSONL protocol, device selection, config loading (AEGIS_SKILL_PARAMS + CLI +
file), graceful signal handling, and performance telemetry for all transform
skills. New skills subclass TransformSkillBase and implement load_model() and
transform_frame() only.
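The two-method contract above can be sketched as follows. This is a minimal illustration of the stdin/stdout JSONL protocol and the subclassing pattern, not the real 377-line base class; the message field names (`frame_id`, `data`) come from the commit messages, while the method of dispatch shown here is an assumption.

```python
import base64
import json
from abc import ABC, abstractmethod


class TransformSkillBase(ABC):
    """Minimal sketch of the base-class contract. The real class also
    handles device selection, config loading, signal handling, and
    telemetry; only the frame round-trip is shown here."""

    def __init__(self):
        self.model = None

    @abstractmethod
    def load_model(self):
        """Load weights onto the selected device; called once at startup."""

    @abstractmethod
    def transform_frame(self, frame_bytes: bytes) -> bytes:
        """Transform one decoded frame and return the new frame bytes."""

    def handle_message(self, line: str) -> str:
        """One stdin JSONL message in, one stdout JSONL message out."""
        msg = json.loads(line)
        out = self.transform_frame(base64.b64decode(msg["data"]))
        return json.dumps({
            "frame_id": msg["frame_id"],  # echoed back for frame tracking
            "data": base64.b64encode(out).decode("ascii"),
        })


class InvertSkill(TransformSkillBase):
    """Toy subclass: inverts every byte instead of running a real model."""

    def load_model(self):
        self.model = lambda b: bytes(255 - x for x in b)

    def transform_frame(self, frame_bytes: bytes) -> bytes:
        return self.model(frame_bytes)
```

A real skill would replace the lambda with model loading in `load_model()` and inference in `transform_frame()`; everything else is inherited.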
…t defaults

Refactors depth-estimation skill to subclass TransformSkillBase, reducing
transform.py from ~160 lines of boilerplate to ~100 lines of pure skill logic.

Key changes:
- Default blend_mode changed to 'depth_only' for privacy anonymization
- Version bumped to 1.1.0, category set to 'privacy'
- SKILL.md documents the TransformSkillBase interface for new skill authors
- Protocol updated: frame_id tracking, config-update command, base64 output
- Adds on_config_update() for live parameter changes from Aegis
Adds 'privacy' as a new skill category in skills.json for transforms that
anonymize camera feeds (depth maps, blur, blind mode). Registers the
depth-estimation skill (v1.1.0) with privacy-specific capabilities
(live_transform, privacy_overlay) and UI unlock flags (blind_mode).
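A registration entry matching that description might look like the fragment below. The version, category, capability names, and `blind_mode` flag come from the commit message; the exact key names (`capabilities`, `ui_unlocks`) are assumptions about the skills.json schema.

```json
{
  "depth-estimation": {
    "version": "1.1.0",
    "category": "privacy",
    "capabilities": ["live_transform", "privacy_overlay"],
    "ui_unlocks": ["blind_mode"]
  }
}
```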
…t + pin torch/torchvision versions

- Replace basic _select_device() with full HardwareEnv.detect() from skills/lib/env_config.py
- Supports: NVIDIA CUDA, AMD ROCm, Apple MPS/Neural Engine, Intel OpenVINO/NPU, CPU
- Pin torch~=2.7.0 and torchvision~=0.22.0 to prevent pip resolver conflicts
- Move torch/torchvision above depth-anything-v2 in requirements.txt for install order
- Expose self.env (HardwareEnv) to subclasses for GPU name, memory, backend info
- Include backend and gpu_name in ready event for Aegis UI display
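The detection order above can be sketched as a fallback chain. The probe callables and the `HardwareEnv` fields shown here are illustrative, not the actual `skills/lib/env_config.py` API; in the real module the probes would wrap checks like `torch.cuda.is_available()`.

```python
from dataclasses import dataclass
from typing import Callable, Mapping


@dataclass
class HardwareEnv:
    backend: str        # e.g. "cuda", "rocm", "mps", "openvino", "cpu"
    gpu_name: str = ""  # surfaced in the skill's `ready` event for the UI

    @classmethod
    def detect(cls, probes: Mapping[str, Callable[[], bool]]) -> "HardwareEnv":
        # First available backend wins; CPU is the guaranteed fallback.
        for backend in ("cuda", "rocm", "mps", "openvino"):
            probe = probes.get(backend)
            if probe and probe():
                return cls(backend=backend)
        return cls(backend="cpu")
```

Injecting the probes as callables keeps the ordering logic testable without a GPU present.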
…package

torch.hub.load('LiheYoung/Depth-Anything-V2', ...) returns 404.
Switch to direct DepthAnythingV2 class from depth_anything_v2 pip package
with weights downloaded via huggingface_hub.hf_hub_download (cached).

Tested: model loads successfully on MPS (Apple Silicon).
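The replacement load path might look like the sketch below. Only `hf_hub_download`, the `depth_anything_v2` package, and the `DepthAnythingV2` class are named in the commit; the repo id, checkpoint filenames, and constructor kwargs are assumptions based on the Depth-Anything-V2 release convention, and the heavy imports are deferred so the pure helper stays testable.

```python
def checkpoint_filename(encoder: str) -> str:
    """Map an encoder variant to its checkpoint name (naming assumed
    from the Depth-Anything-V2 release convention)."""
    if encoder not in ("vits", "vitb", "vitl"):
        raise ValueError(f"unknown encoder: {encoder}")
    return f"depth_anything_v2_{encoder}.pth"


def load_depth_model(encoder: str = "vits"):
    """Replacement for the 404ing torch.hub.load() path: fetch the
    checkpoint via huggingface_hub (cached locally) and load it into
    the pip-packaged DepthAnythingV2 class. Repo id and model kwargs
    below are assumptions, not verified values."""
    import torch                                     # lazy: heavy deps
    from depth_anything_v2.dpt import DepthAnythingV2
    from huggingface_hub import hf_hub_download

    ckpt = hf_hub_download(
        repo_id="depth-anything/Depth-Anything-V2-Small",  # assumed repo id
        filename=checkpoint_filename(encoder),
    )
    model = DepthAnythingV2(encoder=encoder)
    model.load_state_dict(torch.load(ckpt, map_location="cpu"))
    return model.eval()
```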
…LLM tests

- Topic Classification: remove '3-6 words' / 'short phrase' from prompts,
  now just 'Respond with ONLY the topic title'
- Remove word count assertion (wc <= 8) and upper char bounds
- Chat & JSON: remove upper-bound char limits (<2000, <500, <3000)
- Narrative Synthesis: remove <4000 char limit
- Contradictory Instructions: 'under 50 words' -> 'succinct'
- Context Preprocessing: 'brief 1-line summary' -> 'summary'

LLMs perform poorly against fixed word-count targets. Validation
assertions for minimum response length and JSON structure are preserved.
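The preserved checks could look like the sketch below. The function names and required-key handling are hypothetical; only the policy (keep minimum-length and JSON-structure assertions, drop word counts and upper character bounds) comes from the commit message.

```python
import json


def validate_topic(response: str) -> None:
    """Relaxed topic checks: non-empty, single line. The old
    `len(text.split()) <= 8` word count and upper char bounds
    are intentionally gone."""
    text = response.strip()
    assert text, "empty topic response"
    assert "\n" not in text, "topic should be a single line"


def validate_json_reply(response: str, required_keys: set[str]) -> dict:
    """JSON-structure assertion preserved; size limits dropped."""
    data = json.loads(response)
    missing = required_keys - data.keys()
    assert not missing, f"missing keys: {missing}"
    return data
```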
huggingface_hub was only a transitive dependency via gradio/depth-anything-v2
and was getting dropped by pip's resolver. It is now explicitly required
for hf_hub_download.
- Change depth-estimation category from Transformation to Privacy
- Mark depth-estimation as ✅ Ready (was 📐 Planned)
- Add dedicated '🔒 Privacy — Depth Map Anonymization' section
- Link to TransformSkillBase for building new privacy skills
…ompat

The depth-anything-v2 PyPI wheel (0.1.0) declares python_requires>=3.12
but is pure Python (py3-none-any) and works on 3.11+. Updated SKILL.md
setup instructions and added a comment in requirements.txt so the
deployment agent uses the correct pip flags.
fix(depth-estimation): use --ignore-requires-python for Python 3.11 c…
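The requirements.txt comment described above might read like the fragment below. The version pins (`torch~=2.7.0`, `torchvision~=0.22.0`, depth-anything-v2 0.1.0), the install order, and the `--ignore-requires-python` flag come from the commit messages; the exact comment wording is an assumption.

```
# depth-anything-v2 0.1.0 declares python_requires>=3.12 but the wheel is
# pure Python (py3-none-any) and runs on 3.11; install it with:
#   pip install --ignore-requires-python depth-anything-v2==0.1.0
torch~=2.7.0
torchvision~=0.22.0
depth-anything-v2==0.1.0
```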
On macOS, loads CoreML .mlpackage from ~/.aegis-ai/models/feature-extraction/
using coremltools (Neural Engine). Auto-downloads from
apple/coreml-depth-anything-v2-small on HuggingFace if not present.

On other platforms, falls back to PyTorch DepthAnythingV2 + hf_hub_download.

Verified: CoreML inference at 65.7ms/frame (~15 FPS) on Apple Silicon.

- requirements.txt: add coremltools>=8.0 (darwin-only platform marker)
- SKILL.md: v1.2.0, hardware backend table, CoreML variant parameter
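The macOS/other split could be sketched as below. The model directory, `.mlpackage` name, and HuggingFace repo id come from the commit messages; the function names, the `snapshot_download` call, and the eliding of the PyTorch branch are assumptions of this sketch, with the heavy imports deferred so the path logic stays portable.

```python
import sys
from pathlib import Path

# Directory layout, package name, and repo id are from the commit message.
MODEL_DIR = Path.home() / ".aegis-ai" / "models" / "feature-extraction"
COREML_PACKAGE = "DepthAnythingV2SmallF16.mlpackage"
COREML_REPO = "apple/coreml-depth-anything-v2-small"


def coreml_model_path() -> Path:
    """Where the CoreML variant is expected on macOS."""
    return MODEL_DIR / COREML_PACKAGE


def load_backend():
    """CoreML on macOS (Neural Engine via coremltools), PyTorch elsewhere."""
    if sys.platform == "darwin":
        import coremltools as ct                           # lazy heavy import
        path = coreml_model_path()
        if not path.exists():
            # Assumed download mechanism; the commit only says "auto-downloads".
            from huggingface_hub import snapshot_download
            snapshot_download(COREML_REPO, local_dir=MODEL_DIR)
        return ct.models.MLModel(str(path))
    # Non-macOS fallback: PyTorch DepthAnythingV2 + hf_hub_download
    raise NotImplementedError("PyTorch fallback elided in this sketch")
```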
macOS: installs coremltools + common deps only (fast ~10s),
auto-downloads DepthAnythingV2SmallF16.mlpackage from HF.
Other: full PyTorch stack via requirements.txt.
