
Depth Estimation Privacy Transforms
from deepcamera
Real-time monocular depth estimation skill for AI cameras using Depth Anything v2. Anonymizes scenes with colorized depth maps while preserving spatial layout.
What it does
Adds real-time depth estimation to AI camera feeds using the Depth Anything v2 model. The skill transforms live video frames into colorized depth maps where near objects appear warm and distant objects appear cool. In depth_only mode, it fully anonymizes scenes — hiding all visual identity while preserving spatial layout and activity patterns, making it suitable for privacy-compliant security monitoring.
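The near-warm/far-cool colorization described above can be sketched as a simple ramp over a normalized depth map. This is a minimal illustration, not the skill's actual colormap implementation; `colorize_depth` is a hypothetical helper name.

```python
import numpy as np

def colorize_depth(depth: np.ndarray) -> np.ndarray:
    """Map a relative depth map to RGB: near -> warm (red), far -> cool (blue).

    `depth` is an HxW float array; smaller values are nearer. A real
    skill would use a richer colormap (e.g. turbo/inferno) instead of
    this two-color ramp.
    """
    d = depth.astype(np.float32)
    span = d.max() - d.min()
    # Normalize to [0, 1]; guard against a flat (constant-depth) map.
    norm = (d - d.min()) / span if span > 0 else np.zeros_like(d)
    rgb = np.empty((*d.shape, 3), dtype=np.uint8)
    rgb[..., 0] = ((1.0 - norm) * 255).astype(np.uint8)  # red channel: near
    rgb[..., 1] = 0
    rgb[..., 2] = (norm * 255).astype(np.uint8)          # blue channel: far
    return rgb
```

Because only relative depth survives the transform, faces, clothing, and text are unrecoverable while scene geometry remains visible.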
When to use it
- When a security camera feed needs to anonymize occupants while still detecting activity
- When you want depth-overlay visualizations on live CCTV or webcam feeds
- When running on Apple Silicon and want Neural Engine acceleration (3–5× faster than MPS via CoreML)
- When building privacy-first surveillance or access monitoring systems
What's included
- Instructions: Full protocol spec for frame input/output (JSONL stdin/stdout), config-update commands, and perf_stats events
- Hardware backends: CoreML (Apple Neural Engine) on macOS, PyTorch CUDA/CPU on Linux/Windows
- Interface: `TransformSkillBase` ABC with `load_model` and `transform_frame` hooks for custom extensions
- Display modes: `depth_only`, `overlay`, `side_by_side` with configurable opacity and colormap
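A custom extension hooks into `TransformSkillBase` by implementing `load_model` and `transform_frame`. The sketch below assumes a minimal shape for that ABC (only the two hooks named in this listing; the real base class likely carries more, e.g. config-update and perf_stats handling), and `InvertSkill` is a toy stand-in for an actual depth model.

```python
from abc import ABC, abstractmethod

import numpy as np

class TransformSkillBase(ABC):
    """Assumed minimal shape of the ABC named in the listing."""

    @abstractmethod
    def load_model(self) -> None:
        """One-time model setup before the first frame arrives."""

    @abstractmethod
    def transform_frame(self, frame: np.ndarray) -> np.ndarray:
        """Transform one frame and return the result."""

class InvertSkill(TransformSkillBase):
    """Toy transform: inverts pixel values (stands in for depth estimation)."""

    def load_model(self) -> None:
        # A real skill would load Depth Anything v2 weights here,
        # picking the CoreML or PyTorch backend by platform.
        self.loaded = True

    def transform_frame(self, frame: np.ndarray) -> np.ndarray:
        return 255 - frame
```

The hook split keeps expensive setup out of the per-frame path: `load_model` runs once at startup, `transform_frame` on every frame.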
Compatible agents
Designed for the SharpAI Aegis AI camera platform. Integrates as a pluggable transform skill via stdin/stdout JSONL protocol. Compatible with any Python-based agent that can launch a subprocess and pipe frame events.
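Integration from a host agent can be sketched as below: spawn the skill as a subprocess and exchange newline-delimited JSON over its pipes. The event fields (`type`, `frame_id`, `depth_map`) are illustrative assumptions, not the documented schema, and the child here is a stub that echoes events rather than the real skill process.

```python
import json
import subprocess
import sys

# Stub standing in for the skill subprocess: reads frame events from
# stdin, attaches a placeholder result, writes them back as JSONL.
CHILD = r"""
import json, sys
for line in sys.stdin:
    event = json.loads(line)
    event["depth_map"] = "<colorized frame payload>"  # placeholder
    print(json.dumps(event), flush=True)
"""

proc = subprocess.Popen(
    [sys.executable, "-c", CHILD],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

# Send one frame event and read the transformed result back.
proc.stdin.write(json.dumps({"type": "frame", "frame_id": 1}) + "\n")
proc.stdin.flush()
reply = json.loads(proc.stdout.readline())

proc.stdin.close()
proc.wait()
```

Any agent that can launch a subprocess and read/write line-delimited JSON can integrate the skill this way, regardless of host language.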
Information
- Repository: deepcamera
- Stars: 2,637
- Installs: 0