Run a tool-driven RLM (Reinforcement/Repository-Linked Model) security audit for large legacy .NET codebases. This skill orchestrates audit.py to scan repositories without loading the entire codebase into the model context, producing a human-readable security_audit_report.md and machine artifacts (metadata and a manifest) with file/line evidence and concrete fixes. It includes tuning knobs for planner iterations, token/output bounds, tool payload limits, and runtime timeouts so audits scale to massive repositories.
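As a rough sketch of how an agent might wire up those tuning knobs, the snippet below assembles an audit.py command line. The flag names (--max-iterations, --max-output-tokens, --tool-payload-limit, --timeout, --report) are illustrative assumptions, not confirmed options of audit.py; check the script's own help output before relying on them.

```python
import subprocess

# Hypothetical invocation of audit.py; the flag names below are illustrative
# assumptions, not confirmed options -- verify with `python audit.py --help`.
def build_audit_command(repo_path,
                        max_planner_iterations=12,
                        max_output_tokens=8000,
                        tool_payload_limit=64000,
                        timeout_seconds=1800):
    """Assemble the CLI call carrying the tuning knobs described above."""
    return [
        "python", "audit.py",
        "--repo", repo_path,
        "--max-iterations", str(max_planner_iterations),
        "--max-output-tokens", str(max_output_tokens),
        "--tool-payload-limit", str(tool_payload_limit),
        "--timeout", str(timeout_seconds),
        "--report", "security_audit_report.md",
    ]

cmd = build_audit_command("/path/to/legacy-dotnet-repo")
# subprocess.run(cmd, check=True)  # uncomment to actually run the audit
print(" ".join(cmd))
```

Keeping the command construction separate from execution makes it easy to log or dry-run the exact knob values before committing to a long audit.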
Use this skill when you need a privacy-conscious automated security review of a large legacy .NET repository where you cannot or do not want to feed the full repository into an LLM, you need prioritized findings with file/line evidence, or you must tune runtime and iteration limits to avoid stalls and truncation. Ideal for baseline audits, triage runs, and regression checks after dependency or build changes.
Best used by agents with shell/CLI access and tool orchestration capabilities (Codex/Copilot-style or OpenClaw agents that can run audit scripts and manage local model endpoints).
This skill has not been reviewed by our automated audit pipeline yet.