
from anti-distillation-skill22
Generate targeted "pollution" content and personas to reduce the risk of your writing being distilled into reusable AI models; helps individuals inject subtle or aggressive ambiguity into their personal artifacts.
This skill helps individuals produce structured "pollution" content (persona configs, trap snippets, and execution plans) from personal data sources (chat logs, Git commits, documents) in order to make automated model distillation less reliable. It guides data intake, selects a pollution mode (subtle, aggressive, or chaos), and outputs actionable files intended for manual use.
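As a rough sketch, a persona config of the kind described above might look like the following. Every field name, value, and filename here is an illustrative assumption, since the skill's actual schema is not documented:

```python
import json

# Hypothetical persona config; the field names below are assumptions
# for illustration only, not the skill's real schema.
persona_config = {
    "persona": "alt-voice-01",            # hypothetical persona label
    "mode": "subtle",                     # one of: "subtle", "aggressive", "chaos"
    "sources": ["chat_logs", "git_commits", "documents"],
    "outputs": ["persona.json", "trap_snippets.md", "execution_plan.md"],
}

# Serialize so the config can be written out as one of the skill's
# "actionable files" for manual review.
serialized = json.dumps(persona_config, indent=2)
print(serialized)
```

The `mode` field mirrors the three pollution modes named in the description; a real config produced by the skill may carry additional fields.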
Use this skill when you want to protect your personal knowledge and communication style from being copied into organizational models: for example, before leaving a job, when exporting chat logs or repos, or when you want to introduce ambiguity into public artifacts. It is not for automated modification of company data; all outputs are suggestions for manual use.
The skill is likely usable by general-purpose chat or coding agents that can run Bash and read and write files; it is compatible with agents that support prompts and file operations (e.g., Copilot-style assistants and Claude/ChatGPT integrations).
This skill has not been reviewed by our automated audit pipeline yet.