
from happycapy-skills86
Run parallel queries against multiple models and view side-by-side responses in a live dashboard; synthesize consensus and run anonymous model voting.
LLM Council runs parallel queries across multiple AI models and exposes a live web dashboard for comparing responses, synthesizing a consensus, and collecting anonymous model-to-model votes. It includes a lightweight Python server, AI Gateway client code, and a static JavaScript dashboard that consumes Server-Sent Events (SSE) streams for interactive comparison.
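The core fan-out pattern can be sketched in a few lines. This is a minimal illustration, not the skill's actual code: `query_model` is a hypothetical stand-in for the real AI Gateway client, and here it just echoes a canned reply so the sketch runs without network access.

```python
import asyncio

async def query_model(model: str, prompt: str) -> tuple[str, str]:
    # Hypothetical stand-in for a real AI Gateway call; the actual skill
    # streams each model's tokens to the dashboard over SSE instead.
    await asyncio.sleep(0)  # placeholder for network latency
    return model, f"[{model}] answer to: {prompt}"

async def council(prompt: str, models: list[str]) -> dict[str, str]:
    # Fan the same prompt out to every model concurrently and collect
    # the responses keyed by model name, one dashboard column per model.
    results = await asyncio.gather(*(query_model(m, prompt) for m in models))
    return dict(results)

responses = asyncio.run(
    council("What is SSE?", ["model-a", "model-b", "model-c"])
)
```

`asyncio.gather` keeps total latency close to the slowest single model rather than the sum of all of them, which is what makes live side-by-side comparison practical.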
Use it when you want to compare outputs from different large language models (research, prompt testing, multi-model evaluation), when you need a quick visual side-by-side comparison, or when you want to aggregate model opinions into a single synthesized answer. It is particularly useful for teams evaluating model behavior, running multi-model experiments, or debugging model disagreement.
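The anonymous voting step reduces to a simple tally once each model has picked the response it considers best. A minimal sketch, assuming votes arrive as a voter-to-candidate mapping with self-votes already excluded upstream (the skill's own vote format may differ):

```python
from collections import Counter

def tally_votes(votes: dict[str, str]) -> tuple[str, int]:
    # votes maps each voting model to the model whose answer it preferred;
    # anonymity means voters see answers without model labels attached.
    counts = Counter(votes.values())
    winner, n = counts.most_common(1)[0]
    return winner, n

votes = {"model-a": "model-b", "model-b": "model-c", "model-c": "model-b"}
winner, n = tally_votes(votes)  # → ("model-b", 2)
```

A plurality tally like this is the simplest aggregation; a synthesized consensus answer would instead feed all responses back to one model as context.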
Best suited to agents that can run local scripts and expose ports (Claude Code, Codex, and other agents with shell access), and to human-in-the-loop workflows where a browser dashboard is acceptable.
This skill has not been reviewed by our automated audit pipeline yet.