
from brain-in-the-fish53
Evaluates documents against criteria using multi-agent, ontology-grounded scoring to produce evidence-backed, auditable evaluation reports.
- Tools: MCP tools (`eval_ingest`, `eval_criteria`, `eval_spawn`, `eval_record_score`, etc.) for orchestration.
- References: Detailed workflow, scoring guidelines, supported document types, and architecture notes describing the ontologies and the evidence scorer.
- Instructions: Clear orchestration patterns for quick deterministic runs and for full multi-agent scoring with debate/challenge cycles.

## Compatible agents

Best used with MCP-style agents that support subagent dispatch and long-running evaluation flows; also compatible with Claude and other multi-turn LLM subagents for scoring tasks.
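The quick deterministic run described above can be sketched as a linear sequence of tool calls. This is a hypothetical illustration: only the tool names (`eval_ingest`, `eval_criteria`, `eval_spawn`, `eval_record_score`) come from the skill description; the `call_tool` helper, all parameter names, and the return shapes are assumptions, not the skill's documented API.

```python
# Hypothetical orchestration sketch. Only the four tool names are taken from
# the skill description; call_tool, its parameters, and return values are
# stand-ins for whatever MCP client the host agent actually uses.

def call_tool(name, **kwargs):
    """Stand-in for an MCP client's tool-invocation call (hypothetical)."""
    print(f"calling {name} with {sorted(kwargs)}")
    return {"ok": True, "tool": name}

def run_quick_evaluation(document_text, criteria):
    # 1. Ingest the document to be evaluated.
    call_tool("eval_ingest", document=document_text)
    # 2. Register the criteria to score against.
    call_tool("eval_criteria", criteria=criteria)
    # 3. Spawn a scoring subagent (a full multi-agent run would spawn
    #    several and add debate/challenge cycles between them).
    call_tool("eval_spawn", role="scorer")
    # 4. Record a score with supporting evidence for auditability.
    return call_tool(
        "eval_record_score",
        criterion=criteria[0],
        score=4,
        evidence="quoted passage supporting the score",
    )

result = run_quick_evaluation("Sample document text.", ["clarity"])
```

A full multi-agent run would repeat steps 3 and 4 per subagent and per criterion, interleaving debate/challenge cycles before the final scores are recorded.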