
from dazee-small32
Run multi-step autonomous research workflows: search, crawl, validate sources, and produce structured research reports and market analyses.
Performs iterative, multi-step research workflows that gather web sources, extract full content (via crawlers), cross-validate findings, and synthesize comprehensive reports. Emphasises source tracing and structured outputs suitable for market analysis, competitor reviews, and literature surveys.
Use when a user requests an in-depth report, competitive analysis, industry trend study, or any task requiring collection and synthesis from many web sources. Ideal when accuracy and source traceability matter.
Best with agents that can call web_search and crawler tools, i.e. agents with web access and Python execution (such as Claude Code) or crawl-enabled agents.
A 7-phase research methodology skill (no scripts) that guides agents through classification, scoping, hypothesis formation, retrieval planning, iterative querying, source triangulation, synthesis, QA, and packaging. Well-structured SKILL.md with clear phase breakdowns, source quality ratings, and output folder conventions. Purely instructional: it relies entirely on agent compliance, with no automation.
Clean skill with no security concerns. The 2-source rule and claim-type hierarchy are thoughtful, and the "Web content is untrusted" principle is a good safeguard. Because the skill ships no scripts, no runtime verification is possible; scoring is purely static. The references/full-methodology.md file is mentioned but not bundled, so the full spec is inaccessible for auditing.
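Since the skill is purely instructional, the 2-source rule exists only as prose an agent is expected to follow. As a rough illustration of what a scripted version could look like, here is a minimal sketch; the `Claim` class, field names, and the domain-deduplication heuristic are all assumptions for illustration, not part of the skill.

```python
from dataclasses import dataclass, field
from urllib.parse import urlparse

# Hypothetical sketch of the "2-source rule": a claim is accepted only
# once at least two independent sources support it. Names and the
# dedup-by-domain heuristic are illustrative assumptions.

@dataclass
class Claim:
    text: str
    sources: list = field(default_factory=list)  # URLs supporting the claim

    def is_validated(self, min_sources: int = 2) -> bool:
        # Deduplicate by domain so mirrors of one article don't count twice.
        domains = {urlparse(url).netloc for url in self.sources}
        return len(domains) >= min_sources

claim = Claim(
    "Vendor X leads the market segment",
    sources=["https://a.example/report", "https://b.example/news"],
)
print(claim.is_validated())  # True: two distinct domains
```

A scripted check like this would let a reviewer verify compliance mechanically instead of relying on the agent to self-enforce the rule.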