
from awesome-agent-skills-for-empirical-research
Guidance and reference for the marginaleffects R/Python package: computing predictions, comparisons, slopes, and average treatment effects, with practical examples.
This skill encapsulates the manual and pedagogical material for the marginaleffects package and the companion book ‘Model to Meaning’. It helps agents guide users through choosing estimands (predictions, comparisons, slopes), constructing counterfactual grids, aggregating results (ATE, ATT, CATE), and selecting inference methods (delta, bootstrap, Bayesian). The skill includes language-specific examples for R and Python and explains how to use core functions like predictions(), comparisons(), slopes(), avg_* variants, and datagrid().
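A minimal R sketch of the core verbs the description names, using the built-in mtcars dataset and a logistic model purely for illustration (neither is prescribed by the skill):

```r
# Minimal sketch; assumes the marginaleffects package is installed.
library(marginaleffects)

# Illustrative model: probability of a manual transmission (am)
mod <- glm(am ~ hp + wt, data = mtcars, family = binomial)

# Unit-level predictions, then their average (an avg_* variant)
predictions(mod)
avg_predictions(mod)

# Average comparison: mean change in Pr(am = 1) for a unit increase in wt
avg_comparisons(mod, variables = "wt")

# Slopes evaluated on a counterfactual grid built with datagrid()
slopes(mod, newdata = datagrid(hp = c(100, 150), wt = mean(mtcars$wt)))
```

The same workflow carries over to the Python package, where predictions(), comparisons(), slopes(), the avg_* variants, and datagrid() have direct counterparts.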
Use when users ask how to interpret model outputs, compute marginal effects or average treatment effects, set up counterfactual comparisons, or run hypothesis tests on derived quantities. It's appropriate for both conceptual framing (five-question framework) and concrete code examples in R or Python.
Agents capable of generating or reviewing R/Python statistical code (Codex/Copilot-style assistants, Claude Code, Cursor) are the best fit; it is also useful for agents that help explain model outputs to non-technical stakeholders.
This skill has not been reviewed by our automated audit pipeline yet.
Obsidian CLI
Control and automate Obsidian vaults from the command line: read, create, search, manage notes and tasks, and support plugin development with reload, eval, and
STATA Code Patterns for Accounting Research
A pattern library of STATA code extracted from JAR replication files (2017–2025). Provides tested syntax for DiD, IV, RDD, event studies, survival analysis, reg