
from aaas-vault13
Guided playbook for integrating large language models into the game development lifecycle — design, prototyping, code generation, testing, and review.
A practical, end-to-end playbook that teaches agents how to use LLMs across game-development tasks: from concept and design prompts to iterative prototyping, AI-assisted coding, testing, and reviewing game systems. The skill codifies core principles (plan before prompting, context-first, trust-but-verify) and enforces the use of repository references for patterns, validations, and sharp-edge checks so that outputs stay grounded. It aims to make LLMs effective pair programmers for game-dev workflows rather than autonomous creators.
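The trust-but-verify principle can be sketched as a small gate that runs model-generated code against known-good cases before accepting it. This is a minimal illustration, not the skill's prescribed implementation; the function names and the damage formula are hypothetical.

```python
def trust_but_verify(candidate_fn, test_cases):
    """Run a generated function against known-good cases; return any failures.

    `test_cases` is a list of ((args...), expected) pairs. An empty return
    value means the candidate passed and may be accepted for review.
    """
    failures = []
    for args, expected in test_cases:
        try:
            result = candidate_fn(*args)
        except Exception as exc:  # generated code may raise anywhere
            failures.append((args, f"raised {exc!r}"))
            continue
        if result != expected:
            failures.append((args, f"expected {expected}, got {result}"))
    return failures

# Hypothetical model-generated gameplay code under review:
def generated_damage(attack, defense):
    return max(1, attack - defense // 2)

failures = trust_but_verify(
    generated_damage,
    [((10, 4), 8), ((3, 10), 1), ((5, 0), 5)],
)
# An empty `failures` list means the generated function cleared the gate.
```

The same gate slots naturally in front of a merge step, so model output is never trusted on inspection alone.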
Invoke this skill when the user asks for help with game-design prompts, prototyping mechanics, generating or refactoring game code, writing level scripts, producing NPC dialogue, or creating test cases for gameplay. Also useful for diagnosing AI-introduced bugs, producing prompt templates for iterative development, or setting up review guardrails before merging code generated by models.
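A prompt template for iterative development can be as simple as a context-first scaffold that the agent fills per cycle. The section names below are illustrative assumptions, not a fixed schema defined by this skill.

```python
# Hypothetical context-first prompt template; section names are illustrative.
PROMPT_TEMPLATE = """\
## Goal
{goal}

## Repository context
{context}

## Constraints
{constraints}

## Acceptance tests
{tests}
"""

def build_prompt(goal: str, context: str, constraints: str, tests: str) -> str:
    """Assemble one iteration's prompt, context before request."""
    return PROMPT_TEMPLATE.format(
        goal=goal, context=context, constraints=constraints, tests=tests
    )

prompt = build_prompt(
    goal="Refactor the jump logic to support double-jump",
    context="player_controller.gd, jump handling section (attached)",
    constraints="Keep existing input mapping; no new singletons",
    tests="second jump consumed after second press; resets on landing",
)
```

Keeping acceptance tests inside the prompt ties each generation cycle back to the review guardrails before any code is merged.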
Best fit for agent platforms that support code and reference grounding (Claude Code, Cursor, Agent Zero). The skill assumes the ability to read repository reference files and to execute iterative prompt/code cycles.
This skill has not been reviewed by our automated audit pipeline yet.