Code Mode for MCP Servers
from code-mode-skill
Add a sandboxed code mode tool to an MCP server so LLMs run small processing scripts against large API responses and only the script output enters the model context.
Triggers: code mode, context reduction, sandbox execution, reduce token usage, run user script, extract from large response
## What it does

Adds a "code mode" capability to an MCP server: when a tool returns large JSON payloads, the LLM supplies a small extraction script which runs in a sandboxed runtime against the raw data. Only the script's stdout is returned to the model, dramatically reducing the tokens passed into context (typically a 65–99% reduction).

## When to use it

Use this skill when tools produce very large responses (lists of pods, users, issues, etc.), when token cost or context window limits matter, or when you need LLM-written code to extract, aggregate, or filter large API results securely. Trigger on mentions of "reduce token usage", "shrink API responses", "sandbox execution", or "add code execution tool".

## What's included

- Scripts: none bundled, but the skill contains concrete planning and implementation steps for an executor and benchmark.
- References: links to inspiration and sandbox options are documented in references (sandbox-options.md, benchmark-pattern.md).
- Instructions: a step-by-step plan: discover the server language, select a sandbox (quickjs/pyodide/wasm/etc.), implement the executor, wire up the tool, and run benchmarks comparing before/after context sizes.

## Compatible agents

Best used with agents that can run planning + implementation flows (developer-facing agents such as Copilot, Claude Code, and similar code assistants). The skill is language/runtime-agnostic and recommends sandbox choices per server language (Node, Python, Go, Rust).
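The executor step can be sketched roughly as below. This is a minimal, hypothetical illustration in Python: `run_extraction_script` and the pod payload are invented for this example, and a bare subprocess is *not* real isolation — the skill recommends proper sandboxes (quickjs, pyodide, wasm) for that. The point it demonstrates is the data flow: the large JSON goes to the script on stdin, and only stdout comes back to the model.

```python
import json
import os
import subprocess
import sys
import tempfile

def run_extraction_script(script: str, raw_payload: dict, timeout: float = 5.0) -> str:
    """Run an LLM-supplied extraction script against a large payload.

    The raw payload is written to the script's stdin; only the script's
    stdout is returned, so the full JSON never enters model context.
    NOTE: a subprocess is only a sketch of the executor -- use a real
    sandbox runtime for untrusted code.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode, ignores user site-packages
            input=json.dumps(raw_payload),
            capture_output=True,
            text=True,
            timeout=timeout,
        )
    finally:
        os.unlink(path)
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr[:2000])  # cap error text returned to the model
    return proc.stdout

# Simulated large tool response: 500 pods, a few of them failing.
payload = {
    "pods": [
        {"name": f"pod-{i}", "status": "Running" if i % 7 else "CrashLoopBackOff"}
        for i in range(500)
    ]
}

# The kind of small script the LLM would supply instead of reading the payload.
script = (
    "import json, sys\n"
    "data = json.load(sys.stdin)\n"
    "for p in data['pods']:\n"
    "    if p['status'] != 'Running':\n"
    "        print(p['name'])\n"
)

out = run_extraction_script(script, payload)
```

Here `out` is a short newline-separated list of failing pod names, a tiny fraction of the size of the full payload — which is exactly the context reduction the benchmark step is meant to measure.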
Information
- Repository: code-mode-skill
- Stars: 20
- Installs: 0