
From `modal-auto-research-skills21`
Orchestrate multiple autonomous Claude Code agents across separate GPUs or sandboxes to run parallel experiments, debugging sessions, or batch workloads with structured result reporting.
This skill provides an orchestration pattern to spawn and manage multiple autonomous Claude Code agents, each running in its own GPU-backed sandbox or as a lightweight experiment caller. It describes deployment best practices (deploy once, call many), interactive SSH-driven sandboxes for debugging, and non-interactive batch experiment patterns. The skill includes tools for agents to report structured findings and for the parent process to monitor progress and read trajectories or summaries.
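The spawn-and-collect pattern above can be sketched with plain subprocess orchestration. This is a minimal, hedged illustration, not the skill's actual implementation: the `run_agent` and `orchestrate` helpers are hypothetical names, and the stand-in commands below merely print JSON where a real setup would invoke a Claude Code process or a remote sandbox per agent.

```python
import json
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_agent(cmd: list[str], timeout: int = 600) -> dict:
    """Run one agent as a subprocess and parse its JSON report from stdout."""
    proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    if proc.returncode != 0:
        return {"status": "error", "stderr": proc.stderr.strip()}
    # Convention (assumed): the agent's last stdout line is its structured report.
    return json.loads(proc.stdout.strip().splitlines()[-1])

def orchestrate(agent_cmds: list[list[str]], max_workers: int = 4) -> list[dict]:
    """Spawn agents in parallel and collect their structured findings in order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_agent, agent_cmds))

# Stand-in agents: each just emits a JSON finding. A real coordinator would
# substitute the command that launches a Claude Code agent or sandbox here.
cmds = [
    [sys.executable, "-c",
     f"import json; print(json.dumps({{'agent': {i}, 'status': 'ok'}}))"]
    for i in range(3)
]
results = orchestrate(cmds)
```

Using threads (not processes) for the coordinator is deliberate: each worker spends its time blocked on subprocess I/O, so the GIL is not a bottleneck.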
Use this skill when you need to run parallel experiments (hyperparameter sweeps, model comparisons), concurrent debugging sessions across GPUs, or distributed data-processing agents that must report structured results back to a coordinator. It is ideal for workloads where per-agent isolation (separate sandbox/VM/GPU) improves reliability or throughput.
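For sweep-style workloads, the coordinator needs a stable shape for the findings agents report back. A minimal sketch follows, assuming a hypothetical `Finding` schema and a lowest-is-best metric; the skill's actual report format may differ.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class Finding:
    """Hypothetical structured report one agent returns to the coordinator."""
    experiment_id: str
    status: str                      # "ok" | "error"
    metrics: dict = field(default_factory=dict)  # e.g. {"val_loss": 0.42}
    summary: str = ""

def best_by_metric(findings: list[Finding], metric: str) -> Finding:
    """Coordinator-side aggregation: pick the lowest-metric successful run."""
    ok = [f for f in findings if f.status == "ok" and metric in f.metrics]
    return min(ok, key=lambda f: f.metrics[metric])

findings = [
    Finding("lr-1e-3", "ok", {"val_loss": 0.42}),
    Finding("lr-3e-4", "ok", {"val_loss": 0.31}),
    Finding("lr-1e-2", "error"),
]
winner = best_by_metric(findings, "val_loss")
# Serialize the full set for the parent process / trajectory log.
report = json.dumps([asdict(f) for f in findings])
```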
See references/sub-agents.md for full orchestration guidance. This skill is primarily designed for Claude Code / Claude-based subprocess orchestration; the patterns apply to other LLM agent runtimes that support external process invocation and remote sandboxes (e.g., Sonnet, custom Claude CLI integrations).
This skill has not been reviewed by our automated audit pipeline yet.