
deepswarm112
Orchestrate large-scale parallel AI workers for batch and multi-turn tasks, with auto-calculated worker counts, stagger delays, and tiered model delegation for cost efficiency.
DeepSwarm runs and manages N parallel worker processes to execute large batch or multi-turn API tasks. It auto-calibrates worker count, stagger delays, and batch sizing to maximize throughput while avoiding rate limits. Supports tiered delegation where an orchestrator (frontier model) plans and cheaper workers execute at scale.
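The staggered fan-out described above can be sketched in plain Python. This is not the DeepSwarm API; `run_worker`, `run_swarm`, and the per-item transform are hypothetical stand-ins showing the pattern: split work into per-worker batches, offset each worker's start by a stagger delay to avoid a rate-limit burst, and collect results as workers finish.

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_worker(worker_id, batch, stagger_s):
    """Hypothetical worker: waits out its stagger offset, then processes a batch."""
    time.sleep(worker_id * stagger_s)  # stagger starts so workers don't burst at once
    return [item.upper() for item in batch]  # stand-in for a long-running API call per item

def run_swarm(items, n_workers=4, stagger_s=0.1):
    # Round-robin split: one batch per worker
    batches = [items[i::n_workers] for i in range(n_workers)]
    results = []
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(run_worker, i, b, stagger_s)
                   for i, b in enumerate(batches)]
        for f in as_completed(futures):  # gather in completion order
            results.extend(f.result())
    return results

print(sorted(run_swarm(["a", "b", "c", "d", "e"])))  # → ['A', 'B', 'C', 'D', 'E']
```

In a real deployment the stagger delay and worker count would be derived from observed call latency and provider rate limits rather than hard-coded.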
Use for long-running generation, translation, summarization, or classification pipelines that benefit from parallelization (call durations >10s), or when you need crash-resilient checkpointing and high throughput across many seeds. Not intended for short synchronous calls or tightly coordinated inter-worker tasks.
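Crash-resilient checkpointing for this kind of pipeline usually means persisting completed task IDs after each unit of work, so a restarted run skips what already finished. A minimal sketch, assuming a hypothetical JSON checkpoint file (`CKPT`) and task-ID list; DeepSwarm's actual checkpoint format is not documented here:

```python
import json
import os

CKPT = "swarm_checkpoint.json"  # hypothetical checkpoint path

def load_done():
    """Return the set of task IDs completed before a crash, if any."""
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return set(json.load(f))
    return set()

def save_done(done):
    with open(CKPT, "w") as f:
        json.dump(sorted(done), f)

def process(task_ids):
    done = load_done()          # resume: skip anything already finished
    for tid in task_ids:
        if tid in done:
            continue
        # ... long-running API call for tid would go here ...
        done.add(tid)
        save_done(done)         # persist after every completed task
    return done

print(sorted(process(["t1", "t2", "t3"])))
```

Writing the checkpoint after every task trades some I/O overhead for the ability to kill and restart the run at any point without redoing completed work.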
Useful for agents that can schedule and monitor long-running background jobs, pipelines, or developer-run CLI workflows (Hermes-style orchestrators, tmux-agent orchestrators, and batch-processing tools).
This skill has not been reviewed by our automated audit pipeline yet.