
from claw-compactor
Six-layer token-compression toolkit for OpenClaw workspaces (rule engines, dictionary encoding, observation compression, RLE, protocol optimizations, plus Engram observational memory).
Claw Compactor provides a multi-layer pipeline to compress workspace content and reduce LLM token costs. It combines deterministic rule-based compression (dedup, dictionary encoding, RLE, tokenizer optimizations) with an optional Engram LLM-driven Observational Memory layer for semantic summarization and long-term reflections. Includes CLIs to run benchmarks, full pipelines, and targeted layer runs.
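To make the deterministic layers concrete, here is a minimal sketch of two of them (RLE and dictionary encoding) composed into a pipeline. The function names, the `§` key marker, and the length/frequency thresholds are illustrative assumptions, not the claw-compactor API.

```python
import re

def rle_layer(text: str) -> str:
    """Collapse runs of 4+ identical characters into char{count} form."""
    return re.sub(r"(.)\1{3,}",
                  lambda m: f"{m.group(1)}{{{len(m.group(0))}}}", text)

def dict_layer(text: str) -> str:
    """Replace long words repeated 3+ times with short keys; prepend the dictionary."""
    freq: dict[str, int] = {}
    for w in re.findall(r"\w{8,}", text):
        freq[w] = freq.get(w, 0) + 1
    table = {w: f"§{i}"
             for i, w in enumerate(sorted(w for w, n in freq.items() if n >= 3))}
    if not table:
        return text
    for w, key in table.items():
        text = text.replace(w, key)
    header = ";".join(f"{key}={w}" for w, key in table.items())
    return header + "\n" + text

def compress(text: str) -> str:
    """Run the deterministic layers in order."""
    return dict_layer(rle_layer(text))
```

In a real multi-layer design each layer would also carry a decoder so the pipeline stays lossless; this sketch shows only the encoding direction.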
Run it at session start to shrink injected system context, before expensive LLM calls to lower cost, or as a regular maintenance cron job to compress accumulated memory files. Use benchmark mode to estimate savings, and run the full pipeline only when the savings justify the writes. Useful for teams managing large agent workspace histories.
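The benchmark-then-write decision above can be sketched as a simple gate: estimate token savings, and only commit the compressed files when savings clear a threshold. The ~4-characters-per-token heuristic and the 5% default are assumptions for illustration, not claw-compactor defaults.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English prose."""
    return max(1, len(text) // 4)

def benchmark(original: str, compressed: str) -> float:
    """Fractional token savings of the compressed text vs the original."""
    before = estimate_tokens(original)
    after = estimate_tokens(compressed)
    return (before - after) / before

def should_write(original: str, compressed: str, min_savings: float = 0.05) -> bool:
    """Commit the compressed version only when savings justify the write."""
    return benchmark(original, compressed) >= min_savings
```

A cron job would call `benchmark` in dry-run mode first, then apply the pipeline only to files where `should_write` returns true.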
Intended for agents with workspace access and the ability to run Python/CLI tools (OpenClaw, CLI-capable automation agents). The Engram layer requires an LLM endpoint or an API key for an Anthropic- or OpenAI-compatible service.
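As a hedged sketch of how the Engram layer might talk to an OpenAI-compatible endpoint, the request below uses the standard `/v1/chat/completions` payload shape; the prompt, model name, and function names are assumptions, not Engram's actual interface.

```python
import json
import urllib.request

def build_summary_request(text: str, model: str = "gpt-4o-mini") -> dict:
    """Build a chat-completions payload asking for a compact summary."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Summarize these workspace notes as tersely as possible."},
            {"role": "user", "content": text},
        ],
    }

def summarize(text: str, base_url: str, api_key: str) -> str:
    """POST to an OpenAI-compatible /v1/chat/completions endpoint."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_summary_request(text)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the payload shape is the OpenAI-compatible standard, the same code would target Anthropic-compatible gateways that expose that interface.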
This skill has not been reviewed by our automated audit pipeline yet.