
Source: openclaw-nim-skill12
Call NVIDIA NIM-hosted LLMs from OpenClaw to offload heavy model work and conserve main-agent tokens.
This skill integrates OpenClaw with NVIDIA's NIM platform, letting the agent call external models (GLM-5, Kimi, Llama 3.1, etc.) with an NVIDIA API (nvapi) key. It wraps the usage patterns for listing available models and invoking them with a prompt, so agents can delegate specific tasks to specialty models while preserving the main agent's token budget.
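For orientation, here is a minimal sketch of what an invocation through NIM can look like, assuming the skill targets NVIDIA's OpenAI-compatible API at integrate.api.nvidia.com and reads the key from an NVIDIA_API_KEY environment variable. The helper name, model id, and parameter values are illustrative; the bundled script may differ.

```python
# Minimal sketch: call a NIM-hosted model through NVIDIA's
# OpenAI-compatible chat completions endpoint.
import os
import requests

# Assumed base URL for NVIDIA's hosted NIM API catalog.
NIM_BASE_URL = "https://integrate.api.nvidia.com/v1"

def nim_chat(model: str, prompt: str) -> str:
    """Send a single-prompt chat completion request to a NIM model."""
    resp = requests.post(
        f"{NIM_BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}"},
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 512,  # illustrative cap, not a skill default
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # "meta/llama-3.1-8b-instruct" is an illustrative model id, not
    # necessarily one of the aliases this skill exposes.
    print(nim_chat("meta/llama-3.1-8b-instruct", "Say hello in one sentence."))
```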
Use it when you need to run inference on models hosted on NVIDIA NIM: tasks that require specific model capabilities (e.g., code generation, multilingual reasoning, large-context jobs), or when you want to shift load away from the primary agent. It suits batch inference, ad-hoc model calls, and testing different model aliases.
Usage: python3 scripts/nim_call.py list to enumerate available models, and python3 scripts/nim_call.py <model_alias> "<prompt>" to invoke one.

Works with agents that can run Python helper scripts or shell commands (OpenClaw main, CLI agents). Likely compatible with any agent that can execute subprocesses or call external HTTP-based model APIs.
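As a rough sketch of how the list subcommand could work, the snippet below queries the OpenAI-compatible /v1/models endpoint and prints model ids. The base URL and NVIDIA_API_KEY variable are the same assumptions as above; the actual script may resolve its own alias table instead.

```python
# Sketch of a "list" subcommand: fetch the model catalog from the
# OpenAI-compatible /v1/models endpoint and print each model id.
import os
import requests

resp = requests.get(
    "https://integrate.api.nvidia.com/v1/models",
    headers={"Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}"},
    timeout=30,
)
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])
```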
This skill has not been reviewed by our automated audit pipeline yet.