
Configure and use TrueFoundry's OpenAI-compatible AI Gateway to route, rate-limit, and observe LLM calls across providers.
This skill documents how to configure and call TrueFoundry's AI Gateway — a single OpenAI-compatible endpoint that routes requests to many LLM providers (OpenAI, Anthropic, Azure, self-hosted vLLM, etc.), enforces rate limits and budgets, and provides observability. It includes language snippets (Python, Node, curl) and YAML examples for routing, load-balancing, rate limits, and budget controls.
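Because the gateway speaks the OpenAI wire format, calling it is just an OpenAI-style `POST /chat/completions` with the gateway's base URL and token; in practice you would point the official OpenAI SDK's `base_url` at the gateway. A minimal stdlib-only sketch of the request shape — the base URL, token, and model id below are placeholder assumptions, not real values:

```python
import json

# Placeholder values -- substitute your gateway URL, token, and model id.
GATEWAY_BASE_URL = "https://your-org.truefoundry.cloud/api/llm"  # assumption
API_KEY = "tfy-xxxx"  # gateway access token (placeholder)

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible /chat/completions payload.

    The same payload works whether the model id resolves to OpenAI,
    Anthropic, Azure, or a self-hosted vLLM deployment, because the
    gateway routes on the model id and normalizes provider differences.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
payload = build_chat_request("openai-main/gpt-4o-mini", "Say hello")
print(json.dumps(payload))
```

With the `openai` Python SDK, the equivalent is `OpenAI(base_url=GATEWAY_BASE_URL, api_key=API_KEY)` followed by a normal `chat.completions.create(...)` call.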
Use this skill when you want a unified API to call multiple LLM providers, implement rate limiting or cost budgets, add failover/load-balancing between providers, or attach self-hosted models to a central gateway. Also useful when integrating the gateway with LangChain, LlamaIndex, or CI/CD flows that apply gateway config.
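Rate limits and budgets of the kind described above are typically declared in a YAML config applied to the gateway. The fragment below is an illustrative sketch only — the key names and structure are assumptions for exposition, not TrueFoundry's exact schema; consult the gateway docs for the real field names:

```yaml
# Illustrative sketch, not the gateway's actual config schema.
name: demo-rate-limit          # assumed field
type: gateway-rate-limiting    # assumed field
rules:
  - id: per-user-requests
    when:
      subjects: ["user:alice"]             # who the rule applies to
      models: ["openai-main/gpt-4o-mini"]  # gateway model id
    limit: 100
    unit: requests_per_minute
```

A budget or failover policy would follow the same pattern: a rule matching a subject/model pair plus a limit or an ordered list of fallback targets.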
Typical callers: coding agents and automation that use OpenAI-compatible SDKs (Copilot/Codex-style integrations, LangChain runners, CI automation).
This skill has not been reviewed by our automated audit pipeline yet.