vLLora
by vllora
Real-time debugging and observability gateway for AI agents and LLM interactions.
What it does
vLLora acts as a lightweight debugging gateway for AI agents. It intercepts LLM calls to provide real-time tracing, analysis, and optimization of agent workflows, making it easier to diagnose where agents fail or hallucinate.
Tools
- trace_interaction: Captures and analyzes the flow of a specific LLM request/response pair.
- mcp_connect: Integrates and manages connections to external MCP servers via HTTP/SSE.
- analyze_workflow: Provides observability into multi-step agentic tool use.
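MCP tools like the ones above are invoked through JSON-RPC 2.0 "tools/call" requests. As a minimal sketch, the snippet below builds such a request for trace_interaction; the argument name (request_id) is an illustrative assumption, not vLLora's documented schema.

```python
import json

def build_tool_call(tool_name, arguments, request_id=1):
    # Standard MCP "tools/call" envelope (JSON-RPC 2.0).
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical invocation: the "request_id" argument is assumed for illustration.
req = build_tool_call("trace_interaction", {"request_id": "abc-123"})
print(json.dumps(req, indent=2))
```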
Installation
Install via Homebrew:
brew tap vllora/vllora
brew install vllora
Then run vllora to start the gateway on port 9090.
Supported hosts
vLLora exposes OpenAI-compatible API endpoints, so it works with any client that can target a custom base URL, including AI agent frameworks and custom implementations.
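In practice, pointing a client at the gateway means swapping the base URL. The sketch below builds a standard OpenAI-style chat completions request against a local vLLora instance; the /v1 path prefix and the model name are assumptions, so adjust them to match your deployment and upstream provider.

```python
import json
import urllib.request

# Assumed gateway address: default port 9090 with a /v1 path prefix.
BASE_URL = "http://localhost:9090/v1"

# Standard OpenAI-compatible chat completions payload; the model name is
# illustrative and is forwarded to the upstream provider.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello from vLLora"}],
}

request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(request)  # uncomment once the gateway is running
```

Any framework that accepts a custom base URL (for example, one that reads OPENAI_BASE_URL) can be routed through the gateway the same way.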
Quick install
brew install vllora
Information
- Pricing: free
- Published: 4/15/2026