AI Agents in Production Are Flying Blind — AgentLens Fixes That
Many AI agents in production lack proper observability, making it difficult to trace errors, costs, or performance issues across multiple LLM calls. Existing tools like LangSmith and Helicone offer limited support for non-LangChain or custom agents. AgentLens addresses this gap with an open-source platform that provides full visibility into agent runs without requiring code changes. It supports proxy-based tracing, SDK integrations, and self-hosting for comprehensive monitoring.
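Proxy-based tracing generally works by pointing the agent's LLM client at an intercepting endpoint instead of the provider, so every request and response is logged in transit and the agent code itself stays untouched. A minimal sketch of the idea in Python, using only the standard library; the upstream provider call is stubbed so the example is self-contained, and none of the names here are the real AgentLens API:

```python
# Sketch of proxy-based LLM tracing (illustrative, not the AgentLens API).
# The agent sets its client's base URL to this local endpoint; each request
# is recorded (timestamp, path, payload size) before a response is returned.
# A real proxy would forward the request upstream; here it is stubbed.
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

TRACE_LOG = []  # in a real system, entries would ship to a collector


class TracingProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # Record the call before "forwarding" it upstream.
        TRACE_LOG.append({
            "ts": time.time(),
            "path": self.path,
            "request_bytes": len(body),
        })
        # Stubbed upstream provider response.
        payload = json.dumps(
            {"choices": [{"message": {"content": "ok"}}]}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # silence the default per-request stderr logging


def start_proxy(port: int) -> HTTPServer:
    """Start the tracing proxy on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), TracingProxyHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

With this running, the agent only needs its client's base URL changed (for example, `http://127.0.0.1:<port>/v1`); no per-call instrumentation is added to the agent code, which is the "no code changes" property the article attributes to AgentLens.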
Farzan Hossan Shaikat · Posted on Apr 29

#agents #ai #llm #monitoring

The Visibility Problem

Running an AI agent in production means dealing with a problem most developers hit quickly. The agent makes 15–20 LLM calls per session — chained, conditional, sometimes parallel. Something goes wrong. The output is bad, the cost spiked, or the agent looped.
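What makes a session of 15–20 chained calls debuggable is per-call trace data: which step ran, how long it took, what it cost, and whether it failed. A minimal sketch of that idea as a decorator, with hypothetical step names and a stubbed LLM response; this illustrates the kind of data an observability layer captures, not AgentLens's actual implementation:

```python
# Illustrative per-call tracing (hypothetical names, not the AgentLens API):
# a decorator records latency, token usage, and errors for each LLM call,
# so a long chain can be inspected step by step after the fact.
import functools
import time

session_trace = []  # one entry per LLM call, in call order


def traced(step_name):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            entry = {"step": step_name, "start": time.time()}
            try:
                result = fn(*args, **kwargs)
                # Assumes an OpenAI-style response dict with a "usage" field.
                entry["tokens"] = result.get("usage", {}).get("total_tokens", 0)
                entry["error"] = None
                return result
            except Exception as exc:
                entry["error"] = repr(exc)
                raise
            finally:
                entry["latency_s"] = time.time() - entry.pop("start")
                session_trace.append(entry)
        return wrapper
    return decorator


# Two hypothetical agent steps with stubbed LLM calls.
@traced("plan")
def plan(goal):
    return {"usage": {"total_tokens": 120}, "text": f"plan for {goal}"}


@traced("execute")
def execute(plan_text):
    return {"usage": {"total_tokens": 450}, "text": "done"}
```

After a run, `session_trace` answers the questions above directly: a cost spike shows up as an outlier in `tokens`, a failure as a non-null `error`, and a loop as the same step name repeating.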
…
Excerpt limited to ~120 words for fair-use compliance. The full article is at DEV.to.