Show HN: Helicone (YC W23) – OSS LLM Observability and Development Platform

Hey HN, we're Justin and Cole, the founders of Helicone (https://helicone.ai). Helicone is an open-source platform that helps teams build better LLM applications through a complete development lifecycle of logging, evaluation, experimentation, and release. You can try our free demo by signing up (https://ift.tt/9siQPXG) or self-deploy with our new fully open-source Helm chart (https://ift.tt/Ra6ICBJ).

When we first launched 22 months ago, we focused on providing visibility into LLM applications. With just a single line of code, teams could trace requests and responses, track token usage, and debug production issues (there's a rough sketch of that integration below). That simple integration has since processed over 2.1B requests and 2.6T tokens, working with teams ranging from startups to Fortune 500 companies.

However, as we scaled and our customers matured, it became clear that logging alone wasn't enough to manage production-grade applications. Teams like Cursor and V0 have shown what peak AI application performance looks like, and it's our goal to help teams achieve that quality. From speaking with users, we realized our platform was missing the tools needed for an iterative improvement loop: prompt management, evaluations, and experimentation.

Helicone V1: Log → Review → Release (Hope it works)

From talking with our users, we noticed a pattern: while many teams launch their MVP quickly, the ones that reach peak performance take a systematic approach to improvement. They identify inconsistent behaviors through evaluation, experiment methodically with prompts, and measure the impact of each change. This observation shaped our new workflow:

Helicone V2: Log → Evaluate → Experiment → Review → Release

It begins with comprehensive logging that captures the entire context of an LLM application: not just prompts and responses, but variables, chain steps, embeddings, tool calls, and vector DB interactions (https://ift.tt/3zaRfcr).

Yet even with detailed traces, probabilistic systems are notoriously hard to debug at scale. So we released evaluators, either LLM-as-judge or custom Python evaluators leveraging the CodeSandbox SDK (https://ift.tt/CLNV6zG). With evaluators, our users can more easily monitor performance and investigate what went wrong: did the embedding search return poor results? Did a tool call fail? Did the prompt mishandle an edge case? (A sketch of the LLM-as-judge idea is below.)

But teams would still edit prompts in a playground, run a few test cases, and deploy based on intuition. That lacks the systematic testing we're used to in traditional software development. That's why we built experiments, similar to Anthropic's Workbench but model-agnostic (https://ift.tt/9BdeT4y). For instance, when a prompt occasionally generates rude support responses, you can test prompt variations against historical conversations, and each variant runs through your production evaluators so you measure real improvement before deployment (also sketched below). Once deployed, the cycle begins again.

We recognize that Helicone can't solve every problem you'll face when building an LLM application, but we hope our new workflow helps you bring a better product to your customers.
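To make the above concrete, here are a few rough sketches. First, the "single line of code": Helicone sits in front of your provider as a proxy, so integration is roughly a matter of pointing your client at our gateway and adding an auth header. This sketch assumes the OpenAI Python SDK; treat the exact base URL and header as illustrative and check the docs for your provider.

```python
# Minimal sketch of the proxy-style integration (illustrative values;
# check the docs for the exact base URL and headers for your provider).
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    # The "one line": route traffic through Helicone's gateway so every
    # request/response pair is logged.
    base_url="https://oai.helicone.ai/v1",
    default_headers={"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"},
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```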
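Next, the evaluator idea. This isn't our evaluator API, just a generic LLM-as-judge sketch showing the shape of the technique: ask a model to grade a logged response against a rubric and return a numeric score.

```python
# Generic LLM-as-judge sketch (not Helicone's evaluator API): grade a logged
# support reply against a rubric and return a score in [1, 5].
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

RUBRIC = "Rate the support reply for politeness and helpfulness, 1 (worst) to 5 (best)."

def judge(user_message: str, assistant_reply: str) -> int:
    result = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": f'{RUBRIC} Respond as JSON: {{"score": <int>}}'},
            {"role": "user", "content": f"User: {user_message}\nReply: {assistant_reply}"},
        ],
    )
    return int(json.loads(result.choices[0].message.content)["score"])
```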
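And the experiment loop, in spirit: replay prompt variants over historical inputs and score every output with the same evaluator you run in production. The helper names here are hypothetical; this is the shape of the loop, not our implementation.

```python
# Sketch of the experiment loop (hypothetical helpers, not Helicone's API):
# replay prompt variants over historical inputs and score each output with the
# production evaluator, so a change is measured before it ships.
from statistics import mean

def run_experiment(prompt_variants, historical_inputs, generate, evaluate):
    """generate(prompt, user_input) -> reply; evaluate(user_input, reply) -> score."""
    results = {}
    for name, prompt in prompt_variants.items():
        scores = [evaluate(x, generate(prompt, x)) for x in historical_inputs]
        results[name] = mean(scores)
    return results

# Example usage (all names hypothetical except `judge` from the sketch above):
# scores = run_experiment(
#     {"current": CURRENT_PROMPT, "candidate": POLITER_PROMPT},
#     historical_inputs=load_logged_user_messages(),
#     generate=call_model,
#     evaluate=judge,
# )
```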
If you're curious how our infrastructure handled our growth: our initial architecture struggled - synchronous log processing overwhelmed our database, and query times went from milliseconds to minutes. We've completely rebuilt it with two key changes: 1) using Kafka to decouple log ingestion from processing, and 2) splitting storage by access pattern across S3, Kafka, and ClickHouse. It was a long journey, but the result is zero data loss and fast query times even at billions of records. You can read about that here: https://ift.tt/15elAwP...
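If you want a feel for that ingestion pattern, here's a rough sketch (illustrative only, not our production code): the request path just publishes an event to Kafka, and a separate worker batches metadata into ClickHouse while large bodies are offloaded to S3. The library choices (kafka-python, clickhouse-connect, boto3) and the topic/table/bucket names are assumptions made for the sketch.

```python
# Sketch of decoupled log ingestion (illustrative, not our production code).
# Assumed libraries: kafka-python, clickhouse-connect, boto3.
import json
import uuid

import boto3
import clickhouse_connect
from kafka import KafkaConsumer, KafkaProducer

TOPIC = "request-logs"  # assumed topic name

# --- Hot path: the proxy only enqueues and returns ---------------------------
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)

def log_request(metadata: dict, body: str) -> None:
    # No synchronous DB write on the request path; just publish the event.
    producer.send(TOPIC, {"id": str(uuid.uuid4()), "meta": metadata, "body": body})

# --- Background worker: batch into ClickHouse, offload bodies to S3 ----------
s3 = boto3.client("s3")
ch = clickhouse_connect.get_client(host="localhost")

def consume(batch_size: int = 500) -> None:
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers="localhost:9092",
        group_id="log-workers",
        value_deserializer=lambda m: json.loads(m.decode()),
    )
    rows = []
    for msg in consumer:
        event = msg.value
        # Large payloads go to object storage; only metadata is queried often.
        s3.put_object(Bucket="request-bodies", Key=event["id"], Body=event["body"])
        rows.append([event["id"], json.dumps(event["meta"])])
        if len(rows) >= batch_size:
            ch.insert("request_logs", rows, column_names=["id", "meta"])
            rows.clear()
```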
We'd love your feedback and questions - join us in this HN thread or on Discord (https://ift.tt/VLDChdP). If you're interested in contributing to what we build next, check out our GitHub: https://ift.tt/GjOH4CI