As we discussed in Part 1 of this series, nearly every company now has one or more LLM (Large Language Model) deployments underway. AI agents can autonomously run multi-turn queries and actions to work through a problem iteratively. But the big question remains: how should you start using them?
In this second part of our series on onboarding clients, we share how we've refined our onboarding process for faster time to value, and the significant improvement we've seen in alignment with customer goals when agentic AI is onboarded correctly.
Relvy is our AI Agent for automating software troubleshooting. We augment software engineering teams with AI that’s capable of querying and analyzing logs, metrics, traces, code, documentation, runbooks and events in response to an alert.
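To make that concrete, here is a minimal, hypothetical sketch (in Python) of what an alert-driven, multi-turn investigation loop can look like. The names used here (`DataSource`, `investigate`, `decide_next_step`) are illustrative assumptions, not Relvy's actual interfaces.

```python
# A minimal sketch of an alert-driven, multi-turn investigation loop.
# All names (DataSource, Investigation, decide_next_step) are hypothetical.
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Callable, Protocol


class DataSource(Protocol):
    """Anything the agent can query: logs, metrics, traces, code, runbooks, events."""
    name: str

    def query(self, question: str) -> str: ...


@dataclass
class Investigation:
    alert: str
    evidence: list[str] = field(default_factory=list)


def investigate(
    alert: str,
    sources: dict[str, DataSource],
    decide_next_step: Callable[[Investigation], tuple[str, str] | None],
    max_turns: int = 10,
) -> Investigation:
    """Run a multi-turn loop: a planner (typically LLM-backed) picks the next
    data source and question until it decides the evidence is sufficient."""
    state = Investigation(alert=alert)
    for _ in range(max_turns):
        step = decide_next_step(state)  # e.g. ("logs", "errors for checkout-svc in last 15m")
        if step is None:  # planner is satisfied with the evidence gathered
            break
        source_name, question = step
        answer = sources[source_name].query(question)
        state.evidence.append(f"[{source_name}] {answer}")
    return state
```

The point of the sketch is the shape of the loop: each turn, the agent chooses a data source, asks a focused question, and folds the answer back into the investigation state before deciding what to do next.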
How, then, does Relvy know which debugging steps make sense in complex, unfamiliar customer environments?
Relvy learns from engineer feedback during an initial, iterative setup and onboarding period that can last anywhere from a few hours to a few days.
During this time, Relvy works through real alerts alongside the team, and engineers review and correct the debugging steps it proposes.
At the end of this process, Relvy is equipped with a set of workflows that form the basis of its agentic troubleshooting for future alerts at that organization. Of course, Relvy comes up with new workflows when the existing set isn't sufficient, and engineers are always encouraged to review and update them. This is how Relvy gets better over time.
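For illustration, here is a hedged sketch of how such learned workflows could be stored, matched against incoming alerts, and extended when the existing set falls short. The classes and fields (`Workflow`, `WorkflowLibrary`, `alert_pattern`) are hypothetical and not Relvy's internal schema.

```python
# A hypothetical sketch of a library of learned troubleshooting workflows.
# Names and fields are illustrative, not Relvy's internal schema.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Workflow:
    """A reviewed sequence of debugging steps for a class of alerts."""
    alert_pattern: str               # e.g. "HighErrorRate:checkout-svc"
    steps: list[str]                 # ordered investigation steps
    approved_by: str | None = None   # engineer who last reviewed or updated it


@dataclass
class WorkflowLibrary:
    workflows: list[Workflow] = field(default_factory=list)

    def match(self, alert: str) -> Workflow | None:
        """Return an existing workflow that applies to this alert, if any."""
        return next((w for w in self.workflows if w.alert_pattern in alert), None)

    def add_from_feedback(self, alert: str, steps: list[str], reviewer: str) -> Workflow:
        """When no workflow fits, capture the steps that worked and record the review."""
        wf = Workflow(alert_pattern=alert, steps=steps, approved_by=reviewer)
        self.workflows.append(wf)
        return wf
```

Under this framing, a matched workflow seeds the investigation for a known class of alerts, while unmatched alerts trigger a fresh investigation whose successful steps are captured and sent to engineers for review.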
About Relvy
Our cost-effective, custom-tuned language models operate at 1/200th the cost of existing foundation models, making 24/7 agentic AI monitoring and debugging a reality. Get started instantly and see how Relvy can drastically reduce debugging time and cost, transforming your engineering processes today.