In the rapidly evolving landscape of enterprise technology, the traditional product roadmap is being replaced by a more fluid, real-time strategy. Vijay Raina, a seasoned specialist in enterprise SaaS and software architecture, joins us to discuss how the industry is pivoting toward “crowdsourcing” innovation. With a deep background in designing scalable systems, Vijay provides a unique perspective on the shift from rigid quarterly releases to the dynamic, agentic AI models that are currently reshaping the corporate world. He explores the move toward thematic development, the importance of “last-mile” technology, and the delicate balance of maintaining system stability while iterating at a dizzying pace.
The following discussion explores the transition from structured product timelines to a bottom-up innovation strategy, the technical challenges of building an agentic operating system, and the methodology of using a global customer base as a real-time research and development lab.
Large enterprises are shifting from quarterly reviews to weekly feedback loops to keep pace with AI development. How do you manage the technical overhead of such frequent releases, and what specific metrics do you use to ensure these rapid updates do not compromise system stability?
Managing the technical overhead for 18,000 customers requires a complete departure from the slow, traditional release cycles of the past. We have moved away from waiting six months for feedback and instead have implemented various “gates” that let us trial new features with smaller groups before a broad release. This week-by-week rhythm creates a high-pressure environment where code must be pushed fast, yet stability is maintained through rigorous automated checks and observability tooling. It is a visceral shift for engineering teams, who now see their work influence the real world in days rather than quarters. The emotional reward of seeing a customer’s problem solved in a single week is what keeps the momentum going despite the exhausting pace of development.
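The gating approach described above is commonly implemented as deterministic percentage-based rollout rings. The sketch below is illustrative only; the feature names, percentages, and `in_rollout` helper are assumptions, not details from the interview.

```python
import hashlib

# Hypothetical rollout gates: each feature is exposed to a widening
# ring of customers before broad release (names and percentages are
# illustrative, not the vendor's actual configuration).
GATES = {
    "voice_agent_v2": 0.05,  # 5% of customers in the first ring
    "agent_context": 0.50,   # widened to 50% once early feedback looks stable
}

def in_rollout(feature: str, customer_id: str) -> bool:
    """Deterministically bucket a customer into a feature's rollout ring.

    Hashing keeps the assignment stable week over week, so the same
    customers remain in the test group as the percentage widens.
    """
    pct = GATES.get(feature, 0.0)
    digest = hashlib.sha256(f"{feature}:{customer_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return bucket < pct
```

Because the bucket is derived from a hash rather than a random draw, weekly releases can widen a gate without churning which customers see the feature, which keeps feedback comparable from one week to the next.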
Many companies are abandoning fixed product timelines in favor of thematic goals like observability and deterministic controls. How do you decide which customer-requested features become universal platform components, and what are the steps for identifying a niche workflow that has broad market potential?
We focus on overarching themes like agent context and deterministic controls to guide our engineering efforts rather than a list of static features. When a specific client, such as a federal credit union, develops a successful IT service management workflow using our tools, we analyze if that success can be replicated across the broader platform. We take those real-world problems, classify them, and determine which parts can be handled by the large language model and which require specialized agentic operating system components. This bottom-up strategy assumes that if one enterprise finds a way to slim down their tech stack using a specific workflow, others will eventually face the same need. It turns our customer base into a wellspring of information that directly dictates where we invest our long-running innovation resources.
Enterprises often struggle with the “last-mile” technology needed to make large language models functional for specific business tasks. What are the primary hurdles when building an agentic operating system around these models, and how do you ensure the AI interactions feel natural to the end user?
The primary hurdle is that while large language models are incredibly powerful, they lack the specific business context and “last-mile” connectivity to be truly useful out of the box. Building an agentic operating system means creating the infrastructure that allows these models to perform fully autonomous behaviors within a secure enterprise environment. We rely heavily on real-world testing to refine how these tools feel to use; for instance, if a customer reports that a voice agent sounds unnatural during a hotel booking, we iterate on that interaction immediately. Seeing the A/B test results improve after a quick tweak to the agent’s conversational flow proves that the “last-mile” is as much about human feeling as it is about technical logic. We are constantly bridging the gap between the raw power of the LLM and the nuanced, deterministic needs of a professional user.
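One common pattern for the deterministic controls mentioned above is a policy layer between the model and the enterprise system: the LLM proposes a structured action, and a rule-based gate decides whether it runs. The action names, policy table, and `execute` helper below are hypothetical, sketched under the assumption that agent output arrives as a structured dictionary.

```python
# Hypothetical policy table for LLM-proposed actions. The agent's
# free-form reasoning never touches the enterprise system directly;
# only validated, structured actions do.
ALLOWED_ACTIONS = {
    "book_room": {"max_nights": 14},
    "send_email": {"max_recipients": 10},
}

def execute(proposed: dict) -> str:
    """Gate an LLM-proposed action through deterministic checks."""
    action = proposed.get("action")
    policy = ALLOWED_ACTIONS.get(action)
    if policy is None:
        return "rejected: unknown action"
    if action == "book_room" and proposed.get("nights", 0) > policy["max_nights"]:
        return "rejected: exceeds booking limit"
    if action == "send_email" and len(proposed.get("to", [])) > policy["max_recipients"]:
        return "rejected: too many recipients"
    return f"executed: {action}"
```

This is one reading of “deterministic needs”: however fluent the model is, the boundary between suggestion and execution stays auditable and rule-governed.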
Using internal employees as the primary testing ground is a common strategy for refining AI tools before a public launch. How does your internal feedback loop influence the product’s development, and what trade-offs occur when your internal needs differ significantly from those of your external enterprise clients?
Our internal employees are our most demanding and prolific users, acting as a critical front line for every new AI tool we develop. When the AI boom truly ignited with the release of ChatGPT, we didn’t just watch from the sidelines; we aggressively moved teams and resources to create a dedicated internal AI unit. This internal “dogfooding” allows us to feel the friction of a tool before it ever touches a customer’s environment, ensuring we catch bugs that might otherwise slip through. Sometimes our internal needs are more advanced than those of our clients, which can create a gap, but we use that as a predictive signal for where the market will be in six months. It is an intense, high-stakes way to work, but it ensures that we are never blindsided by the rapid advancements in technology that occur almost monthly.
Relying on a “customer-led” roadmap assumes that users already know how AI should function within their business. What are the risks of following customer feedback too closely when technology is evolving this fast, and how do you balance immediate user requests with long-term strategic innovation?
The most significant risk is that many enterprises are still desperately trying to figure out what role AI should play and have yet to find tangible value in the technology. If we only follow immediate requests, we might build a series of short-term fixes rather than a cohesive, long-term platform. We balance this by maintaining a “long-running innovation track” that focuses on fully autonomous behaviors, even if customers aren’t asking for them yet. It is a delicate dance—we have to react weekly to fix a broken voice interaction, but we also have to be the architects who envision the “agentic” future that hasn’t been built. We provide the platform for their current success while simultaneously engineering the tools they will need a year from now, even if they don’t know it yet.
What is your forecast for agentic AI?
I believe we are moving toward a world where the “agentic operating system” becomes the central nervous system of the enterprise, moving far beyond simple chatbots to fully autonomous digital coworkers. Currently, we are seeing the terminology shift in real-time—agents weren’t even a major part of the conversation a year and a half ago, and now they dominate every roadmap. My forecast is that as LLMs improve and context becomes deeper, these systems will handle increasingly complex, multi-step business processes without any human intervention. The companies that succeed will be those that have built the “last-mile” infrastructure to support these agents, allowing them to act with the same deterministic precision as a human expert. We are just at the beginning of seeing how these autonomous behaviors will redefine productivity on a global scale.
