The recent market turbulence that saw nearly US$1 trillion in SaaS market value vanish in a matter of days has sparked a fierce debate over the future of enterprise software. To help navigate this shifting landscape, we are joined by Vijay Raina, a seasoned specialist in enterprise SaaS technology and software architecture. Vijay brings a wealth of experience in understanding how high-level software design interacts with real-world business constraints, making him the perfect guide to decode whether artificial intelligence is a true existential threat or a catalyst for the next generation of software evolution.
Our conversation explores the structural friction that prevents companies from simply “prompting” their way to a full software suite, the critical role of risk transfer in corporate procurement, and the enduring power of proprietary data moats. We also examine the painful but necessary transformation of SaaS business models as they move away from traditional per-user pricing toward a more rigorous, value-based approach.
Writing code is often only a small fraction of the total effort required to maintain, scale, and secure production-grade software. How do firms balance the speed of AI prototyping with the necessity for long-term reliability and security compliance? What specific metrics determine when a project should stay in-house?
The excitement around “vibe coding” and instantaneous prototyping is palpable, but it overlooks the gritty reality that writing code is merely 10% of the total workload. The remaining 90% is a relentless cycle of maintaining, scaling, and debugging for unpredictable edge cases. Production-grade software is a living organism that demands 24/7 reliability and absolute API stability, qualities that inherently non-deterministic LLM systems struggle to deliver consistently. A firm might be tempted by the speed of a tool like Anthropic’s Claude, but it must measure the project’s success by its auditability and security compliance rather than just its deployment speed. If a project requires a level of self-certification and liability insurance that exceeds the cost of a specialized subscription, it almost always makes more sense to outsource it to a specialist who can spread those high-level security costs across thousands of customers.
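To make that build-versus-buy threshold concrete, here is a back-of-envelope sketch in Python. Every figure is a hypothetical placeholder; the only ratio taken from Vijay’s answer is the 90/10 split between writing code and everything that follows it.

```python
# Illustrative build-versus-buy arithmetic. All figures below are
# hypothetical placeholders, not numbers from the conversation.

def in_house_annual_cost(build_cost: float, years: int,
                         maintenance_ratio: float = 9.0,
                         compliance_overhead: float = 150_000.0) -> float:
    """Annualized cost of building and running a tool in-house.

    maintenance_ratio encodes the 90/10 split: for every unit of effort
    spent writing code, roughly nine more go to maintaining, scaling,
    and debugging it. compliance_overhead is the assumed yearly price
    of self-certification and liability insurance.
    """
    lifetime_engineering = build_cost * (1 + maintenance_ratio)
    return lifetime_engineering / years + compliance_overhead

def saas_annual_cost(seats: int, monthly_price_per_seat: float) -> float:
    """Annual cost of the specialist subscription, liability included."""
    return seats * monthly_price_per_seat * 12

if __name__ == "__main__":
    build = in_house_annual_cost(build_cost=80_000, years=5)
    buy = saas_annual_cost(seats=200, monthly_price_per_seat=30)
    print(f"In-house: ${build:,.0f}/yr  vs  SaaS: ${buy:,.0f}/yr")
    print("Stay in-house" if build < buy else "Outsource to the specialist")
```

On these assumed numbers the subscription wins comfortably; the point is that the compliance overhead, not the initial build, dominates the in-house column.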
Organizations frequently use third-party software to transfer risks like GDPR indemnity and international security certifications. If a company builds its own systems using general-purpose AI, how do they manage the absence of a vendor to hold accountable? What are the practical steps for handling liability in this scenario?
When a Fortune 500 company signs a contract with a SaaS provider, it isn’t just paying for a functional tool; it is purchasing a shield of GDPR indemnity and ISO certifications. If that company bypasses the vendor to build an in-house system using a general-purpose model like Google’s Gemini, it is essentially choosing to stand alone in the line of fire. In the event of a catastrophic data leak, there is no vendor to sue, no platform to hold accountable, and no third-party security patch to wait for. Practically speaking, this means the firm’s internal legal and IT departments must take on the massive overhead of becoming their own compliance officers. For most, the “efficiency” gained by using AI is quickly devoured by the sheer weight of carrying that total liability and the insurance premiums required to cover potential failures.
Fragmented software environments often lead to isolated legacy systems that are difficult for future teams to understand or audit. How can companies ensure interoperability when building bespoke AI tools? What are the long-term consequences for a business that finds itself unable to update its own AI-generated code?
The dream of bespoke in-house software often turns into a nightmare of fragmented legacy systems that no one truly understands. If every department starts spinning up its own custom AI-written tools, you end up with a digital Tower of Babel where systems cannot speak to one another, destroying interoperability. The long-term consequence is a state of “technical debt” where the code is a black box; if the original prompter leaves the company, the business is left with a static pile of logic that cannot be safely updated. This lack of transparency makes it nearly impossible to maintain the API stability required for modern business ecosystems. Ultimately, firms risk returning to a pre-SaaS era of isolated, clunky systems that hinder rather than help growth, making them less agile than competitors who stick with standardized, well-documented platforms.
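One concrete defence against that Tower of Babel is contract testing: pin every integration point to a shared, versioned schema and fail fast when a regenerated tool drifts. The sketch below assumes an invented invoice payload; the field names are purely illustrative.

```python
# A minimal contract-test sketch: pin every integration point to a
# shared, versioned schema and fail the build whenever a regenerated
# tool drifts from it. The invoice payload and field names are
# invented for illustration.

from jsonschema import ValidationError, validate  # pip install jsonschema

# The shared contract lives outside any single department's codebase.
INVOICE_SCHEMA = {
    "type": "object",
    "required": ["invoice_id", "amount_cents", "currency"],
    "properties": {
        "invoice_id": {"type": "string"},
        "amount_cents": {"type": "integer", "minimum": 0},
        "currency": {"type": "string", "pattern": "^[A-Z]{3}$"},
    },
}

def honours_contract(payload: dict) -> bool:
    """Return True if the AI-generated tool still emits valid payloads."""
    try:
        validate(instance=payload, schema=INVOICE_SCHEMA)
        return True
    except ValidationError as err:
        print(f"Contract broken: {err.message}")
        return False

# A regenerated tool silently renamed a field; the test catches it.
print(honours_contract({"invoice_id": "A-17", "amount_cents": 4200, "currency": "EUR"}))  # True
print(honours_contract({"invoice_id": "A-18", "amount": 4200, "currency": "EUR"}))        # False
```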
Generic AI models rely on public data, while high-value, structured information remains locked behind professional paywalls and proprietary moats. How do these data barriers protect specialized firms from being displaced by general-purpose technologies? What strategies allow companies to leverage proprietary data without compromising their intellectual property?
It is a fundamental mistake to assume that LLMs have access to the entirety of human intelligence; in reality, the most valuable, highly structured data is locked behind the moats of companies like LexisNexis, Thomson Reuters, and Nielsen. These specialized firms hold the information required for professional-grade decisions, which generic models simply cannot access or replicate without permission. This paywall barrier ensures that the owners of the data, not the owners of the AI models, maintain the ultimate leverage in the market. To exploit this advantage without compromising their intellectual property, companies must focus on building specialized “wrappers” or private environments where their proprietary data can be processed without ever leaking back into the public training sets of general models. By keeping their most organized and predictable information behind these moats, firms ensure that they aren’t just recycling public-domain noise but are instead generating deep, exclusive insights.
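As a rough illustration of that wrapper pattern, the sketch below keeps the proprietary corpus in a private store, retrieves locally, and sends only the matched snippets to the model. The store contents, the naive keyword scorer, and the `call_model` client are all hypothetical stand-ins for real components.

```python
# A sketch of the "wrapper" pattern under stated assumptions: the
# proprietary corpus never leaves a private store, retrieval happens
# locally, and only the few matched snippets reach the model. The
# store contents, keyword scorer, and call_model client are all
# hypothetical stand-ins.

from typing import Callable

PRIVATE_STORE = {
    "case-1042": "Internal precedent memo on cross-border licensing...",
    "case-2210": "Structured summary of a regulatory ruling...",
}

def retrieve(query: str, store: dict, top_k: int = 2) -> list[str]:
    """Toy keyword retrieval standing in for a private search index."""
    words = query.lower().split()
    scored = sorted(store.values(),
                    key=lambda doc: sum(w in doc.lower() for w in words),
                    reverse=True)
    return scored[:top_k]

def answer_with_moat(query: str, call_model: Callable[[str], str]) -> str:
    """Only retrieved snippets, never the whole corpus, cross the moat."""
    context = "\n".join(retrieve(query, PRIVATE_STORE))
    prompt = (f"Answer using only this licensed context:\n{context}\n\n"
              f"Question: {query}")
    return call_model(prompt)

# Dry run with a stub client that just reports what it was sent.
print(answer_with_moat("precedent for licensing",
                       call_model=lambda p: f"[model saw {len(p)} chars]"))
```

In practice the real client would also need to be configured, contractually and technically, so that API inputs are excluded from any provider-side training.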
The traditional model of per-user pricing and automatic annual increases is facing significant pressure as firms demand more measurable value. How should software providers restructure their business models to remain relevant? In what ways does a leaner, more automated workforce change the way software is developed and supported?
The era of the “cosy convention,” where SaaS firms could slap on a 5% annual price increase for features nobody asked for, is officially coming to an end. We are seeing major players like Block reduce their headcount by over 4,000 positions, while Atlassian recently retrenched 10% of its staff to lean into this new reality. To remain relevant, software providers must pivot toward business models that demonstrate undeniable, measurable value rather than just counting seats. This means being prepared for a customer base that can now wield a credible threat to build in-house, even if they don’t always follow through. A leaner, more automated workforce allows these SaaS companies to focus their remaining talent on high-level architecture and security, ensuring the software they sell is vastly superior to anything a general-purpose AI could whip up in an afternoon.
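For a sense of why customers are pushing back, consider a hypothetical comparison of cumulative spend under per-seat pricing with the customary 5% annual uplift versus a value-based model tied to usage; none of these figures come from the interview.

```python
# Hypothetical comparison of cumulative spend: per-seat pricing with
# the customary 5% annual uplift versus a value-based model tied to
# usage. None of these figures come from the interview.

def seat_based(seats: int, monthly_price: float, years: int,
               uplift: float = 0.05) -> float:
    """Cumulative spend under per-user pricing with automatic increases."""
    return sum(seats * monthly_price * 12 * (1 + uplift) ** y
               for y in range(years))

def value_based(monthly_events: int, price_per_event: float,
                years: int) -> float:
    """Cumulative spend when the customer pays per measurable outcome."""
    return monthly_events * price_per_event * 12 * years

print(f"Seat-based, 3 yrs:  ${seat_based(500, 40, 3):,.0f}")
print(f"Value-based, 3 yrs: ${value_based(30_000, 0.15, 3):,.0f}")
```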
If the complexity of auditing bespoke, AI-generated systems increases exponentially, how will the role of professional auditors evolve? Could this shift create a new job market for specialists who verify machine-written code? Please explain the step-by-step process for ensuring these systems meet rigorous industry standards.
As bespoke AI systems proliferate, we are likely to see the auditing profession become one of the most sought-after careers on the planet. The complexity of verifying machine-written code doesn’t just grow linearly; it grows exponentially with every new custom feature, creating a massive demand for specialists who can bridge the gap between AI output and industry standards. The verification process begins with a deep forensic analysis of the code’s logic to ensure the model hasn’t hallucinated critical functions, followed by rigorous stress-testing against known security frameworks. Next, auditors must verify that the system maintains GDPR and ISO compliance through every layer of its data processing. Finally, there must be a continuous monitoring loop to ensure that as the AI model evolves, it doesn’t deviate from these established safety and performance benchmarks, providing a human “seal of approval” that no machine can give itself.
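That final monitoring step might look something like the sketch below: a battery of human-certified golden cases re-run against the machine-written component on every model update. The `generated_tax_rate` function and its cases are invented for illustration.

```python
# A sketch of the continuous monitoring loop: golden cases certified
# by a human auditor are re-run against the machine-written component
# on every model update. generated_tax_rate and its cases are
# invented for illustration.

def generated_tax_rate(amount_cents: int) -> float:
    """Placeholder for the AI-written function under audit."""
    return 0.20 if amount_cents > 0 else 0.0

# Inputs and expected outputs signed off at certification time.
GOLDEN_CASES = [
    (10_000, 0.20),
    (0, 0.0),
]

def audit_pass(tolerance: float = 1e-9) -> bool:
    """Fail loudly if behaviour drifts from the certified baseline."""
    for amount, expected in GOLDEN_CASES:
        actual = generated_tax_rate(amount)
        if abs(actual - expected) > tolerance:
            print(f"DRIFT: input={amount} expected={expected} got={actual}")
            return False
    return True

if __name__ == "__main__":
    # In production this runs on every regeneration, not just once.
    print("Seal of approval holds" if audit_pass() else "Escalate to auditors")
```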
What is your forecast for the SaaS industry?
I believe the SaaS industry is currently being “shaken and stirred,” but it will ultimately emerge much leaner and significantly stronger. While the recent market sell-off was a shock to the system, it serves as a necessary culling of providers who cannot deliver clear, measurable value beyond basic functionality. The survivors will be those who embrace AI to enhance their specialized expertise rather than fighting against it. We will see a shift where the “specialists” prove their worth over “generalists,” much like how everyone can buy a shovel, but most still choose to pay someone else to shovel their snow when the blizzard hits. By 2027, the industry will have moved past the hype of “vibe coding” and back toward a focus on the hard, necessary work of risk transfer, security compliance, and high-level architectural stability.
