Who is Winning the AI Cybersecurity Arms Race?

As a leading expert in enterprise SaaS technology with a focus on software architecture, Vijay Raina offers a unique perspective on the intersection of AI and cybersecurity. We’re discussing depthfirst, a new security startup that just closed a significant $40 million Series A round to build what it calls an AI-native defense platform. Our conversation will explore how this fresh capital will be deployed to scale their team and technology, delve into the mechanics of their General Security Intelligence platform, and examine how their defensive AI is being designed to counter the rising tide of AI-powered cyberattacks. We’ll also touch upon the strategic advantage of their leadership’s blended background in both pure AI research and practical security engineering, and what early results from their initial partnerships reveal about the state of modern software security.

You recently secured a $40 million Series A round led by Accel Partners. How will this capital specifically fuel your hiring in applied research and engineering, and what are the first key milestones you aim to achieve with this newly expanded team?

This $40 million is truly transformational for us. It’s not just about increasing headcount; it’s about strategically acquiring very specific, top-tier talent. In applied research, we’re targeting specialists in adversarial AI—people who live and breathe the methods attackers use to trick machine learning models. On the engineering side, we’re building out the core infrastructure to scale our platform’s analysis capabilities by orders of magnitude. The first major milestone is to reduce our code analysis time while simultaneously increasing the complexity of threats we can detect. We want our General Security Intelligence platform to be so fast and seamless that it becomes an invisible, indispensable part of the development lifecycle, not a bottleneck.

Your platform, General Security Intelligence, scans codebases and workflows. Can you walk us through how it specifically protects against credential exposures versus threats in third-party components? Please provide a step-by-step example of how it operates within a company’s typical development cycle.

Absolutely. Let’s imagine a developer is about to push new code. The moment they initiate that push, our platform springs into action in two parallel streams. First, it does a lightning-fast scan of the new code specifically for patterns that look like secrets—API keys, private tokens, database credentials. If it finds a match, it immediately flags it and can even block the commit, giving the developer instant feedback before a secret ever leaves their machine. At the very same time, the platform analyzes the manifest of dependencies, checking every open-source and third-party component. It doesn’t just check version numbers against a vulnerability database; our AI models analyze the behavior of that component to see if it could be a vector for a novel threat. This all happens within seconds, providing a comprehensive, two-pronged defense right inside the developer’s workflow.
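The first of those two streams, the secret scan, can be illustrated with a minimal sketch. The patterns and function names below are invented for illustration and are far simpler than a production rule set; this is not depthfirst's actual implementation, just the general shape of a pattern-based credential check that a pre-push hook might run.

```python
import re

# Illustrative regex patterns for common secret formats; a real scanner
# would carry a much larger, continuously tuned rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(diff_text: str) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) pairs for suspected secrets."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

def should_block_push(diff_text: str) -> bool:
    """A pre-push hook would abort the push when any match is found,
    giving the developer feedback before the secret leaves the machine."""
    return bool(scan_for_secrets(diff_text))
```

Running this at push time, as described above, means the feedback loop closes in milliseconds and the developer fixes the leak while the change is still local.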

As attackers increasingly use AI for everything from writing malware to scanning for vulnerabilities, how does your defensive AI evolve to counter these new threats? Could you share an anecdote or a metric that illustrates how your system stays ahead of these AI-driven exploits?

It’s a constant cat-and-mouse game, and staying ahead means moving beyond simple pattern matching. Attackers are using AI to create polymorphic malware that changes its signature with every deployment, making it invisible to traditional scanners. Our defensive AI is trained not to look for a specific signature but to understand the intent of the code. We had a situation with an early partner where our system flagged a piece of code that had passed all their existing security checks. It looked like a routine data logging script, but our AI recognized the underlying structure as a pattern commonly used for data exfiltration staging. It was an AI-generated script designed to be benign on the surface. That’s the difference—we’re not just looking for known threats; we’re identifying the fundamental building blocks of an attack before it’s even assembled.
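The idea of flagging structure rather than signatures can be sketched as a toy heuristic: code that both gathers local data and pushes it to a remote endpoint exhibits the staging pattern described above, regardless of how its surface text is written. The call lists and function names here are invented for illustration and bear no relation to depthfirst's actual models.

```python
import ast

# Toy behavior vocabularies: calls treated as "collect" and "send"
# primitives. Purely illustrative -- a real system models far richer
# behaviors than bare call names.
COLLECT_CALLS = {"open", "read", "listdir", "walk"}
SEND_CALLS = {"urlopen", "post", "sendall", "connect"}

def called_names(tree: ast.AST) -> set[str]:
    """Collect the bare names of every function or method called."""
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name):
                names.add(func.id)
            elif isinstance(func, ast.Attribute):
                names.add(func.attr)
    return names

def looks_like_exfil_staging(source: str) -> bool:
    """Flag code that both reads local data and sends it out --
    the structural pattern, not any specific signature."""
    names = called_names(ast.parse(source))
    return bool(names & COLLECT_CALLS) and bool(names & SEND_CALLS)
```

A polymorphic rewrite can change every identifier and string in the script, but it cannot remove the collect-then-send structure without also removing the attack, which is why intent-level analysis survives signature churn.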

Your leadership team brings together deep experience from AI-focused firms like Google DeepMind and security-centric companies like Square. How does this blend of AI and security expertise shape your product development, and what specific trade-offs do you navigate when building your technology?

This blend is our core strength. Our CTO from Google DeepMind is constantly pushing the envelope on what’s theoretically possible with AI in threat detection. Meanwhile, our co-founder from Square brings the battle-tested pragmatism of enterprise security, constantly grounding us in the reality of what security teams actually need. The biggest trade-off we navigate is the classic friction-versus-security dilemma. An incredibly complex AI model might catch the most obscure threats, but if it slows down a developer’s workflow by even a few seconds, they’ll find a way to bypass it. Our product development is a constant dialogue between these two mindsets—creating the most advanced defensive AI possible while ensuring it’s so fast and frictionless that it becomes a welcome safety net for developers, not a hurdle.

With early partnerships at companies like AngelList and Moveworks, what have been the most surprising or challenging security issues you’ve helped them identify? Can you describe the process of integrating your platform and the initial results they’ve seen in their security posture?

Working with innovative, fast-paced companies like AngelList and Moveworks has been incredibly insightful. The most surprising challenge isn’t a lack of security awareness but the sheer velocity of development. In such dynamic environments, temporary or “transient” credential exposures in automated scripts or configuration files are a common issue. A key might be exposed for only a few minutes during a deployment, but in a world of automated attackers, that’s more than enough time. The integration process is designed for this speed; we hook directly into their existing code repositories and CI/CD pipelines with minimal configuration. The initial results were immediate. Within the first week, they saw a dramatic drop in alerts for these simple but dangerous mistakes, which freed up their security teams to focus on deeper, more architectural security challenges.
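The "hook into existing CI/CD pipelines with minimal configuration" integration might look like the following GitHub Actions-style fragment. This is a hypothetical sketch: the action name `depthfirst/scan-action` and its inputs are invented placeholders, not a published action.

```yaml
# Hypothetical CI step: scan every push and pull request so transient
# credential exposures are caught before a deployment ever runs.
name: security-scan
on: [push, pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan code and dependencies
        uses: depthfirst/scan-action@v1   # hypothetical action name
        with:
          fail-on: credential-exposure    # block the pipeline on exposed secrets
          scan-dependencies: true         # also analyze third-party components
```

Because the scan runs on every automated push, even a key that would only be exposed for the few minutes of a deployment gets caught before the pipeline proceeds.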

What is your forecast for the evolution of AI-powered cyberattacks and defense over the next five years?

Over the next five years, the battlefield will shift from human-led attacks to autonomous, AI-driven campaigns. We’re going to see AI attackers that can independently discover a vulnerability, write a custom exploit for it, deploy that exploit, and then move laterally through a network to achieve their objective, all with minimal human oversight. Consequently, defense has to become equally autonomous. The era of a human analyst reviewing alerts and manually responding will be too slow. The future is autonomous defense—AI systems that can detect and neutralize these AI-driven campaigns in real-time, at machine speed. The fight will be AI versus AI, and victory will be measured in milliseconds.
