Today we’re joined by Vijay Raina, a leading specialist in enterprise SaaS technology and software architecture. We’ll be dissecting Google’s latest Android security report, which reveals a fascinating shift in the battle against mobile malware. Our conversation will explore how enhanced pre-review checks and AI-human collaboration are making the Play Store a tougher target, the resulting migration of malicious activity to the broader Android ecosystem, and the significant strides made in curbing excessive data access by apps.
Policy-violating app submissions and developer account bans have both seen significant year-over-year decreases. Beyond AI, what specific ‘pre-review checks’ have been most effective, and how do you measure their deterrent effect on bad actors before they even submit an app?
It’s a fantastic question because it highlights that security is about building higher walls, not just catching intruders. The most effective measures have been mandatory developer verification and stringent testing requirements. Think of it as raising the cost of entry. Before, a bad actor could spin up countless anonymous accounts with ease. Now, with verification, there’s a real identity tied to the account, which creates accountability. The mandatory testing requirements mean they can’t just submit a barely functional shell of an app; it has to meet a certain quality standard. This combination makes it far more laborious and expensive to even attempt to publish a malicious app. The deterrent effect is crystal clear in the numbers: developer account bans plummeted from 333,000 in 2023 to just over 80,000 in 2025. Many bad actors simply aren’t trying anymore because the barrier to entry is too high.
The app review process now integrates generative AI models to help human reviewers find complex malicious patterns. Could you walk me through an example of how this AI-human collaboration works in practice and what kinds of subtle threats it uncovers that were previously missed?
Certainly. Imagine an app that passes all of the initial 10,000 automated safety checks. It doesn’t contain known malware signatures, and its requested permissions seem plausible on the surface. However, a generative AI model, trained on the entire history of the Play Store, might flag it. The AI doesn’t just see code; it sees behavior and intent. It might notice a subtle, yet unusual, sequence of network calls combined with a request for contact list access that is characteristic of a new strain of spyware, even if the code itself is unique. The AI then escalates this to a human reviewer, not with a simple “pass/fail,” but with a narrative: “This app’s behavior pattern shares a 95% similarity with tactics used in financial fraud apps from last quarter.” The human expert, armed with this context, can then perform a much more targeted deep-dive, uncovering a sneaky subscription trap that a purely automated system would have missed entirely. It’s a powerful fusion of machine-scale pattern recognition and human intuition.
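To make that handoff concrete, here is a minimal Kotlin sketch of what such an escalation step could look like. Everything here (the BehaviorProfile type, the cosine-similarity measure, the 0.90 threshold, the narrative string) is an illustrative assumption, not a description of Google’s actual review pipeline:

```kotlin
// Illustrative sketch of an AI-assisted review triage step.
// All names and thresholds are hypothetical, not Play Store internals.

import kotlin.math.sqrt

data class BehaviorProfile(val appId: String, val features: Map<String, Double>)
data class Escalation(val appId: String, val score: Double, val narrative: String)

// Cosine similarity between two behavior-feature vectors.
fun similarity(a: Map<String, Double>, b: Map<String, Double>): Double {
    val dot = a.keys.intersect(b.keys).sumOf { a.getValue(it) * b.getValue(it) }
    val normA = sqrt(a.values.sumOf { it * it })
    val normB = sqrt(b.values.sumOf { it * it })
    return if (normA == 0.0 || normB == 0.0) 0.0 else dot / (normA * normB)
}

// Compare a submission against known-bad profiles and, rather than returning
// a bare pass/fail, hand the human reviewer a narrative with context.
fun triage(submission: BehaviorProfile, knownBad: List<BehaviorProfile>): Escalation? {
    val best = knownBad
        .map { it to similarity(submission.features, it.features) }
        .maxByOrNull { it.second } ?: return null
    val (closest, score) = best
    if (score < 0.90) return null // nothing suspicious enough to escalate
    return Escalation(
        submission.appId,
        score,
        "Behavior pattern shares ${(score * 100).toInt()}% similarity with " +
            "${closest.appId}; recommend a targeted manual deep-dive."
    )
}
```

The key design point is the return type: the AI’s job ends with context for a human, not a verdict.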
While malicious submissions to the Play Store are decreasing, the number of new malicious apps identified elsewhere on Android has more than quintupled, to over 27 million. How does this shift in strategy by bad actors impact your security focus for the broader Android ecosystem?
This is what we call the “squeeze the balloon” effect. As we fortify the Play Store, the pressure pushes bad actors toward softer targets outside it, like third-party stores and direct downloads. That dramatically shifts our security posture. The focus can no longer be solely on gatekeeping at the store’s entrance; instead, it becomes about robust, on-device protection. This is where Google Play Protect becomes the star of the show. It has to act as a vigilant security guard living right on your phone, constantly scanning and identifying threats no matter where they come from. The numbers are staggering: identifications of non-Play-Store threats jumped from five million in 2023 to over 27 million in 2025. It tells us the fight has moved from a centralized battleground to a decentralized, guerrilla war waged on millions of individual devices.
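For a sense of what source-agnostic, on-device protection means in practice, here is a toy Kotlin sketch. The types, the risk threshold, and the verdict logic are assumptions for illustration; the real Play Protect service relies on far richer signals and cloud-backed models:

```kotlin
// Minimal sketch of source-agnostic, on-device scanning in the spirit of
// Play Protect. All types and thresholds are hypothetical.

enum class InstallSource { PLAY_STORE, THIRD_PARTY_STORE, DIRECT_DOWNLOAD }

data class InstalledApp(
    val packageName: String,
    val source: InstallSource,
    val requestedPermissions: Set<String>,
    val behaviorRiskScore: Double // assumed output of an on-device model
)

sealed class Verdict {
    object Safe : Verdict()
    data class Flagged(val reason: String) : Verdict()
}

// Every app is scanned the same way, no matter where it came from;
// gatekeeping at the store entrance alone is no longer enough.
fun scan(app: InstalledApp): Verdict = when {
    app.behaviorRiskScore > 0.8 ->
        Verdict.Flagged("High-risk behavior detected in ${app.packageName}")
    "READ_SMS" in app.requestedPermissions && app.source != InstallSource.PLAY_STORE ->
        Verdict.Flagged("Sensitive permission requested by an unreviewed install")
    else -> Verdict.Safe
}

fun scanDevice(apps: List<InstalledApp>): List<Verdict.Flagged> =
    apps.map(::scan).filterIsInstance<Verdict.Flagged>()
```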
There was a dramatic drop in apps blocked for seeking excessive data access, from 1.3 million down to just 255,000. What specific policy changes or AI detection advancements led to this steep decline, and how do you ensure legitimate apps aren’t being overly restricted?
That drop is a direct result of a two-pronged approach. First, the policies around data access are now much clearer and stricter about what constitutes a legitimate need for a sensitive permission; developers are required to justify why their flashlight app needs access to your contacts, for example. Second, the AI-powered detection systems have become remarkably adept at “contextual analysis.” The AI doesn’t just see a permission request; it analyzes the app’s core function to determine whether the request is logical. That precision is key to avoiding over-restriction: a photo-editing app asking for storage access is fine, but one asking for your call logs is a major red flag. By codifying these rules and automating their enforcement with smart AI, we’ve essentially educated the developer community and weeded out the bad actors who were grabbing as much data as they could.
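Here’s one way to picture that contextual analysis as code: a simplified Kotlin sketch where an app’s declared category implies a baseline of logical permissions, and anything outside it gets flagged. The category map and permission names are simplified assumptions, not Play policy definitions:

```kotlin
// Sketch of "contextual analysis": judge a permission request against the
// app's declared core function rather than in isolation.

enum class Category { PHOTO_EDITOR, MESSAGING, NAVIGATION, FLASHLIGHT }

// Hypothetical baseline of permissions that are logical for each category.
val expectedPermissions: Map<Category, Set<String>> = mapOf(
    Category.PHOTO_EDITOR to setOf("READ_MEDIA_IMAGES", "CAMERA"),
    Category.MESSAGING to setOf("READ_CONTACTS", "POST_NOTIFICATIONS"),
    Category.NAVIGATION to setOf("ACCESS_FINE_LOCATION"),
    Category.FLASHLIGHT to setOf("CAMERA") // the flash is driven via the camera API
)

// Flag every requested permission that the app's core function can't justify.
fun excessiveRequests(category: Category, requested: Set<String>): Set<String> =
    requested - expectedPermissions.getValue(category)

fun main() {
    // A flashlight app asking for contacts and call logs is a red flag.
    val flagged = excessiveRequests(
        Category.FLASHLIGHT,
        setOf("CAMERA", "READ_CONTACTS", "READ_CALL_LOG")
    )
    println("Needs justification or rejection: $flagged")
}
```

Running this flags READ_CONTACTS and READ_CALL_LOG while leaving CAMERA alone, which is exactly the precision that keeps legitimate apps from being over-restricted.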
What is your forecast for the evolution of mobile malware, particularly as bad actors begin leveraging their own AI tools to circumvent security systems?
My forecast is that we’re entering an AI-driven arms race in cybersecurity. The future of mobile malware won’t be about static, identifiable code signatures. Instead, bad actors will use their own generative AI to create polymorphic malware: threats that rewrite their own code with every new infection, making them incredibly difficult to detect with traditional methods. These AI-powered attacks will be more personalized, more adaptive, and capable of identifying system vulnerabilities in real time. Our defense, in turn, must become equally dynamic. We will rely more and more on AI security models that detect anomalous behavior rather than just malicious code, essentially creating a digital immune system for our devices. The fight will be faster, more complex, and almost entirely waged between competing AI systems.
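As a closing illustration of that behavior-first defense, here is a small Kotlin sketch that scores how far an app’s observed runtime behavior drifts from a learned baseline instead of matching code signatures. The feature names, baseline statistics, and three-standard-deviation threshold are all hypothetical:

```kotlin
// Sketch of signature-free, behavior-based detection: measure how far an
// app's observed behavior drifts from a learned baseline.

import kotlin.math.abs

data class Baseline(val mean: Map<String, Double>, val stdDev: Map<String, Double>)

// Polymorphic malware can rewrite its code for every infection, but its
// behavior (network bursts, permission use, background activity) still has
// to diverge from normal to do harm, and that divergence is measurable.
fun anomalyScore(observed: Map<String, Double>, baseline: Baseline): Double =
    observed.entries.maxOfOrNull { (feature, value) ->
        val mean = baseline.mean[feature] ?: 0.0
        val sd = baseline.stdDev[feature] ?: 1.0
        abs(value - mean) / sd // per-feature z-score
    } ?: 0.0

fun isAnomalous(observed: Map<String, Double>, baseline: Baseline): Boolean =
    anomalyScore(observed, baseline) > 3.0 // flag anything beyond three sigma
```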
