Pichai's security warning: AI exploits are faster than patches, and the market hasn't reacted yet

AI finds vulnerabilities faster than humans can patch them

Pichai's remarks on a podcast have been underestimated: AI models can now find vulnerabilities in bulk, in a repeatable, pipeline-like way, at remarkably low cost. Internal Google data shows roughly 90 new exploit chains discovered in 2025, and Anthropic's models found thousands of flaws for very little money. A single zero-day used to sell for $100k; that pricing model is now collapsing.
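The collapse is simple division: once discovery is automated, the marginal cost per flaw drops by orders of magnitude. A toy calculation makes the point; only the $100k-per-zero-day figure comes from the article, and the automated-sweep numbers below are illustrative assumptions, not reported data:

```python
# Back-of-envelope: why per-exploit economics collapse when discovery is automated.
# Only the $100k manual price is from the article; other numbers are assumptions.

def cost_per_finding(total_cost_usd: float, findings: int) -> float:
    """Average cost to discover one exploitable flaw."""
    return total_cost_usd / findings

# Manual era: one skilled researcher, months of work, one zero-day.
manual = cost_per_finding(total_cost_usd=100_000, findings=1)

# Automated era (assumed): a model sweep that surfaces flaws in bulk.
automated = cost_per_finding(total_cost_usd=50_000, findings=1_000)

print(f"manual:    ${manual:,.0f} per flaw")
print(f"automated: ${automated:,.0f} per flaw")
print(f"ratio:     {manual / automated:,.0f}x cheaper")
```

Under these assumptions the per-flaw cost falls from $100,000 to $50, a 2,000x drop. The exact ratio matters less than the direction: any seller pricing zero-days at artisanal rates is competing with an assembly line.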

Even more notable is the industry’s silence. After Pichai’s comments spread through the tech world, hardly any senior security professionals came out to say “the risk is being exaggerated.” This silence itself is consensus: as offensive capability expands with compute, defense can’t keep up.

Pichai also said something that’s not very comforting: industry-level defensive coordination “has not happened, currently.”

What this means in practical terms:

  • Security shifts from "patch after deployment" to a hard gate of "can we deploy this at all?" Compliance-focused CIOs may delay cutting-edge AI rollouts by a year or more while waiting for more auditable architectures
  • Players that integrate AI with security gain an advantage. Platform vendors like CrowdStrike and Palo Alto Networks are better positioned than pure AI labs that focus only on model capability and leave liability boundaries unmanaged
  • Regulation is still a blank page. The emphasis today is on "voluntary industry initiatives," but nation-state attackers are already probing edge devices. That looks fine in the short term but carries long-term risk

The “AI is just a tool” argument ignores asymmetry

A common take is: “AI risk will be naturally absorbed, and technology will always adapt.” But data from Google Threat Intelligence Group provides a counterexample: in 2025, the number of zero-days hit a new high, and 48% directly targeted enterprise software.

AI has driven down the cost of attack, but the cost of defense hasn't fallen proportionally, and trust in software is quietly eroding. Google's stock barely moved after Pichai spoke, suggesting the market hasn't priced in the implication: the application layer now sits on a more fragile foundation than before.

AI optimists (inside the labs)
  • Evidence/claims: Pichai treats security as an "implicit constraint" alongside hardware; the market shows no obvious selling pressure
  • Industry implications: Attention stays on compute expansion, but enterprise pilots will be more cautious
  • My take: Underestimates the scale of the problem. Closed ecosystems help, but coordination failures will magnify losses

Security bulls (Wall Street analysts)
  • Evidence/claims: CrowdStrike and Palo Alto gained momentum from Anthropic's vulnerability mining; Wolfe Research coined "cyber warfare at machine speed"
  • Industry implications: Capital concentrates into AI-native security platforms, with top players pushing toward $10B ARR
  • My take: Largely agree. These companies are positioned exactly to capture the asymmetric advantage Pichai described

Risk skeptics (policy circles)
  • Evidence/claims: Zero-day count up 15% year-over-year; no regulatory response within 48 hours of incidents
  • Industry implications: The status quo continues until something big enough happens
  • My take: Betting on calm seas is unwise; pressure is building

National-level threat observers (GTIG)
  • Evidence/claims: China-linked actors attack edge devices; commercial spyware vendors coordinate to exploit mobile endpoints
  • Industry implications: The edge-security premium rises; fragmented vendors face pressure
  • My take: True, but it overlooks how fast the technology spills over to a wider set of attackers

Bottom-line judgment: Pichai's warning forces the industry to face a reality it has been slow to confront: offense-defense asymmetry, with offense holding the advantage. AI stacks with security built in natively are more resilient than those patched after the fact. By the time the first major AI-facilitated breach makes headlines, late-arriving investors will likely pay tuition.

Importance: High

Category: AI Security|Industry Trends|Technical Insights

Conclusion: The window is still open, but it's narrowing. Platform-style security vendors, and builders who can push security earlier into the AI stack, hold a relative advantage. On the trading side, security leaders look more favorable on the long side; pure model labs and passive holders face pressure from delayed repricing and a liability premium.
