a16z Big Ideas 2026: the first four trends
Breakthroughs in AI have shifted from model capabilities to system capabilities, and industry upgrades have shifted in step, moving from isolated innovations to redefining infrastructure, workflows, and how users interact. Four a16z investment teams share their key insights for 2026 across four dimensions: infrastructure, growth, healthcare, and interactive worlds. This article is based on a16z's original piece, compiled and translated by BlockBeats.

(Previously: a former a16z partner's major tech report: How AI Will Consume the World?)
(Background: a16z announces a $10 billion new fund focused on AI, crypto finance, and defense technology)

Summary: Over the past year, AI's breakthroughs have shifted from model abilities to system abilities: understanding long sequences, maintaining consistency, executing complex tasks, and collaborating with other agents. As a result, industry upgrades have moved from point innovations to redefining infrastructure, workflows, and user interaction. In the annual "Big Ideas 2026," four a16z investment teams offer key insights for 2026 across four dimensions: infrastructure, growth, healthcare, and interactive worlds. Together they describe one trend: AI is no longer just a tool, but an environment, a system, and an agent acting alongside humans. Below are the four teams' judgments on the structural changes coming in 2026.

As investors, our job is to dig into every corner of the tech industry, understand its dynamics, and judge where it evolves next. So every December, we invite each investment team to share one "big idea" they believe tech entrepreneurs will tackle in the coming year. Today we present perspectives from the Infrastructure, Growth, Bio + Health, and Speedrun teams. The other teams' ideas will be published tomorrow; stay tuned.

Infrastructure Team

Jennifer Li: Startups will tame the "chaos" of multimodal data

Unstructured, multimodal data has always been both the biggest bottleneck for enterprises and their greatest untapped treasure. Every company is drowning in PDFs, screenshots, videos, logs, emails, and other semi-structured "data sludge." Models keep getting smarter, yet their inputs keep getting messier, which produces hallucinations in RAG systems, causes agents to fail in subtle and costly ways, and keeps critical workflows dependent on manual quality checks.

Today, the real limiting factor for AI companies is data entropy: in a world where 80% of enterprise knowledge lives in unstructured formats, freshness, structure, and authenticity are in constant decline. Untangling this mess of unstructured data is therefore becoming a generational startup opportunity. Enterprises need continuous ways to clean, structure, verify, and govern their multimodal data so that downstream AI workloads can actually perform. Use cases are everywhere: contract analysis, onboarding, claims processing, compliance, customer service, procurement, engineering retrieval, sales enablement, analytics pipelines, and every workflow that depends on reliable context. Platforms that can extract structure from documents, images, and videos, reconcile conflicts, repair data pipelines, and keep data fresh and retrievable will hold the key to enterprise knowledge and processes.
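As a purely illustrative sketch of the clean/structure/verify loop described above, the Python snippet below pulls two fields out of raw contract text with regular expressions, records provenance and extraction time so staleness can be detected, and collects validation issues for human review. The ContractRecord type, field names, and 30-day freshness window are assumptions made up for this example, not a reference to any particular product.

```python
import re
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical structured record distilled from one unstructured contract.
@dataclass
class ContractRecord:
    source_id: str                      # provenance: where the raw text came from
    counterparty: Optional[str] = None  # extracted field
    renewal_date: Optional[str] = None  # extracted field, ISO date string
    extracted_at: datetime = field(default_factory=datetime.now)
    issues: list = field(default_factory=list)  # validation problems for human review

def extract(source_id: str, raw_text: str) -> ContractRecord:
    """Structure one raw document: extract fields, then verify what was found."""
    rec = ContractRecord(source_id=source_id)

    m = re.search(r"between .+? and (?P<name>[A-Z][\w&., ]+?)(?:\.|,|\n)", raw_text)
    if m:
        rec.counterparty = m.group("name").strip()
    else:
        rec.issues.append("counterparty not found")

    m = re.search(r"renew(?:s|al)? on (?P<date>\d{4}-\d{2}-\d{2})", raw_text, re.I)
    if m:
        rec.renewal_date = m.group("date")
    else:
        rec.issues.append("renewal date not found")

    return rec

def is_stale(rec: ContractRecord, max_age: timedelta = timedelta(days=30)) -> bool:
    """Freshness check: re-extract if the structured view is too old."""
    return datetime.now() - rec.extracted_at > max_age

if __name__ == "__main__":
    raw = "This agreement is made between Acme Corp and Globex Inc.\nIt renews on 2026-03-01."
    record = extract("contracts/42.pdf", raw)
    print(record.counterparty, record.renewal_date, record.issues, is_stale(record))
```

In practice the extraction step would be model-driven rather than regex-driven, but the surrounding contract of provenance, verification, and freshness is the part the passage argues is missing today.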
Joel de la Garza: AI will reshape cybersecurity's hiring problem

Over the past decade, the biggest headache for CISOs has been hiring. From 2013 to 2021, the global cybersecurity talent gap grew from under 1 million unfilled roles to 3 million. The reason is that security teams require highly specialized technical talent, then assign those people to exhausting frontline chores, such as log review, that almost nobody wants to do. The deeper problem is that security teams create this pain for themselves: they buy "detect everything" tools that force the team to review everything, which manufactures an artificial labor shortage and a vicious cycle.

In 2026, AI will break this cycle by automating most of the repetitive, redundant work and significantly narrowing the talent gap. Anyone who has worked on a large security team knows that half the work could be fully automated; the problem is that when you are overwhelmed every day, you never get to step back and decide what should be automated. Truly AI-native tools will do this for security teams, freeing them to do the work they actually want to do: hunting attackers, building systems, and fixing vulnerabilities.

Malika Aubakirova: Agent-native infrastructure will become the default

The biggest infrastructure shake-up of 2026 won't come from outside; it will come from inside. We are shifting from traffic that is human-speed, low-concurrency, and predictable to workloads that are agent-speed, recursive, bursty, and massive. Today's enterprise backends are designed around a 1:1 mapping of human action to system response. They are not built to handle a single "goal" from an agent that fans out into 5,000 sub-tasks, database queries, and internal API calls within milliseconds. When an agent tries to refactor a codebase or work through security logs, it does not look like a user; to a traditional database or rate limiter, it looks more like a DDoS attack.

Building systems for agent workloads in 2026 means redesigning the control plane. Agent-native infrastructure will begin to emerge. Next-generation systems will have to treat the thundering herd as the default state: cold starts must shrink, latency variance must tighten, and concurrency limits must rise by orders of magnitude. The real bottleneck will shift to coordination: routing, locking, state management, and policy enforcement under massively parallel execution. The platforms that can survive the flood of tool calls will be the winners.
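To make the coordination problem concrete, here is a minimal sketch, using Python's asyncio, of one primitive an agent-native control plane needs: admit the entire burst of sub-tasks as the normal case, then pace how many reach the backend at once instead of rejecting the spike as abuse. The call_tool coroutine, task count, and concurrency limit are hypothetical placeholders for this example.

```python
import asyncio
import random

# Hypothetical stand-in for one downstream call (DB query, internal API, tool call).
async def call_tool(task_id: int) -> str:
    await asyncio.sleep(random.uniform(0.01, 0.05))  # simulate variable backend latency
    return f"task-{task_id}: ok"

async def run_goal(num_subtasks: int, max_in_flight: int) -> list[str]:
    """Fan one agent 'goal' out into many sub-tasks, but bound how many
    hit the backend at once instead of rejecting the burst outright."""
    gate = asyncio.Semaphore(max_in_flight)

    async def bounded(task_id: int) -> str:
        async with gate:  # admission control: at most max_in_flight concurrent calls
            return await call_tool(task_id)

    # The burst is the default state: schedule everything, let the gate pace it.
    return await asyncio.gather(*(bounded(i) for i in range(num_subtasks)))

if __name__ == "__main__":
    results = asyncio.run(run_goal(num_subtasks=5000, max_in_flight=200))
    print(len(results), "sub-tasks completed")
```

A real control plane would layer routing, per-tenant quotas, locking, and policy checks on top, but the shape is the same: the burst is scheduled in full and the gate does the pacing.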
Justine Moore: Creative tools will become fully multimodal

We already have the basic building blocks of AI storytelling: generated sound, music, images, and video. But the moment a piece runs longer than a short clip, getting director-level control is still time-consuming, painful, and sometimes impossible. Why can't a model take a 30-second video, introduce a new character from reference images and audio we supply, and keep shooting the same scene? Why can't it reshoot from a new angle, or match motion to reference footage?

2026 will be the year of truly multimodal AI creation. Users will be able to feed any reference material to a model, then generate new work with it or edit existing scenes. We have already seen early products such as Kling O1 and Runway Aleph, but this is just the beginning: both the model layer and the application layer need new innovation. Content creation is one of AI's killer apps, and I expect multiple breakout products serving everyone from meme makers to Hollywood directors.

Jason Cui: AI-native data stacks will keep evolving

Over the past year, the "modern data stack" has become noticeably more consolidated. Data companies are moving from modular services for collection, transformation, and computation toward bundled, unified platforms, as seen in the Fivetran/dbt merger and Databricks' expansion. But even though the ecosystem has matured, we are still in the early days of truly AI-native data architectures.
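The piece does not define what an AI-native data architecture looks like, so the following is only an illustrative guess at the shape: an agent-facing entry point that turns a natural-language question into SQL and runs it against the warehouse. The question_to_sql step is a hard-coded stub standing in for a model call, and the orders table is invented for the example.

```python
import sqlite3

# Toy warehouse: in a real stack this would be the bundled platform's query engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("emea", 120.0), ("emea", 80.0), ("amer", 200.0)])

def question_to_sql(question: str) -> str:
    """Stub for the model-backed translation step. An AI-native stack would
    generate this SQL from a governed semantic layer; here it is hard-coded."""
    assert "revenue by region" in question.lower()
    return "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"

def answer(question: str) -> list[tuple]:
    """Agent-facing entry point: natural-language question in, rows out."""
    sql = question_to_sql(question)
    return conn.execute(sql).fetchall()

if __name__ == "__main__":
    print(answer("What is revenue by region?"))  # [('amer', 200.0), ('emea', 200.0)]
```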