From Cloud to Code to AI: A New Era of Security at RSA 2025

For years, RSA Conference has been dominated by themes like cloud security, identity management, and zero trust architectures. But in 2025, something fundamental has shifted. The conversation has moved from securing infrastructure to securing intelligence—from static threat models to adaptive, autonomous systems.

In short: AI security is no longer a subtopic. It IS the topic.

What started as isolated experiments with LLMs and AI copilots has rapidly evolved into something much more powerful—and potentially dangerous. AI is being embedded into core workflows, agents are being deployed at scale, and new protocols are emerging to support machine-to-machine communication. These agentic AI systems now operate with increasing autonomy, making decisions and taking actions without constant human supervision. But with this evolution comes a wave of new security questions: How do we trust autonomous systems? How do we secure non-human identities? How do we build protections before the next attack vector goes mainstream?

As we head to the RSA Conference next week in San Francisco, we’re watching five areas where AI and automation are upending the traditional security playbook. Here are our thoughts on the new patterns, new risks, and entirely new categories of tooling we expect to see this year:

1. AI-Powered Remediation Isn’t the Future—It’s Already Happening

In a world of constant CVEs, human-in-the-loop remediation is starting to look like a bottleneck. AI agents are now capable of not just flagging issues but fixing them—generating patches, validating them, and even opening PRs automatically. What used to take days now happens in minutes. And this shift isn’t just transforming container security—it’s redefining how we handle vulnerabilities across application code, open source packages, and cloud infrastructure. We’re only at the beginning.
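To make that flag-to-PR loop concrete, here is a minimal sketch. Everything in it is illustrative: the Finding shape and the injected propose_patch, validate, and open_pr callables stand in for a real scanner, an LLM patch generator, a test harness, and a Git hosting API, none of which this post names.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Finding:
    cve_id: str
    package: str
    fixed_version: Optional[str]  # None if upstream has no fix yet

def remediate(findings: list[Finding],
              propose_patch: Callable[[Finding], Optional[str]],
              validate: Callable[[str], bool],
              open_pr: Callable[[str, str], None]) -> None:
    """Flag -> patch -> validate -> PR, with humans only on the failure path."""
    for finding in findings:
        patch = propose_patch(finding)   # e.g. an LLM-generated version bump or backport
        if patch is None:
            continue                     # no safe automated fix; leave it for a human
        if validate(patch):              # run builds/tests before anything ships
            open_pr(f"fix: remediate {finding.cve_id}", patch)
        # a failed validation should route to human review, not auto-merge
```

The important design choice is that the agent never merges anything it cannot validate; automation handles the happy path, and humans keep the exceptions.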

As LLMs become more specialized and techniques like Retrieval-Augmented Generation (RAG) gain traction, expect remediation to become not just faster but smarter, more contextual, and more deeply integrated into the way software is built and maintained. These systems are beginning to function as collaborative security partners, understanding both code semantics and security implications well enough to propose safe, appropriate fixes. Root is excited to be at the forefront of this evolution.
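As a rough illustration of what RAG-backed remediation could look like, the sketch below retrieves the advisory snippets most relevant to a CVE and folds them into a fix prompt. The embed callable (any sentence-embedding model) and the prompt wording are assumptions, not any specific product's pipeline.

```python
import numpy as np

def top_k(query_vec: np.ndarray, corpus_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    # cosine similarity: pick the k advisory snippets most relevant to the CVE
    sims = corpus_vecs @ query_vec / (
        np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    return np.argsort(sims)[::-1][:k]

def build_fix_prompt(cve_summary: str, advisories: list[str], embed) -> str:
    corpus_vecs = np.stack([embed(a) for a in advisories])
    best = top_k(embed(cve_summary), corpus_vecs)
    context = "\n---\n".join(advisories[i] for i in best)
    return (f"Context (advisories, changelogs):\n{context}\n\n"
            f"Vulnerability: {cve_summary}\n"
            "Propose the smallest patch that removes the vulnerability "
            "without breaking the existing tests.")
```

The retrieval step is what makes the fix contextual: the model sees the upstream advisory and changelog, not just the CVE identifier.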

2. Secure-by-Design Is Finally Becoming the Default

The secure-by-design movement marks a fundamental shift in how teams think about security—not as a gate or a checklist, but as a core design constraint, just like performance, scalability, or cost. Security is now understood to be a shared responsibility across engineering, product, and platform teams.

This change isn’t just cultural—it’s deeply practical. The rise of containers has redefined how software is built and deployed, enabling faster releases, more modular systems, and cloud-native scale. But that speed also introduces new risk. Containers may be lightweight by design, but they’ve become heavyweight when it comes to security exposure.

That’s why secure-by-design can’t be aspirational—it has to be operational. It means integrating security into every stage of the software lifecycle: from early design decisions to runtime environments. And the tooling is finally catching up.

Platforms that automate container security at scale make it possible to continuously identify, patch, and remediate vulnerabilities—without breaking developer workflows or slowing down delivery. By aligning with the way modern teams already build, test, and ship code, these solutions help reduce cognitive load, eliminate manual backlog, and ensure that security is not just built-in, but built for the pace of today’s cloud-native development.
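As one small example of "built for the pace of delivery," a CI gate can fail a build only when a critical vulnerability actually has a fix available, which keeps the signal actionable rather than noisy. The sketch below assumes scanner output in a Trivy-style JSON shape (a "Results" list containing "Vulnerabilities"); field names will vary by tool.

```python
import json
import sys

def fixable_critical(report_path: str) -> list[str]:
    """Return IDs of critical CVEs that already have an upstream fix."""
    with open(report_path) as f:
        report = json.load(f)
    blocking = []
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            if vuln.get("Severity") == "CRITICAL" and vuln.get("FixedVersion"):
                blocking.append(vuln["VulnerabilityID"])
    return blocking

if __name__ == "__main__":
    cves = fixable_critical(sys.argv[1])
    if cves:
        print("Blocking build; fixable critical CVEs:", ", ".join(cves))
        sys.exit(1)  # fail the pipeline until the base image or package is bumped
```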

3. Securing the Model Context Protocol (MCP): The New API Surface

As autonomous agents proliferate, so does the need for them to communicate securely and with context. Enter the Model Context Protocol (MCP): a new standard for how AI-native systems share state, intent, and actions. MCP is quickly becoming foundational to the emerging agent ecosystem, allowing models to collaborate, hand off tasks, and coordinate actions dynamically. But it also introduces a novel and largely uncharted attack surface—one that challenges many of the assumptions we’ve relied on in traditional API security.
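For readers who haven't seen it, MCP frames its messages as JSON-RPC 2.0. The simplified tool-call request below (trimmed from the shape in the public spec) shows why the security analysis differs from classic API security: the payload is free-form context that downstream models will interpret.

```python
# A tool-call request, simplified from MCP's JSON-RPC 2.0 framing.
tool_call = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "query_database",           # a tool the MCP server exposes
        "arguments": {"sql": "SELECT 1"},   # free-form, attacker-influenceable input
    },
}
# Both the arguments here and the tool results flowing back are unstructured
# context that downstream models will read and act on, which is why prompt
# injection can propagate across an entire agent chain.
```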

Early signs of abuse are already surfacing: prompt injection attacks that target agent-to-agent chains, model impersonation risks where a malicious agent mimics another’s identity, and unintended behavior cascades resulting from poorly scoped context sharing. These threats don’t just compromise single interactions—they can propagate downstream across entire agent workflows.

In response, the industry is beginning to explore new protection models. Identity-bound tokens for agents, runtime context validation, and secure graph traversal limits are some of the early ideas gaining traction. But this space is nascent, and there are no widely adopted best practices—yet.
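Here is a toy sketch of how those three ideas could compose, using PyJWT for the identity-bound token. The claim names (scopes, max_hops) are our own illustrations, not part of any standard.

```python
import time
import jwt  # PyJWT

SECRET = "replace-with-a-real-signing-key"

def mint_agent_token(agent_id: str, scopes: list[str], max_hops: int = 3) -> str:
    claims = {
        "sub": agent_id,                  # identity-bound: the token names one agent
        "scopes": scopes,                 # narrowly scoped capabilities
        "max_hops": max_hops,             # secure graph traversal limit
        "exp": int(time.time()) + 300,    # short-lived by default
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

def validate_hop(token: str, hops_so_far: int, requested_scope: str) -> dict:
    # raises if the token is expired or forged (runtime context validation)
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    if hops_so_far >= claims["max_hops"]:
        raise PermissionError("agent chain exceeded its traversal limit")
    if requested_scope not in claims["scopes"]:
        raise PermissionError(f"scope {requested_scope!r} not granted to {claims['sub']}")
    return claims
```

Calling validate_hop at every handoff bounds how far a compromised or impersonated agent can push a task through the graph.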

4. The Agent Era: Threats of Automation That Thinks for Itself

MCP isn’t the only emerging attack surface in this new landscape. The rise of autonomous agents introduces an entirely new security paradigm—one where automation doesn’t just execute instructions, but makes decisions in real time. These agents are already proving their value across the security ecosystem: ingesting threat intel, correlating logs, triaging alerts, and even initiating mitigation workflows. They’re reducing human toil, accelerating response times, and operating at a scale traditional SOCs could never reach.

But with that power comes an equally large responsibility. Agents operate with autonomy, and autonomy without oversight can quickly become a liability. We’re entering a world where misconfigurations, hallucinated outputs, or compromised instructions don’t just result in false positives—they can lead to real actions with real consequences.

New questions are emerging fast:

  • How do we define trust boundaries between agents?
  • How do we ensure decisions are observable, reversible, and safe?
  • What guardrails prevent an agent from going rogue—or being tricked into doing so?

This introduces risk patterns we’ve never had to model before: cascading logic failures, context poisoning, or subtle prompt injection that results in irreversible actions.
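One early guardrail pattern addresses the questions above by classifying every action by reversibility and defaulting to human sign-off for anything that can't be undone. The action names and policy table below are purely illustrative.

```python
from enum import Enum, auto

class Reversibility(Enum):
    SAFE = auto()          # read-only: query logs, fetch threat intel
    REVERSIBLE = auto()    # undoable: quarantine a host, disable a key
    IRREVERSIBLE = auto()  # not undoable: delete data, rotate prod secrets

POLICY = {
    "query_logs": Reversibility.SAFE,
    "quarantine_host": Reversibility.REVERSIBLE,
    "delete_volume": Reversibility.IRREVERSIBLE,
}

def execute(action: str, run, request_approval) -> None:
    # unknown actions are treated as irreversible: a default-deny posture
    level = POLICY.get(action, Reversibility.IRREVERSIBLE)
    print(f"AUDIT about to run {action} ({level.name})")  # observable by design
    if level is Reversibility.IRREVERSIBLE and not request_approval(action):
        raise PermissionError(f"{action} requires human sign-off")
    run(action)
```

The default-deny stance matters most: an agent tricked into inventing a new action name gets the strictest treatment, not the loosest.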

5. Identity and Access for Non-Human Actors Is the New IAM Frontier

The rise of autonomous agents naturally leads to one of the most urgent and complex challenges facing security teams today: identity and access management in a world where machines—and not humans—are making decisions, initiating actions, and accessing systems. Traditional IAM frameworks were designed around human users, roles, and credentials. But in today’s agent-driven environments, that model no longer holds.

Agents don’t log in. They don’t follow ticket-based workflows. They operate continuously, interact with APIs, and communicate with other agents and services in real time—often across organizational and trust boundaries. This shift introduces new threats: agents impersonating one another, leaking credentials through poisoned memory states, or escalating privileges via indirect prompt manipulation. It’s not just a theoretical risk—it’s a structural one.

And this risk doesn’t stay confined to one layer. It cuts across every domain—application security, cloud infrastructure, SaaS integrations, supply chains. If an agent has the ability to spin up infrastructure, modify access controls, or push code, a single compromised identity could have a blast radius far beyond what we’re used to in human IAM breaches.
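What might IAM for non-human actors look like instead? One direction, sketched below under hypothetical names (CredentialBroker, Grant), is brokered, task-scoped credentials: every grant is short-lived, bound to one narrow scope, and logged at issuance, so there is no long-lived API key for a poisoned or compromised agent to leak.

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class Grant:
    agent_id: str
    scope: str            # one narrow capability, e.g. "repo:read", never "admin:*"
    expires_at: float
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

class CredentialBroker:
    def __init__(self) -> None:
        self._live: dict[str, Grant] = {}

    def grant(self, agent_id: str, scope: str, ttl_s: int = 120) -> Grant:
        g = Grant(agent_id, scope, time.time() + ttl_s)
        self._live[g.token] = g
        print(f"AUDIT issued {scope} to {agent_id} for {ttl_s}s")  # every grant logged
        return g

    def check(self, token: str, scope: str) -> bool:
        g = self._live.get(token)
        return bool(g and g.scope == scope and g.expires_at > time.time())
```

Short TTLs and single-scope grants shrink the blast radius: a leaked token expires in minutes and opens exactly one door.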

Security at the Speed of Intelligence

The shift to AI-native infrastructure isn’t incremental—it’s foundational. From remediation to identity, from runtime behavior to inter-agent communication, every layer of the security stack is being rewritten in real time. The questions surfacing now—about trust, autonomy, access, and resilience—aren’t edge cases. They’re the new baseline.

As AI agents and automation reshape how software is built, deployed, and defended, security teams are being asked to rethink everything—from the tools they use to the principles they rely on. It’s not just about keeping up with change—it’s about securing the systems that drive change.

At Root, we’re not just watching this shift—we’re building for it. We believe container security is the foundation of modern software velocity, and that automation isn’t a shortcut—it’s the only way to keep security aligned with how fast the world now moves.

We’ll be on the ground at RSAC next week, meeting with security leaders, engineers, and innovators who are navigating this new era. If you’re thinking about what comes next—how to secure agents, automate remediation, or re-architect your security posture for an AI-driven world—let’s talk.

Come find us. We’re ready for what’s next.
