2025-11-08, Room 1
The Paranoid's Guide to Deploying Skynet's Interns
So, you've built an AI Agent. Congratulations! It's brilliant, autonomous, and probably a little bit terrifying. While we're all racing to build the next generation of intelligent applications, we're bolting them onto deployment architectures that treat them like any other legacy system, or worse, deploying them blindly with no plan at all. This is a mistake, and it's going to get weird.
This talk presents a reference deployment architecture for AI Agent applications, starting with a quick primer on their core components: the Agents, the MCP servers, the Tools they access, and the Memory that gives them context. Then, we dive into the deep end of the security nightmare they represent.
We'll explore the messy reality of modern AI deployments:
- A Tangled Web of Trust: Agents and MCPs are exposed to a chaotic mix of tools and services with wildly different levels of trust. How do you keep your high-security internal tool from being manipulated by an agent that just scraped a questionable Reddit thread?
- Persistent Threats: The very nature of an Agent's memory means that attacks and threats can persist and evolve across sessions. A vulnerability exploited today could be a weapon wielded by the agent tomorrow.
- Amplified Supply Chain Risks: Autonomous AI actions turn opaque, previously inaccessible components into active parts of your supply chain. This dramatically increases the attack surface, making vulnerabilities that were once theoretical suddenly very exploitable.
- Compounding Complexity: The introduction of multi-agent communication protocols and centralized MCP servers adds layers of complexity that can obscure risk and reduce control when you need it most.
The core of this talk is a simple, radical recommendation: true, paranoid, and unapologetic isolation at every level of the AI Agent application stack. We'll argue that AI components are dynamic, untrusted supply chains and must be handled with the same scrutiny as any other production system, if not more.
You will leave this session understanding why segmentation of components by trust level isn't just a good idea, but absolutely vital. We'll show you why you need more control over your MCP servers, not less, and provide a practical, defense-in-depth architecture for deploying AI Agents that won't turn on you.
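To make the segmentation idea concrete, here is a minimal sketch of the kind of guard the talk advocates: a tool router that tags every tool and every payload with a trust level and refuses to let data from a low-trust source (say, a scraped web page) flow into a higher-trust tool. All names here are invented for illustration; this is not the speaker's implementation.

```python
from enum import IntEnum


class Trust(IntEnum):
    """Hypothetical trust tiers; higher value = more sensitive."""
    UNTRUSTED = 0   # e.g. scraped web content, public inputs
    INTERNAL = 1    # ordinary internal services
    SENSITIVE = 2   # high-security internal tools


class ToolRouter:
    """Toy policy: a payload may only reach tools at or below
    its own trust level, so untrusted data never drives a
    sensitive tool."""

    def __init__(self):
        self._tools = {}  # name -> (required trust, callable)

    def register(self, name, trust, fn):
        self._tools[name] = (trust, fn)

    def call(self, name, payload, payload_trust):
        required, fn = self._tools[name]
        if payload_trust < required:
            raise PermissionError(
                f"refusing to pass {payload_trust.name} data "
                f"to {name} (requires {required.name})")
        return fn(payload)


if __name__ == "__main__":
    router = ToolRouter()
    router.register("deploy", Trust.SENSITIVE, lambda p: f"deployed {p}")
    # This call is blocked: UNTRUSTED payload into a SENSITIVE tool.
    try:
        router.call("deploy", "scraped instructions", Trust.UNTRUSTED)
    except PermissionError as e:
        print(e)
```

In a real deployment the same idea would be enforced below the application layer (network segmentation, per-tool sandboxes) rather than trusted to in-process checks, which is exactly the defense-in-depth argument the session makes.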
Dan Fernández is a product leader with over 13 years of experience at the intersection of machine learning and cybersecurity. He is currently building the future of secure computing at Edera, a startup revolutionizing cloud and AI infrastructure. Dan has a proven track record of launching new products and innovative solutions at companies like Chainguard, CrowdStrike, DomainTools, and NICE Systems. He is also a part-time faculty member in Georgia Tech's Master's of Cybersecurity program. In his free time, he enjoys traveling and sampling street foods, as well as learning irrelevant food facts.