<?xml version='1.0' encoding='utf-8' ?>
<!-- Made with love by pretalx v2024.3.1. -->
<schedule>
    <generator name="pretalx" version="2024.3.1" />
    <version>0.7</version>
    <conference>
        <title>Cloud Native Rejekts NA (Atlanta) 2025</title>
        <acronym>cloud-native-rejekts-atlanta-na-atlanta-2025</acronym>
        <start>2025-11-08</start>
        <end>2025-11-08</end>
        <days>1</days>
        <timeslot_duration>00:05</timeslot_duration>
        <base_url>https://cfp.cloud-native.rejekts.io</base_url>
        <logo>https://cfp.cloud-native.rejekts.io/media/cloud-native-rejekts-atlanta-na-atlanta-2025/img/CNR_Logo_Default_Stacked_maMQPVR.png</logo>
        <time_zone_name>US/Eastern</time_zone_name>
        
        
    </conference>
    <day index='1' date='2025-11-08' start='2025-11-08T04:00:00-05:00' end='2025-11-09T03:59:00-05:00'>
        <room name='Crystal Dining Room' guid='42cacdf4-4e44-50a1-9e66-199f1fb3f6c7'>
            <event guid='0859339e-9e73-5417-a952-e39eb65f8a20' id='1550'>
                <room>Crystal Dining Room</room>
                <title>Why Is CI still Doing Your Promotions?</title>
                <subtitle></subtitle>
                <type>Talk</type>
                <date>2025-11-08T10:10:00-05:00</date>
                <start>10:10</start>
                <duration>00:30</duration>
<abstract>As teams move from traditional CI/CD pipelines to GitOps tools like Argo CD, they often hit a common roadblock: how do you manage promotions across dev, staging, and production? Argo CD and Flux leave a gap when it comes to multi-stage promotions. What used to be a simple approval click now involves juggling image tags, config changes, and pull requests across multiple repos. This shift creates confusion, adds manual steps, and breaks the developer workflow.

Argo&apos;s &apos;GitOps Promoter&apos; offers a promising approach to this problem, but it is still experimental, limiting its readiness for production. Enterprise solutions offer robust features but come with licensing costs, which can be a barrier for teams.

In this talk, we&#8217;ll explore Kargo, a Kubernetes-native OSS tool for automating multi-stage promotions, and compare it with GitOps Promoter. We&#8217;ll walk through their design choices, strengths, and tradeoffs with a live demo so users can see how each tool handles this and choose the approach that best fits their GitOps workflow, without ever relying on custom scripts.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1550-why-is-ci-still-doing-your-promotions-</slug>
                <track></track>
                
                <persons>
                    <person id='714'>Nitish Kumar</person><person id='717'>Faeka Ansari</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/MRPMHL/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/MRPMHL/feedback/</feedback_url>
            </event>
            <event guid='31c4ae78-183c-5470-a1dc-8bed6f2ff8e8' id='1580'>
                <room>Crystal Dining Room</room>
                <title>The Paranoid&apos;s Guide to Deploying Skynet&apos;s Interns</title>
                <subtitle></subtitle>
                <type>Talk</type>
                <date>2025-11-08T11:10:00-05:00</date>
                <start>11:10</start>
                <duration>00:30</duration>
                <abstract># The Paranoid&apos;s Guide to Deploying Skynet&apos;s Interns

So, you&apos;ve built an AI Agent. Congratulations! It&apos;s brilliant, autonomous, and probably a little bit terrifying. While we&apos;re all racing to build the next generation of intelligent applications, we&apos;re bolting them onto deployment architectures that treat them like any other legacy system or, worse, blindly deploying them without a plan. This is a mistake, and it&apos;s going to get weird.

This talk presents a reference deployment architecture for AI Agent applications, starting with a quick primer on their core components: the **Agents**, the **MCP servers**, the **Tools** they access, and the **Memory** that gives them context. Then, we dive into the deep end of the security nightmare they represent.

We&apos;ll explore the messy reality of modern AI deployments:

- **A Tangled Web of Trust:** Agents and MCPs are exposed to a chaotic mix of tools and services with wildly different levels of trust. How do you keep your high-security internal tool from being manipulated by an agent that just scraped a questionable Reddit thread?

- **Persistent Threats:** The very nature of an Agent&apos;s memory means that attacks and threats can persist and evolve across sessions. A vulnerability exploited today could be a weapon wielded by the agent tomorrow.

- **Amplified Supply Chain Risks:** Autonomous AI actions turn opaque, previously inaccessible components into active parts of your supply chain. This dramatically increases the attack surface, making vulnerabilities that were once theoretical suddenly very exploitable.

- **Compounding Complexity:** The introduction of multi-agent communication protocols and centralized MCP servers adds layers of complexity that can obscure risk and reduce control when you need it most.

The core of this talk is a simple, radical recommendation: **true, paranoid, and unapologetic isolation at every level of the AI Agent application stack.** We&apos;ll argue that AI components are dynamic, untrusted supply chains and must be handled with the same (if not more) scrutiny as any other production system.

You will leave this session understanding why segmentation of components by trust level isn&apos;t just a good idea, but absolutely vital. We&apos;ll show you why you need *more* control over your MCP servers, not less, and provide a practical, defense-in-depth architecture for deploying AI Agents that won&apos;t turn on you.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1580-the-paranoid-s-guide-to-deploying-skynet-s-interns</slug>
                <track></track>
                
                <persons>
                    <person id='1310'>Dan Fernandez</person>
                </persons>
                <language>en</language>
                <description>While we&apos;re all racing to build the next generation of intelligent applications, we&apos;re bolting them onto deployment architectures that treat them like any other legacy system or, worse, blindly deploying them without a plan. This is a mistake, and it&apos;s going to get weird. You will leave this session understanding why segmentation of components by trust level isn&apos;t just a good idea, but absolutely vital.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/ZCZSMW/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/ZCZSMW/feedback/</feedback_url>
            </event>
            <event guid='ec7ff63e-93d8-5d65-8f20-b4dd65947d23' id='1508'>
                <room>Crystal Dining Room</room>
                <title>How to Use an AI Assistant with Your Monitoring System</title>
                <subtitle></subtitle>
                <type>Talk</type>
                <date>2025-11-08T11:45:00-05:00</date>
                <start>11:45</start>
                <duration>00:30</duration>
                <abstract>As observability systems grow more complex, the cognitive load on users increases quickly. This talk presents an approach that could be a game-changer: using AI assistants as intelligent interfaces to your observability stack. By implementing and using MCP (Model Context Protocol) servers, we can transform how observability users interact with metrics, logs, and traces. You will see how teams can query their stack in plain English, using natural language to explore data, debug issues, and even work with configurations.
The session covers both theoretical foundations and practical implementation. It demonstrates how you can integrate AI assistants directly into your day-to-day workflows and provides a comprehensive walkthrough of:
- MCP architecture and how it enables LLMs (Large Language Models) to execute observability tasks
- Setting up and configuring MCP servers (demonstrated with VictoriaMetrics) and integration with popular AI assistants
- Current and planned features of VictoriaMetrics MCP Server
- Real-world use cases: data exploration, query explanation, working with alerting rules, cardinality analysis, intelligent debugging, obtaining context-rich answers to your questions, etc.
- Various tips on how to make AI assistants work better with the observability stack
Whether you&apos;re an SRE looking to reduce toil, a platform engineer seeking to democratize monitoring access, or a leader evaluating AI&apos;s role in operations, this talk provides practical insights and tools for possible transformation of your observability practice.
This approach doesn&apos;t replace monitoring expertise at the moment &#8212; it amplifies it, making expert knowledge accessible to entire teams and giving you a powerful teammate in the form of an AI assistant.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1508-how-to-use-an-ai-assistant-with-your-monitoring-system</slug>
                <track></track>
                
                <persons>
                    <person id='1007'>Mathias Palmersheim</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/PHPK7Y/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/PHPK7Y/feedback/</feedback_url>
            </event>
            <event guid='d168015c-bf3b-51b1-83e1-bd798602ef67' id='1588'>
                <room>Crystal Dining Room</room>
                <title>Building Trust in Every Artifact with SBOMs</title>
                <subtitle></subtitle>
                <type>Talk</type>
                <date>2025-11-08T14:00:00-05:00</date>
                <start>14:00</start>
                <duration>00:30</duration>
                <abstract>Software Bill of Materials (SBOMs) are no longer a nice-to-have; they&apos;re quickly becoming table stakes for secure software delivery. But generating SBOMs is just the start. How do you manage them at scale across thousands of artifacts, teams, and environments? How do you ensure they&#8217;re accurate, tamper-proof, and usable in real-world pipelines?

We will walk attendees through integrating SBOM generation, storage, and validation into a modern CI/CD workflow using cloud-native tooling.

- Best practices for generating SBOMs for containers
- Securely storing and indexing SBOMs alongside your artifacts
- Validating artifacts against SBOM data before deployment
- Using SBOMs in incident response, compliance, and auditing

The session will provide attendees a clear roadmap to make SBOMs a first-class citizen in their pipelines, along with a real-world example of how Cloudsmith integrates CNCF projects like Trivy with OSS projects like CycloneDX, Syft, and Grype for automated SBOM generation.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1588-building-trust-in-every-artifact-with-sboms</slug>
                <track></track>
                
                <persons>
                    <person id='1093'>Nigel Douglas</person><person id='1332'>Esteban Garcia</person>
                </persons>
                <language>en</language>
                <description>The talk will provide clear, actionable guidance for integrating SBOMs into real-world pipelines using cloud-native tooling, specifically Cosign, Kyverno, and Kubewarden.

As the cloud-native ecosystem continues to mature, supply chain security is becoming a critical concern - not just for security teams, but for developers and platform engineers as well.

By sharing practical techniques for generating, storing, and validating SBOMs, we would like to:

- Help teams improve the security and transparency of their build and release processes
- Encourage adoption of open standards like SPDX, CycloneDX, and in-toto
- Stress why Trivy has become a standard for vulnerability scanning in cloud-native environments
- Highlight the value of OCI-native approaches to artifact metadata, promoting registry-driven workflows
- Build an understanding of how DevOps and DevSecOps teams can respond more quickly and confidently to emerging threats
- Empower organizations to meet growing compliance demands without slowing down software delivery

SBOMs are a powerful emerging tool that development teams do not yet use to its full potential. We want to help expand their usage.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/YQGRTY/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/YQGRTY/feedback/</feedback_url>
            </event>
            <event guid='1371d998-6b23-514f-ad53-d9bdf58297da' id='1459'>
                <room>Crystal Dining Room</room>
                <title>Expanding Your Toolbox: Beginners Guide to Controlling Kubernetes Logs</title>
                <subtitle></subtitle>
                <type>Talk</type>
                <date>2025-11-08T14:35:00-05:00</date>
                <start>14:35</start>
                <duration>00:30</duration>
                <abstract>Drowning in logs from your Kubernetes clusters? Struggling to scale observability without overwhelming your telemetry systems? You&apos;re not alone&#8212;and there&apos;s a better way. In this talk, you&#8217;ll learn how to efficiently manage and streamline logging data from source to destination using telemetry pipelines.
We&#8217;ll walk through the key stages of a modern telemetry pipeline&#8212;collection, parsing, filtering, routing, and forwarding&#8212;demonstrating how to build powerful, flexible pipelines that can handle logs from any source to any destination. Along the way, you&#8217;ll see a live demo in a real Kubernetes environment, where we&#8217;ll deploy your first telemetry pipeline tailored to a real-world use case.
Whether you&apos;re debugging production issues, operating multi-tenant clusters, or just trying to cut through the noise, this session will give you the tools and patterns you need to simplify and scale log collection. Plus, you&#8217;ll get access to a self-paced, hands-on workshop to continue exploring after the session: o11y-workshops.gitlab.io/workshop-fluentbit.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1459-expanding-your-toolbox-beginners-guide-to-controlling-kubernetes-logs</slug>
                <track></track>
                
                <persons>
                    <person id='822'>Eric D. Schabell</person>
                </persons>
                <language>en</language>
                <description>This talk includes a 15-minute live demo showcasing key integration phases:

1. Deploying a real workload: Start with a Kubernetes cluster running a real application (a CMS) that generates log data.

2. Installing a telemetry pipeline: Deploy a telemetry pipeline to the cluster to begin streaming logs from all containers.

3. Streaming to an output: Route collected logs to an external destination for analysis or storage.

4. Optimizing log volume: Refine the pipeline to filter out noisy or unnecessary logs&#8212;reducing telemetry costs and improving signal-to-noise ratio.

5. Filtering logs: Enrich logs with metadata, isolate error-level telemetry, and ensure only the necessary logs (e.g., error logs from the CMS) are exported securely from the cluster.

Demo source: gitlab.com/o11y-workshops/logs-control-easy-install</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/BSVWYC/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/BSVWYC/feedback/</feedback_url>
            </event>
            <event guid='3cc28c8c-d2ed-5f76-b0bc-3ea2597b3cbe' id='1590'>
                <room>Crystal Dining Room</room>
                <title>VM-Class Secure, Millisecond-Fast Cloud-Native Apps With Hyperlight + Nanvix</title>
                <subtitle></subtitle>
                <type>Talk</type>
                <date>2025-11-08T15:10:00-05:00</date>
                <start>15:10</start>
                <duration>00:30</duration>
                <abstract>Kubernetes enables teams to deploy almost any workload without modification, but its boundaries are still defined by namespaces and cgroups. The presence of seven container-escape CVEs from 2022 to 2024 shows these boundaries can be breached. Full VMs or Kata Containers can restore security but suffer from multi-second cold starts and high memory usage, impacting latency-sensitive or densely packed clusters.
In this talk, we will explore a middle ground with Hyperlight, a CNCF virtual-machine monitor that boots micro-VMs, and Nanvix, an open-source Rust microkernel designed to keep guests small yet compatible. This combination allows unmodified Rust, Python, and Wasm services to start up in tens of milliseconds while maintaining VM-class isolation.
We will delve into the architecture, present head-to-head benchmarks, and conduct a live demo. By the end of the session, you will have a clear understanding of the trade-offs and a checklist for implementing micro-VM isolation.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1590-vm-class-secure-millisecond-fast-cloud-native-apps-with-hyperlight-nanvix</slug>
                <track></track>
                
                <persons>
                    <person id='546'>Danilo (Dan) Chiarlone</person><person id='1313'>Pedro Henrique Penna</person>
                </persons>
                <language>en</language>
                <description>&apos;Benefits to the ecosystem&apos; section of our KubeCon submission:
The integration of Hyperlight and Nanvix brings significant benefits to the cloud-native ecosystem by enabling applications to run with strong isolation in a virtualized sandbox environment, while simultaneously enhancing performance and workload density. This combination leverages the lightweight, Rust-based microkernel architecture of Nanvix and the fast, open-source VMM capabilities of Hyperlight, a CNCF project, to reduce cold start times and maintain language-level compatibility. Notably, Hyperlight+Nanvix can boot up apps in tens of milliseconds, providing rapid responsiveness for cloud-native services. Currently, Hyperlight+Nanvix supports popular programming languages such as Rust, Python, and Wasm, facilitating the acceleration of cloud-native deployments. Future plans include expanding support to additional languages like JavaScript and Go, as well as deeper integration with Kubernetes. This architecture not only improves resource efficiency but also unlocks new possibilities for container isolation through containerd shims, making it a versatile and forward-looking solution for modern cloud-native applications. Attendees of this talk will walk away with concrete insights and architectural guidance on how to speed up their cloud-native applications with Hyperlight+Nanvix.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/3JKGF7/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/3JKGF7/feedback/</feedback_url>
            </event>
            <event guid='c6775a95-7a5d-5a7d-8401-761d22b45e3f' id='1544'>
                <room>Crystal Dining Room</room>
                <title>Beyond the Default Scheduler: Navigating GPU Multitenancy in the AI Era</title>
                <subtitle></subtitle>
                <type>Talk</type>
                <date>2025-11-08T16:00:00-05:00</date>
                <start>16:00</start>
                <duration>00:30</duration>
                <abstract>GPU multitenancy in Kubernetes faces significant security challenges when deploying AI workloads on shared infrastructure. Time slicing enables GPU sharing but lacks hardware isolation, risking exposure of sensitive data. NVIDIA Multi-Instance GPU (MIG) provides true hardware isolation with dedicated compute cores, memory slices, and L2 cache partitions, ensuring consistent performance and strict QoS guarantees.

Since the default Kubernetes scheduler cannot partition GPU resources for workloads the way it can CPUs, advanced schedulers such as KAI, Volcano, and Kueue can serve as the scheduler for your workloads. They improve GPU sharing through hierarchical queues for secure multi-tenant environments. This talk demonstrates how combining isolation in multi-tenant setups with intelligent scheduling results in optimal utilization, fair resource distribution, and robust security boundaries, guiding the transition from default to GPU-aware scheduling solutions for scalable AI infrastructure.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1544-beyond-the-default-scheduler-navigating-gpu-multitenancy-in-the-ai-era</slug>
                <track></track>
                
                <persons>
                    <person id='506'>Shivay Lamba</person><person id='980'>Hrittik Roy</person><person id='1138'>Saiyam Pathak</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/GUBJVM/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/GUBJVM/feedback/</feedback_url>
            </event>
            
        </room>
        <room name='Theater' guid='526fde95-1593-521b-bf45-d23b7cdbf4ed'>
            <event guid='0a5bf929-664c-5e35-8791-3f3e5b06c5a9' id='1601'>
                <room>Theater</room>
                <title>Welcome to Cloud Native Rejekts NA 2025!</title>
                <subtitle></subtitle>
                <type>Opening/Sponsor Keynote Speech</type>
                <date>2025-11-08T09:30:00-05:00</date>
                <start>09:30</start>
                <duration>00:10</duration>
                <abstract>Opening remarks</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1601-welcome-to-cloud-native-rejekts-na-2025-</slug>
                <track></track>
                
                <persons>
                    <person id='203'>Ralph Squillace</person><person id='1324'>Jaiveer Katariya</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/VN9YBE/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/VN9YBE/feedback/</feedback_url>
            </event>
            <event guid='90a00bd7-5b8e-5902-9d24-8c48de8355d8' id='1462'>
                <room>Theater</room>
                <title>Catch Me If You Can: A Kubernetes Escape Story</title>
                <subtitle></subtitle>
                <type>Talk</type>
                <date>2025-11-08T09:40:00-05:00</date>
                <start>09:40</start>
                <duration>00:30</duration>
                <abstract>Leonardo DiCaprio made it look glamorous, but real-world container escapes are less Hollywood and more chaotic. Still, the parallels are striking. Like Frank Abagnale slipping past the guards at an Atlanta prison, modern attackers escape containers not with brute force but with clever misdirection: exploiting weak isolation, abusing misconfigured permissions, and sidestepping detection.

In this talk, we&#8217;ll trace the path of a container breakout&#8212;from the initial escape to lateral movement across a Kubernetes cluster. We&#8217;ll walk through the attack step by step (yep, there&#8217;s a demo), then flip the perspective to show how modern defenses shut it down.

We&#8217;ll cover:
- How container escapes actually happen in the wild
- What user namespaces in Kubernetes 1.33 bring to the table
- How to achieve multi-tenancy workload isolation
- How to detect breakout attempts before they go full clusterf*ck

Whether you&apos;re a platform engineer, security lead, or just into a good cat-and-mouse chase through the control plane, you&#8217;ll leave with real-world tactics for keeping your cluster escape-proof.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1462-catch-me-if-you-can-a-kubernetes-escape-story</slug>
                <track></track>
                
                <persons>
                    <person id='1179'>Jed Salazar</person><person id='1330'>James Petersen</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/MEJ3TC/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/MEJ3TC/feedback/</feedback_url>
            </event>
            <event guid='4fec0961-b1a9-593b-890d-bab51790127f' id='1512'>
                <room>Theater</room>
                <title>Migrating Bloomberg&apos;s Internal Private Cloud From NGINX to the World of Istio</title>
                <subtitle></subtitle>
                <type>Talk</type>
                <date>2025-11-08T10:10:00-05:00</date>
                <start>10:10</start>
                <duration>00:30</duration>
                <abstract>Migrating from a traditional ingress controller to a service mesh-based solution in a live environment with thousands of internal users presents significant challenges. In this session, we share Bloomberg&apos;s experience transitioning from NGINX to Istio as the ingress layer for our internal private cloud platform&#8212;a managed service supporting application deployments across the firm. We explore the motivations behind this shift, the architectural and operational changes implemented, and the hurdles encountered during the migration process.

Our journey offers practical insights into planning and executing such a migration with minimal disruption, while also highlighting the new capabilities unlocked through Istio. Attendees will benefit from our lessons learned, best practices, and retrospective advice aimed at helping other engineering teams undertake similar transitions with greater confidence and fewer surprises.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1512-migrating-bloomberg-s-internal-private-cloud-from-nginx-to-the-world-of-istio</slug>
                <track></track>
                
                <persons>
                    <person id='1284'>Kavya Elchuri</person><person id='1326'>Sahil Thandra</person>
                </persons>
                <language>en</language>
                <description>Migrating from a traditional ingress controller in a live environment with thousands of internal users can be challenging &#8212; but it doesn&#8217;t have to be. In this session, we&#8217;ll share our journey transitioning from NGINX to Istio as an ingress solution for Bloomberg&apos;s internal private cloud platform that is provided to the firm&apos;s engineers as a managed service for application deployment. We&#8217;ll discuss what motivated the change, the architectural and operational adjustments we made, the challenges we faced during the migration, and the benefits we achieved post-deployment. Attendees will gain practical insights and best practices for adopting Istio in production environments, including how to plan and execute a migration with minimal disruption &#8212; while unlocking new capabilities.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/8LRE7M/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/8LRE7M/feedback/</feedback_url>
            </event>
            <event guid='0f04fe42-7d25-5845-a0ca-793193fec154' id='1554'>
                <room>Theater</room>
                <title>Ease The Move From DevOps to MLOps: A Case For ModelSpec + KitOps</title>
                <subtitle></subtitle>
                <type>Talk</type>
                <date>2025-11-08T11:10:00-05:00</date>
                <start>11:10</start>
                <duration>00:30</duration>
                <abstract>What does operational overhead look like in the era of MLOps? If you&apos;re grappling with this question, like many others, and would like a way to apply the paradigm of containers and cloud native to AI workloads &#8212; you&apos;re in luck.
There is an effort underway to align AI workloads with the knowledge we have of operational excellence in cloud native. The CNCF Sandbox project ModelSpec brings much-needed clarity to MLOps workflows. It provides the right abstraction for defining how DevOps and cloud native practices can be applied to machine learning operations.
Applying the ModelSpec is the KitOps tool. It helps bridge the gaps that currently exist in the tooling space for MLOps. It creates a &quot;Docker&quot;-like interface for AI workloads and makes it easy and efficient to work with models on Kubernetes (or other container runtimes).
In this talk, I aim to bring together the operational overhead of ML, how cloud native paradigms can help, the ModelSpec, and KitOps. Together, these will expose an important pain point in productionizing AI in the workplace. Let&apos;s eliminate the disconnected ways in which data teams, developers, and operations folks are working by using the principles highlighted during this talk.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1554-ease-the-move-from-devops-to-mlops-a-case-for-modelspec-kitops</slug>
                <track></track>
                
                <persons>
                    <person id='733'>Ram Iyengar</person>
                </persons>
                <language>en</language>
                <description>- Understand MLOps Challenges: Learn to identify the key operational hurdles in deploying and managing AI/ML models.
- Discover a New Solution: Get introduced to ModelSpec and KitOps, a practical framework for streamlining MLOps.
- Apply DevOps Principles to AI: Find out how to use familiar cloud-native concepts to manage AI workloads efficiently.
- Improve Team Collaboration: Learn how to bridge the gap between your data science and operations teams with a unified workflow.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/QKCFNE/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/QKCFNE/feedback/</feedback_url>
            </event>
            <event guid='b43419ca-4d5c-5343-a09b-39926682af8a' id='1581'>
                <room>Theater</room>
                <title>Make your Developer&apos;s Pains go Away, with the Right Level of Abstraction for your Platform</title>
                <subtitle></subtitle>
                <type>Talk</type>
                <date>2025-11-08T11:45:00-05:00</date>
                <start>11:45</start>
                <duration>00:30</duration>
                <abstract>Developers don&#8217;t code eight hours a day. They code one &#8212; and fight with TicketOps, Infrastructure dependencies and Security blockers the rest of the time. Many platform teams build Internal Developer Platforms (IDPs) to help, but poor abstraction choices make things worse. In this talk, we&#8217;ll share a battle-tested approach to building the right level of abstraction on top of Kubernetes using Score and Kro.

You&#8217;ll learn how to go beyond templating, reduce cognitive load, and deliver a developer experience that people actually want to use. We&#8217;ll demo how developers can deploy secure, production-grade workloads by just focusing on their applications to bring value to their end users &#8212; while the platform handles the hard parts behind the scenes.

This talk isn&#8217;t about Kubernetes and GitOps. It&#8217;s about empathy. It&#8217;s about platforms people adopt, not abandon.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1581-make-your-developer-s-pains-go-away-with-the-right-level-of-abstraction-for-your-platform</slug>
                <track></track>
                
                <persons>
                    <person id='1089'>Artem Lajko</person><person id='1041'>Mathieu Benoit</person>
                </persons>
                <language>en</language>
                <description>Many platform teams build Internal Developer Platforms (IDPs) to make developers&#8217; lives easier. But the wrong abstraction choices often have the opposite effect: developers get stuck wrestling with endless YAML files, manually navigating security checklists, and troubleshooting infrastructure instead of writing code.

In this session, we&#8217;ll share a proven approach to avoiding that trap &#8212; choosing the right level of abstraction for your Kubernetes-based platform. Using the open source tools Score and Kro, we&#8217;ll walk through a live demo showing how to fully automate complex infrastructure and security requirements behind the scenes, so developers never have to think about them.

You&#8217;ll learn how to:

- Minimize cognitive load and boost the &#8220;developer joy&#8221; factor.
- Combine GitOps workflows with platform automation for smooth deployments.
- Enable production-grade deployments with minimal developer effort.

The goal: platforms that teams actually want to use &#8212; because they solve real problems instead of creating new ones.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/X3LUL3/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/X3LUL3/feedback/</feedback_url>
            </event>
            <event guid='b2f960a6-8250-5078-ad53-aa3b9763aa52' id='1499'>
                <room>Theater</room>
                <title>Building Kubernetes AI Agents at Scale: Generating Synthetic Training Data for Autonomous Operation</title>
                <subtitle></subtitle>
                <type>Talk</type>
                <date>2025-11-08T14:00:00-05:00</date>
                <start>14:00</start>
                <duration>00:30</duration>
                <abstract>AI agents for Kubernetes automation fail because they&apos;re trained on unrealistic, simplified scenarios. Unfortunately, there is a dearth of such training data available, as most companies are reticent to publicly share cluster operations data. Moreover, even existing data from, e.g., Google or Alibaba, is not representative of usage patterns seen in smaller organizations. In this talk, we will demonstrate how to use a small &#8220;seed&#8221; of real, production data from existing Kubernetes clusters to generate a large set of representative, synthetic training data for Kubernetes AI agents. We use graph-theoretic and statistical methods to generate a diverse set of training data covering failure modes, scaling events, resource contention problems, and other common scenarios found in production systems. These techniques, based on research from a team at Harvey Mudd College, allow AI Kubernetes Agents to be trained on high-quality data that is tailored to your company&#8217;s production infrastructure.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1499-building-kubernetes-ai-agents-at-scale-generating-synthetic-training-data-for-autonomous-operation</slug>
                <track></track>
                
                <persons>
                    <person id='984'>David Morrison</person>
                </persons>
                <language>en</language>
                <description>The modern AI industry is evolving at a break-neck pace; as new techniques and models become available, the rapid (re)-training of AI agents is critical for companies to remain competitive.  Moreover, doing this training in a cost-effective manner provides these companies with a longer runway to get their products to market.  Lastly, while AI agents trained to manage Kubernetes can often solve simple problems on small clusters, they have thus far failed to work in large, general-purpose clusters like those seen in many companies&#8217; production infrastructure.

In this talk, users will learn how they can build a custom, personalized set of training data for AI Kubernetes agents, based on a relatively small amount of initial data.  This capability will enable them to stay competitive in a rapidly-changing ecosystem, while keeping costs under control.  We will also provide users with an easy-to-use &#8220;sandbox&#8221; training environment where agents can interact with the Kubernetes API and observe the effects of these interactions on the training data.

This work was done in collaboration with a team of researchers at Harvey Mudd College, and will additionally benefit the ecosystem by facilitating the flow of knowledge from the academic community into industry.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/EBZREM/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/EBZREM/feedback/</feedback_url>
            </event>
            <event guid='b664a120-2844-5fbe-9a6a-a877f29c49e5' id='1463'>
                <room>Theater</room>
                <title>Brewed for Scale: How Homebrew Virtualized macOS DevOps with Kubernetes</title>
                <subtitle></subtitle>
                <type>Talk</type>
                <date>2025-11-08T14:35:00-05:00</date>
                <start>14:35</start>
                <duration>00:30</duration>
                <abstract>Many FOSS project maintainers are operating extensive CI systems to ensure quality, stability, and rapid delivery of their software. Homebrew, the package manager beloved by macOS developers, is one such project. In this session, we&#8217;ll dive into the evolution of Homebrew&#8217;s CI pipelines for pull request validations, integration testing, and full regression tests for releases.

Each tier of CI and test automation comes with its own unique challenges. With a variety of pull requests coming in across the Homebrew and Workbrew repositories, CI pipelines need to be fast and efficient. While a pull request may look simple on the surface, complexity often arises in the testing phase, as a modification may need to be tested against everything that runs on a particular package. We&#8217;ll explore how Homebrew balances scalability and reliability across its CI landscape by utilizing open source virtualization and orchestration technology tailored to developers on macOS.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1463-brewed-for-scale-how-homebrew-virtualized-macos-devops-with-kubernetes</slug>
                <track></track>
                
                <persons>
                    <person id='1265'>Rin Oliver</person><person id='1266'>Brandon Valentine</person>
                </persons>
                <language>en</language>
                <description>Since 2019, Homebrew has used a macOS native orchestration and CI solution that brought scalability, virtualization, resource flexibility, and workload customization to its cloud environment. When you have the intensive demand Homebrew does &#8212; from downloading upstream source code to building binary packages to support multiple versions of macOS across both Intel and Apple Silicon architectures &#8212; the ability to scale and customize is critical.

This work benefits the entire macOS ecosystem, as faster CI cycles mean quicker delivery of up-to-date software to millions of Homebrew users. Optimized infrastructure gives contributors and maintainers of the project the power to iterate with confidence, regardless of their local development setup. By sharing these strategies, we aim to inspire other projects facing similar macOS CI challenges to adopt their own sustainable, scalable best practices.

This presentation will appeal to a variety of audiences including platform engineers, CI/CD architects, SREs, and FOSS maintainers who are interested in reproducible builds, improving or scaling macOS CI workloads, or designing pipelines that fit the needs of a high-volume, fast-paced open source project.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/X3YHRS/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/X3YHRS/feedback/</feedback_url>
            </event>
            <event guid='6e6c6923-85e2-5b5f-be43-cf5ab6105d21' id='1474'>
                <room>Theater</room>
                <title>In-SPIRE-ing Identity: Using SPIRE for verifiable container isolation</title>
                <subtitle></subtitle>
                <type>Talk</type>
                <date>2025-11-08T15:10:00-05:00</date>
                <start>15:10</start>
                <duration>00:30</duration>
                <abstract>Edera leveraged SPIRE for cryptographic attestation of a workload&#8217;s environment. We started with a question: how do we prove that workloads are running in an isolated environment? It turns out that this is very similar to the workload identity question already answered by SPIFFE/SPIRE. By integrating SPIRE, Edera&apos;s users are able to prove that workloads are running in a fully isolated Edera zone and get end-to-end encryption between these workloads, allowing for use cases like non-falsifiable build provenance and remote attestation.

In this talk, we will discuss workload identity and the SPIFFE specification, explaining how workload identity enabled us to build a hypervisor-based, verifiable identity system for isolated workloads. We will talk about lessons learned when deploying SPIRE, walk through some of our configuration choices, and give some tips to others looking to use this project.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1474-in-spire-ing-identity-using-spire-for-verifiable-container-isolation</slug>
                <track></track>
                
                <persons>
                    <person id='1272'>Marina Moore</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/8JVWUX/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/8JVWUX/feedback/</feedback_url>
            </event>
            <event guid='91eaf2f6-c8f6-5d2e-84c0-6f04c174d75b' id='1457'>
                <room>Theater</room>
                <title>Managing Millions of Storage Volumes at Scale: Inside DigitalOcean&apos;s ArgoCD Strategy</title>
                <subtitle></subtitle>
                <type>Talk</type>
                <date>2025-11-08T16:00:00-05:00</date>
                <start>16:00</start>
                <duration>00:30</duration>
                <abstract>When you&apos;re managing millions of storage volumes across 13 regions, traditional deployment approaches break down. At DigitalOcean, we transformed our Storage Platform operations using ArgoCD to bring sanity to complexity.

In this talk, we&apos;ll share how DigitalOcean&apos;s Storage Platform team turned our deployment process into a GitOps-powered engine using ArgoCD. We&apos;ll take you behind the scenes of operating our Storage Kubernetes platform, StorK8s, our storage orchestration platform that powers millions of volumes across DigitalOcean&apos;s global infrastructure.

You&#8217;ll learn:

1. How we architected a single ArgoCD instance to manage 13+ clusters across 13 regions while maintaining sub-5-minute deployment times.
2. Real-world canary and blue-green deployment patterns for stateful workloads.
3. Why centralised GitOps beats federation for our use case (and when you shouldn&apos;t follow our lead)

We&#8217;ll share what worked, what didn&#8217;t, and secret ingredients that helped us scale GitOps reliably.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1457-managing-millions-of-storage-volumes-at-scale-inside-digitalocean-s-argocd-strategy</slug>
                <track></track>
                
                <persons>
                    <person id='1038'>Yash Sharma</person><person id='1263'>Nikhil Pathak</person>
                </persons>
                <language>en</language>
                <description>This talk gives the community a real-world end-user story: a large-scale example of GitOps in action, using ArgoCD to manage complex, stateful workloads.

By sharing lessons learned from operating across 13+ regions, we&#8217;ll show how CNCF open source tools can handle high-stakes infrastructure without compromising velocity or safety, through the lens of how DigitalOcean&#8217;s Storage Platform, StorK8s, powers millions of volumes globally.

DigitalOcean has been running its storage platform, StorK8s, for the past 10+ years. In this talk, storage team senior engineer Nikhil and dev advocate Yash will share learnings with hands-on practical patterns, deployment strategies, and tooling ideas on GitHub that attendees can apply to their own environments, helping push GitOps adoption forward across the open source and CNCF ecosystem.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/H7EYBC/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/H7EYBC/feedback/</feedback_url>
            </event>
            <event guid='b3d05d9d-2161-5106-b561-6a09906021b5' id='1541'>
                <room>Theater</room>
                <title>Truly Portable Code: Serverless WebAssembly in a Distributed World</title>
                <subtitle></subtitle>
                <type>Talk</type>
                <date>2025-11-08T16:35:00-05:00</date>
                <start>16:35</start>
                <duration>00:30</duration>
                <abstract>What if you could build serverless applications that cold-start in under a millisecond, run anywhere&#8212;from your laptop to Kubernetes to the edge&#8212;and require no changes to move between environments? This talk introduces Spin, a CNCF open-source WebAssembly (Wasm) developer toolkit designed for performance, portability, and simplicity.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1541-truly-portable-code-serverless-webassembly-in-a-distributed-world</slug>
                <track></track>
                
                <persons>
                    <person id='778'>Caleb Schoepp</person>
                </persons>
                <language>en</language>
                <description>What if you could build serverless applications that cold-start in under a millisecond, run anywhere&#8212;from your laptop to Kubernetes to the edge&#8212;and require no changes to move between environments? This talk introduces Spin, a CNCF open-source WebAssembly (Wasm) developer toolkit designed for performance, portability, and simplicity.
Attendees will learn how to build a Spin app, write polyglot WebAssembly functions with sub-millisecond cold starts, and run them locally using the Spin CLI. The same app will then be deployed to Azure Kubernetes Service with SpinKube, the open-source Spin runtime for Kubernetes, and to Fermyon Wasm Functions, Akamai&#8217;s multi-tenant, globally distributed PaaS &#8212; all without rewriting or cross-compilation.
The talk shows how WebAssembly and Spin enable true portability across the compute continuum, letting developers build once and run anywhere with no vendor lock-in. This talk demonstrates how Spin is reshaping what serverless can be.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/YPWVLM/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/YPWVLM/feedback/</feedback_url>
            </event>
            <event guid='6f6a045e-285f-5fd3-8101-2e51e3f7bc4f' id='1565'>
                <room>Theater</room>
                <title>Beyond YAML: Building Platform APIs with kro</title>
                <subtitle></subtitle>
                <type>Lightning Talk</type>
                <date>2025-11-08T17:10:00-05:00</date>
                <start>17:10</start>
                <duration>00:05</duration>
                <abstract>Kube Resource Orchestrator (kro) has been steadily gaining traction as a Kubernetes-native way to build higher-level abstractions for platform engineering. Kro enables platform teams to create Platform APIs that bundle multiple Kubernetes and cloud resources into a single, self-service interface.

At its core, kro uses a ResourceGraphDefinition to define the components, their dependencies, and how they should be deployed. This eliminates sprawling YAML files, automates ordering, and lets application teams consume infrastructure without wrestling with raw Kubernetes manifests.

In this lightning talk, I&#8217;ll show:

1. What a Platform API built with kro looks like.
2. How it compares to tools like Crossplane compositions and Helm.
3. Where kro fits in your platform engineering roadmap.

In just 5 minutes, you&#8217;ll see how this approach can make your platform APIs higher-level&#8212;and your delivery pipelines faster.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1565-beyond-yaml-building-platform-apis-with-kro</slug>
                <track></track>
                
                <persons>
                    <person id='770'>Engin Diri</person>
                </persons>
                <language>en</language>
                <description>If you&#8217;re a platform engineer, DevOps practitioner, or Kubernetes enthusiast who&#8217;s tired of juggling endless YAML files, this lightning talk is for you. We&#8217;ll explore how Kube Resource Orchestrator (kro) turns complex deployments into clean, reusable Platform APIs that app teams can use without touching low-level manifests. In just 5 minutes, you&#8217;ll learn what kro is, how it works, and why it might be the missing abstraction in your platform engineering toolkit.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/M3TVS3/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/M3TVS3/feedback/</feedback_url>
            </event>
            <event guid='4c990071-b8a1-58a1-9b25-f7dfda0c6374' id='1563'>
                <room>Theater</room>
                <title>The Dragonfly 5-Minute Formula for Speedy Container Delivery</title>
                <subtitle></subtitle>
                <type>Lightning Talk</type>
                <date>2025-11-08T17:15:00-05:00</date>
                <start>17:15</start>
                <duration>00:05</duration>
                <abstract>Ever wondered how to speed up your Kubernetes container delivery and get results in the blink of an eye? Look no further! In this action-packed 5-minute session, you will experience the magic of Dragonfly, the ultimate tool for accelerating container delivery in Kubernetes that slashes delivery times, boosts efficiency and ensures lightning-fast container distribution across your infrastructure.

Whether you&apos;re looking to optimize deployment speed or just curious about how to supercharge your container workflow, this talk is for you.

This talk covers:
1. How to integrate Dragonfly with Kubernetes, ArgoCD, and other tools from the CNCF landscape for seamless container delivery.
2. The magic behind Dragonfly&#8217;s peer-to-peer container distribution.
3. Real-world examples of using Dragonfly to accelerate deployments in Kubernetes.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1563-the-dragonfly-5-minute-formula-for-speedy-container-delivery</slug>
                <track></track>
                
                <persons>
                    <person id='990'>Aditya Soni</person><person id='980'>Hrittik Roy</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/3FQ7NM/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/3FQ7NM/feedback/</feedback_url>
            </event>
            <event guid='f5d96082-6635-589e-93b9-1548aec687c2' id='1468'>
                <room>Theater</room>
                <title>From Messy to Modular: Fixing Multi-Tenant Ingress with Gateway API&apos;s XListenerSet</title>
                <subtitle></subtitle>
                <type>Lightning Talk</type>
                <date>2025-11-08T17:20:00-05:00</date>
                <start>17:20</start>
                <duration>00:05</duration>
                <abstract>Kubernetes Ingress doesn&#8217;t scale well in multi-tenant clusters, especially when teams need to share ports or protocols.

In this talk, I&#8217;ll show how the experimental XListenerSet in Gateway API solves that.
Using a real use case, I&#8217;ll walk through how it lets different teams define their own listeners safely, without stepping on each other.

If you&apos;re managing shared clusters and fighting with ingress conflicts, this is five minutes that could save you hours.</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1468-from-messy-to-modular-fixing-multi-tenant-ingress-with-gateway-api-s-xlistenerset</slug>
                <track></track>
                
                <persons>
                    <person id='1127'>Mengin Nicolas</person>
                </persons>
                <language>en</language>
                <description>Managing Ingress in multi-tenant Kubernetes clusters is complex and error-prone, especially when teams need to share ports or protocols.

The new XListenerSet extension in Gateway API provides a clean way to delegate listener configuration safely.

In this talk, I&#8217;ll present a real-world use case where XListenerSet resolved conflicts and improved isolation between teams.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/9RUPBC/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/9RUPBC/feedback/</feedback_url>
            </event>
            <event guid='c1602747-3300-55ea-ad3d-d275cdfe042b' id='1602'>
                <room>Theater</room>
                <title>More Lightning Talks - Signup sheet at registration</title>
                <subtitle></subtitle>
                <type>More Lightning Talks - Sign up sheet at registration (limited slots available)</type>
                <date>2025-11-08T17:25:00-05:00</date>
                <start>17:25</start>
                <duration>00:50</duration>
                <abstract>5-minute talks from attendees - sign-up sheet at registration (limited slots available)</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1602-more-lightning-talks-signup-sheet-at-registration</slug>
                <track></track>
                
                <persons>
                    
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/PQWWSW/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/PQWWSW/feedback/</feedback_url>
            </event>
            <event guid='69e64952-f397-50bb-9857-ab07ca31b274' id='1603'>
                <room>Theater</room>
                <title>Closing Keynote</title>
                <subtitle></subtitle>
                <type>Lightning Talk</type>
                <date>2025-11-08T18:15:00-05:00</date>
                <start>18:15</start>
                <duration>00:05</duration>
                <abstract>Closing out Rejekts NA 2025 in Atlanta, GA</abstract>
                <slug>cloud-native-rejekts-atlanta-na-atlanta-2025-1603-closing-keynote</slug>
                <track></track>
                
                <persons>
                    <person id='131'>Duffie Cooley</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/VS7TPL/</url>
                <feedback_url>https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-atlanta-na-atlanta-2025/talk/VS7TPL/feedback/</feedback_url>
            </event>
            
        </room>
        
    </day>
    
</schedule>
