Doors and registration desk open to receive attendees.
We are here to welcome and provide information to all attendees.
I will demo how to use different BPF tools in the Kubernetes developer workflow. Then, I will explain how it works and what support it requires from the Kubernetes installation.
Have you ever been asked the question, “How do we make sure Kubernetes resources conform to our internal policies and procedures?” In this session we introduce how you can audit, validate, and mutate Kubernetes resources based on custom semantic rules during create, update, and delete operations, without recompiling or reconfiguring the Kubernetes API server, using Gatekeeper - a policy controller for Kubernetes.
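As a taste of what such a policy can look like, here is a minimal sketch of a Gatekeeper ConstraintTemplate that rejects resources missing a label. The template name, label key, and message are illustrative assumptions, not taken from the talk:

```yaml
# Hypothetical sketch: reject resources that lack a "team" label.
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels
        # A violation fires when the reviewed object has no "team" label.
        violation[{"msg": msg}] {
          not input.review.object.metadata.labels["team"]
          msg := "all resources must carry a 'team' label"
        }
```

A matching K8sRequiredLabels constraint then selects which kinds and namespaces the rule applies to, keeping policy separate from enforcement.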
I will give an overview of the architecture around Firecracker and container runtimes. Then I will show demos of a proof-of-concept implementation.
Because the kernel knows more than your programs.
I will describe a GitOps-based deployment workflow that makes your Kubernetes deployments secure, auditable, and simpler, as well as the process and tools you need to put it in place on any cloud.
Setting up a CI pipeline for a Kubernetes distribution environment can be a daunting - and possibly costly - task, especially when you need to run tests for every change in a distribution focused on high availability.
In this talk, I will explain how we built a CI system at Kinvolk based on Concourse CI, where we spin up multiple, parallel multi-node Kubernetes clusters in isolation on a single bare metal machine, in order to automatically run tests for bare-metal Kubernetes deployments. I will discuss the architectural and design choices, the tools that were used, the challenges we faced, and how we addressed them.
Observing Kubernetes clusters at scale can be challenging. While most companies operate a small number of Kubernetes clusters, Giant Swarm is responsible for hundreds. This scale makes maintaining a responsible level of observability harder.
We aim to present our observability journey, particularly with Prometheus.
This will cover our architectural choices in the past, such as building tooling for managing Prometheus for on-demand Kubernetes clusters, our current usage and drawbacks we’d like to address, and our plans for the future, such as horizontal scaling and Cortex.
We will also cover our continuous improvement process using post mortems and continuous delivery, which allows us to evolve our metrics, new exporters, and alerting as we discover blind spots.
This talk presents our learnings of handling observability at scale, with in-depth examples from our infrastructure.
Container registries store the images we build, secure, sign, geo-replicate, and deploy. They support production workloads, so we configure authentication for each service and user that must access them.
Join us as we share the work to extend the OCI distribution and image specs to support new artifact types such as Helm and CNAB. We'll demonstrate how you can author and store new artifacts, leveraging the investments in OCI-compliant registries and enabling you to focus on your apps and artifacts without having to manage the infrastructure.
Relative to cloud infrastructure, bare metal environments are more varied and do not expose a unified API. The Cluster API unifies infrastructure management with Kubernetes-native resources, but our previous attempts to use it in bare metal environments show that actuators alone have significant limitations. Kubernetes webhooks offer an alternative that separates the provisioning of infrastructure from the deployment of Kubernetes, while keeping the declarative model and common tooling provided by the Cluster API.
In this talk, we will explore our journey scaling from a couple of hundred nodes to several thousand Kubernetes nodes. Tales will be told on how to scale etcd itself and what a health check every 30 seconds does to the apiserver when running at several thousand nodes.
Doors and registration desk open to receive attendees.
Why should I use it?! Introducing new solutions, new technologies, or new processes can meet resistance among your developers, especially when it is a hyped technology such as Kubernetes.
Listen to what Jessica and her team learned while building Kubernetes as a Service for developers at Meltwater: how human challenges become technical choices, and how communication and education can achieve a high adoption rate.
If you're developing applications on top of Kubernetes, you may be feeling overwhelmed by the vast amount of developer tooling in the ecosystem at your disposal. Kubernetes is moving at a rapid pace, and it's becoming impossible to keep up with the latest and greatest development environments, debuggers, and build, test, and deployment tools.
In this talk, we'll share from our experience building applications on top of Kubernetes at Bitnami. We'll take a look at the landscape and answer questions like "should my team be using minikube or a shared Kubernetes environment?", "what's the difference between Skaffold, Draft and Telepresence?" and "should I be building an operator or a Helm chart?". We'll try to discern which tools best fit a scenario or workflow by looking at real-world examples of Kubernetes applications.
While running Kubernetes in production, how do you know what the cluster is doing? In this talk Federico will show you how he and his team are using kube-state-metrics in combination with other exporters and logs to get insights into the multi-tenant Kubernetes cluster they run for 40+ development teams at Meltwater. He will focus on metrics for the higher level Kubernetes objects as well as the cloud environment they run the cluster in.
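To make this concrete, here are a couple of example PromQL queries over metrics that kube-state-metrics exposes for higher-level objects. The namespace value is an illustrative assumption; the metric names are from kube-state-metrics:

```promql
# Deployments that currently have unavailable replicas
kube_deployment_status_replicas_unavailable > 0

# Pods stuck in Pending or Failed in a hypothetical "team-a" namespace
sum by (namespace, pod) (
  kube_pod_status_phase{phase=~"Pending|Failed", namespace="team-a"}
) > 0
```

Queries like these turn the raw object state into per-team signals suitable for dashboards or alerts.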
In a world where stateless applications are optimized to run blazing fast, message exchange cannot be allowed to affect their performance. With the ability to publish more than 7 million messages per second, NATS is the sprinter of message queues.
While benchmarks are good indicators for choosing a tool, there is no way to confirm its value without monitoring its performance in production. In our team, we use the EFK stack to monitor a bunch of microservices running on top of Kubernetes, since EFK tends to be the de facto way to monitor containerized microservices.
Everything started with a task: ship NATS monitoring data to Elasticsearch. What we achieved: extending Beats, the Elastic data shipper, with a dedicated NATS module.
Join us in this session to learn more about the journey, how to add value to a CNCF project and give back to the community.
PS. There will be a demo!
In late 2017 and throughout 2018 we witnessed the advent of a new class of CPU-level information disclosure vulnerabilities, commonly known as “Spectre”, “Meltdown”, and (later in 2018) “Level 1 Terminal Fault” (l1tf in short, also known as “Foreshadow”).
This talk will give a brief introduction of related CPU design concepts and their concrete exploitation by the above-mentioned vulnerabilities, and discuss available mitigations.
After we’ve established (or refreshed) our knowledge of the problem field, the main part of the talk will focus on keeping your Kubernetes clusters secured from those vulnerabilities: we will take a full-stack approach and look at common OS and container abstraction layers in cloud-native scenarios individually - bare metal kernel space, user space, (optional) virtualization, and container runtime - to discuss weaknesses and mitigations at each of the layers.
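One simple starting point at the bare-metal kernel layer, which the full-stack walkthrough above builds on, is the sysfs interface that modern Linux kernels (roughly 4.15 and later) expose for exactly these CPU bugs. This is a small sketch, assuming a Linux host; on other systems it just prints a notice:

```shell
# Print the kernel's reported mitigation status for each known CPU
# vulnerability (spectre_v1, spectre_v2, meltdown, l1tf, ...).
status="$(grep -H . /sys/devices/system/cpu/vulnerabilities/* 2>/dev/null \
  || echo 'vulnerabilities interface not available on this system')"
printf '%s\n' "$status"
```

On an affected but patched host you would see lines such as `.../meltdown:Mitigation: PTI`; the same check is worth running on every node of a cluster, not just one.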
Test-Driven Development and Code Coverage as concepts and practices are approaching 20 years, and we've moved on. Sufficiently advanced monitoring is indistinguishable from testing, and in this talk I'll prove it.
What even are Kubernetes Operators? Kernel modules for extending Kubernetes.
In this presentation we will explain why and how container networking is moving from the kernel into user space through projects such as the Contiv-VPP CNI plug-in (which leverages the Linux Foundation's fd.io project).
As hosted Kubernetes solutions mature, it becomes ever more compelling to operate clusters across multiple cloud providers. A general point of friction can often be the differences in how you are able to authenticate to those clusters. Cloud providers tend to integrate their own proprietary solutions, and hosted control planes lack the flexibility to use custom authentication providers and audit sinks.
Istio is one of the most important things to happen to continuous delivery/deployment since Kubernetes. In this talk, you'll learn how to leverage Helm and Istio to create reliable automated deployments. To help us visualize the rollout we've built an interactive, open-source app and will ask the audience to help decide if a rollout continues or not.
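The Istio primitive underneath this kind of progressive rollout is weighted routing. Below is an illustrative sketch of a VirtualService splitting traffic between a stable and a canary subset; the host and subset names are assumptions, not from the talk:

```yaml
# Hypothetical canary split: 90% of traffic to "stable", 10% to "canary".
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp
  http:
    - route:
        - destination:
            host: myapp
            subset: stable
          weight: 90
        - destination:
            host: myapp
            subset: canary
          weight: 10
```

An automated rollout then amounts to shifting these weights step by step while watching the canary's metrics, which is exactly the decision the audience gets to make.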
Cerebral is an open source, provider agnostic, preemptive Kubernetes cluster autoscaler with pluggable metrics backends and scaling engines. In this talk, we'll do a deep dive into Cerebral and contrast its methodology with that of the Kubernetes Cluster Autoscaler, which scales only after seeing that pods cannot be scheduled.
The Cloud Native community pushes updates very frequently, sometimes for security reasons, other times to deliver new features. This means we need to continuously upgrade Kubernetes and the related Cloud Native stack.
Kubernetes has created a nice way to roll out new application versions, but what about the upgrade of the orchestrator itself, the core addons running on the cluster, or even the infrastructure holding it?
By running clusters for multiple customers all over the world, Giant Swarm has created a system to roll out any piece of the infrastructure stack without impact on customers' workloads. Our approach relies on a set of operators to gracefully control the entire process, making it reliable and reproducible.
The audience will see the different decisions that were taken and the problems that were faced over time.
The State of the Art of OpenMetrics and some fundamentals about it.
Not sure which service mesh is right for you? In the emerging landscape of service meshes, which should you choose? In this lightning talk, we will demo Meshery, an open source, multi-mesh playground that deploys different types of service meshes on demand.
The Kubernetes project releases a new version every 3 months, as well as several bug-fix releases in between. You need and want to upgrade your clusters. How do you do that with zero downtime and no impact on your production workloads? In this lightning talk I will show how my team came up with a procedure to upgrade a cluster and monitor the upgrade itself, in particular to avoid impact due to nodes becoming "Not Ready".
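One way such monitoring can be wired up is a Prometheus alerting rule over kube-state-metrics node conditions. This is a sketch under that assumption; the group name, duration, and labels are illustrative, not the team's actual configuration:

```yaml
# Hypothetical alert: fire when a node reports Ready != true for 5 minutes,
# e.g. while it is being drained and rebooted during an upgrade.
groups:
  - name: upgrade
    rules:
      - alert: NodeNotReady
        expr: kube_node_status_condition{condition="Ready",status="true"} == 0
        for: 5m
        labels:
          severity: page
        annotations:
          summary: "Node {{ $labels.node }} has been Not Ready for 5 minutes"
```

Watching this alert during a rolling upgrade makes it obvious when a node fails to come back before the next one is taken down.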
Learn how to use Porter to create and deploy Cloud Native Application Bundles without knowing the CNAB spec.