Cloud Native Rejekts NA (Los Angeles + Hybrid) 2021
This talk covers the importance of tracking dependencies in a large project like Kubernetes, and introduces "depstat", a tool created to track dependency updates to the Kubernetes codebase. The Kubernetes repository receives many pull requests each day, many of which bring dependency changes with them. Most of the time, maintainers have to spot these changes manually, determine their effects on the overall dependency tree, and then ping the pull request authors to take action. depstat was created to avoid this and to help better track dependency updates. It is an upstream project that analyzes dependencies for any Go-modules-enabled project. It currently runs as part of a prow job in the Kubernetes code repository and provides four crucial dependency-related metrics. depstat can also analyze dependencies visually by creating a graph.
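As a rough illustration of the kind of analysis involved (a minimal sketch of our own, not depstat's actual implementation), the module graph of any Go-modules project can be read from `go mod graph` and used to count direct versus transitive dependencies:

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// `go mod graph` prints one edge per line: "module requirement".
	out, err := exec.Command("go", "mod", "graph").Output()
	if err != nil {
		log.Fatal(err)
	}

	direct := map[string]bool{} // modules required by the main module
	all := map[string]bool{}    // every module that appears as a requirement

	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) != 2 {
			continue
		}
		from, to := fields[0], fields[1]
		// The main module is the only node printed without an @version.
		if !strings.Contains(from, "@") {
			direct[to] = true
		}
		all[to] = true
	}

	fmt.Printf("direct dependencies: %d\n", len(direct))
	fmt.Printf("total dependencies:  %d\n", len(all))
	fmt.Printf("transitive only:     %d\n", len(all)-len(direct))
}
```

Run inside a module's directory, this separates the dependencies a maintainer declared from those pulled in transitively, which is where unexpected changes from a pull request tend to hide.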
We built Inspektor Gadget to give Kubernetes operators insights into their clusters, simplifying the use of bcc tools and harnessing the power of eBPF to build more advanced gadgets. We designed Headlamp to help operators visualize and control their clusters, and we made it extensible so that users can tailor it to their needs. We are now combining the two to get much more out of both, enabling use cases that are powered by Inspektor Gadget and delivered in an easy-to-use way.
Nico will show how to hijack a Kubernetes cluster using common attack vectors. You'll also learn why it's important to implement zero trust to prevent data leaks and to keep malicious workloads from being executed on a hijacked cluster.
Furthermore, he will share useful insights, configurations, and toolsets that help you protect your cluster from being taken over.
This talk is not intended to be an in-depth security talk; rather, it aims to provide you with best practices and to make you aware of certain attack vectors and how to prevent them.
This session presents a full cloud-native observability stack that can be used to collect monitoring data from an uninstrumented, microservices-oriented application.
The application will use HTTP and gRPC for communication, but will not include any code to aid with its monitoring. To extract as much data about the application as possible, a set of open source tools such as the NGINX Ingress Controller, Prometheus, Jaeger, Loki, Linkerd 2, and Grafana will be used. Attendees will see how much telemetry these tools can provide: metrics, logs, and even traces will be presented, all without changing a line of code in the application. Obviously, the end result won't be as good as an app with instrumented code; there's no free lunch. But let's see if one can get a sandwich on the house.
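To make the premise concrete, a service in this setup can be as plain as the sketch below (our own hypothetical example, not code from the talk): no metrics, tracing, or logging libraries are imported, since all telemetry comes from the surrounding stack.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Deliberately no Prometheus client, no tracing SDK, no structured
	// logging: the ingress controller and the service mesh proxy observe
	// this service's traffic from the outside.
	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```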
Ray is an increasingly popular distributed execution framework for scaling applications and leveraging state-of-the-art machine learning libraries. With GPU compute shapes available on public clouds, deploying Ray on the public cloud is an attractive option compared to deploying it on bespoke on-prem compute resources. This talk explores the suitability of Kubernetes on the public cloud as a deployment platform for Ray, shares experiments with Ray deployed on Nodeless Kubernetes and the lessons learned, and suggests best practices.
We all know that Kubernetes is complex and provides an ever-increasing set of capabilities for managing your workloads, but the basics, like how pods work, how you expose a service, and so on, are pretty simple... or are they? Even basic operations occasionally have sharp edges that can give you nasty papercuts when deploying your apps, because what you might intuitively assume to be the case isn't how Kubernetes actually works. Joe Thompson takes you through some of these "you'd think that, but..." cases, details how Kubernetes "thinks" about them, and offers advice on how you can mitigate or change Kubernetes' behavior so that those papercuts don't become grievous wounds in your workloads or the clusters running them.
More than four years ago, the Ambassador Labs team recognized the power of Envoy Proxy and started building the Ambassador API gateway. Back then, they thought of Ambassador as the human way to harness the power of Envoy Proxy, which was mostly built for machine configuration. Over the last four years, that project has evolved into what is now known as Emissary Ingress, a CNCF incubating project, an accomplishment owed largely to the super community that has rallied around the project over the years.
In this talk Cindy will discuss why community is so important, and explain why they wanted to double down on their community efforts by donating the project to the CNCF. She’ll share some of the lessons learned in growing the community from zero to thousands of active users and talk about the tactics they use to build an inclusive community that shares knowledge and fosters collaboration across all levels of experience.
Helm is designed to take the guesswork (and a lot of the YAML) out of deploying a Kubernetes application. If you’ve ever needed to rapidly deploy a durable application, you’ve probably taken a long look at a Helm chart. And if you’re already using Helm, you’ve probably thought about how Infrastructure as Code (IaC) can simplify your workflow.
But how does Helm’s configuration interact with IaC state? What configuration is really authoritative—and with what scope? If you update an application that you Terraformed from a Helm chart, is your persistent data safe? And what about existing Helm-deployed applications—is it safe to import them into Terraform?
Clearly, Helm is incredibly powerful, but it’s also an abstraction, and no one has written the manual on how to integrate it with IaC. This talk will explain how Helm and Terraform work together under the hood, and explore patterns to help Terraform practitioners safely engage with Helm charts and adopt Helm resources.
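Part of the answer is knowing where Helm itself keeps state: Helm v3 stores each release, including the user-supplied values, in-cluster (in Secrets by default), separate from Terraform's state file, so the two sources of truth can drift. As a hedged sketch (the release name `my-release` is hypothetical), Helm's Go SDK can read those stored values back, which is useful when checking for such drift:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/cli"
)

func main() {
	// Resolve kubeconfig and namespace the same way the helm CLI does.
	settings := cli.New()
	cfg := new(action.Configuration)
	if err := cfg.Init(settings.RESTClientGetter(), settings.Namespace(),
		os.Getenv("HELM_DRIVER"), log.Printf); err != nil {
		log.Fatal(err)
	}

	// Read the user-supplied values stored with the release in-cluster;
	// this is the configuration Helm itself considers authoritative.
	vals, err := action.NewGetValues(cfg).Run("my-release") // hypothetical release name
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(vals)
}
```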
Pod Security Policies are cluster-wide resources that control security-sensitive aspects of the pod specification, defining a set of conditions that a pod must run with in order to be accepted into the system.
Due to its limitations, the Kubernetes Auth Special Interest Group (aka sig-auth) recently announced the deprecation of Pod Security Policy (PSP) in upcoming Kubernetes versions. This decision could leave many Kubernetes users at risk of exposure to various exploits: adversaries may take advantage of the absence of such a policy to run privileged pods, create pods in host namespaces or networks, and much more. One of the best alternatives for Kubernetes users to mitigate the PSP deprecation through the built-in admission controller is to utilise Open Policy Agent (OPA) rules.
OPA is a CNCF incubating project that supports enforcing policies in a variety of areas, including microservices, Kubernetes, CI/CD pipelines, service meshes, and API gateways. Relying on a declarative policy language called Rego, security operators can concisely express policy rules and implement them in their environments. When moving from PSP to OPA, users need an easy way to translate their policies or to automatically generate new OPA rules based on the rules already deployed in the environment. With this need in mind, we extended the existing kube-psp-advisor project, already capable of creating K8s Pod Security Policies (PSPs) from either a live environment or a single .yaml file containing a pod specification, with a new feature that automatically generates OPA rules by scanning the existing security contexts of Kubernetes resources in live environments or directly from a .yaml file.
In this talk we go through the main limitations of PSP to explain how OPA is a better overall solution, capable of providing security controls for all the components in your Kubernetes environment. We show how the kube-psp-advisor tool works and how it can generate an OPA rule from a live environment. We then enforce the rule using the Kubernetes admission controller and see how OPA behaves with the new rule in place when we try to deploy a non-compliant pod into the existing environment.
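As a small, hedged illustration (this specific policy and harness are our own sketch, not kube-psp-advisor output), a PSP-style "no privileged containers" rule written in Rego can be evaluated against an AdmissionReview-shaped input using OPA's Go API:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/open-policy-agent/opa/rego"
)

// A Rego rule that denies any pod containing a privileged container.
const module = `
package kubernetes.admission

deny[msg] {
	container := input.request.object.spec.containers[_]
	container.securityContext.privileged == true
	msg := sprintf("privileged container %q is not allowed", [container.name])
}
`

func main() {
	ctx := context.Background()
	query, err := rego.New(
		rego.Query("data.kubernetes.admission.deny"),
		rego.Module("policy.rego", module),
	).PrepareForEval(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// A stripped-down AdmissionReview with one privileged container.
	input := map[string]interface{}{
		"request": map[string]interface{}{
			"object": map[string]interface{}{
				"spec": map[string]interface{}{
					"containers": []interface{}{
						map[string]interface{}{
							"name":            "app",
							"securityContext": map[string]interface{}{"privileged": true},
						},
					},
				},
			},
		},
	}

	rs, err := query.Eval(ctx, rego.EvalInput(input))
	if err != nil {
		log.Fatal(err)
	}
	// Prints the set of denial messages produced by the rule.
	fmt.Println(rs[0].Expressions[0].Value)
}
```

In a real cluster the same policy would run behind a validating admission webhook rather than in-process; the sketch just shows the rule rejecting a non-compliant pod.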
When something to do with your CRDs breaks, how many kubectl commands do you need to understand what’s happened?
This talk argues that “how many kubectls?” is a key metric that codifies the usability of any set of CRDs, and suggests some guidelines for CRD authors to manage this metric and improve the UX of their CRDs.
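One practical way to drive that number down (a sketch of our own using a hypothetical `Widget` type with kubebuilder markers, not an example from the talk) is to have the controller summarize what happened in the status subresource and surface it via printer columns, so a single `kubectl get widgets` answers the question that would otherwise take several commands:

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// WidgetSpec holds the desired state of a hypothetical Widget.
type WidgetSpec struct {
	Replicas int32 `json:"replicas"`
}

// WidgetStatus is where the controller records what actually happened,
// including failure details a user would otherwise have to dig for.
type WidgetStatus struct {
	Phase      string             `json:"phase,omitempty"`
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}

// Printer columns surface status directly in `kubectl get` output, so
// diagnosing a broken Widget starts with one command instead of several.
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="Phase",type=string,JSONPath=`.status.phase`
// +kubebuilder:printcolumn:name="Ready",type=string,JSONPath=`.status.conditions[?(@.type=="Ready")].status`
type Widget struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   WidgetSpec   `json:"spec,omitempty"`
	Status WidgetStatus `json:"status,omitempty"`
}
```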
Just as Java begat a litany of J-titled projects and Kubernetes gave us everything from KOPS to K3S, the emerging Wasm landscape is littered with W names. Wasm Fever has officially begun! And we at DeisLabs are releasing many Wasm-related projects.
While it might seem like we're peppering the landscape with arbitrary tools and projects, we have a bigger vision in mind: one focused on building a community and an ecosystem that enables a new generation of portable and secure applications.
In this talk, we articulate our ambitions for the WebAssembly community as a whole, and show how our open source projects fit in.
Servers are everywhere: your VMs, containers, functions, and even "serverless" code all live, breathe, and run on servers. Unfortunately, even today, managing the lifecycle of a bare-metal server still requires antiquated tooling and scripts, and remains difficult to manage and operate. The Tinkerbell project was born at the bare-metal-as-a-service company Packet (now Equinix Metal) with the simple goal of modernising the task of maintaining the bare-metal lifecycle. This talk will go under the hood of Tinkerbell to detail the technologies that power it, some of the functionality that enables bare-metal servers to now deploy in seconds, and finally how we're extending Tinkerbell into the cloud native space through Cluster API.