Building auto DevOps for production grade databases on Kubernetes
2019-11-16, 16:20–16:50, Room1
On production databases, data patterns keep changing as time progresses. CI pipelines need access to the latest data, or data close to it, for effective testing. As enterprises and FinTechs adopt Kubernetes and microservices-based architectures, their DevOps teams want to solve two challenges that were well known in the pre-Kubernetes era. The first is to automate the data lifecycle between production and testing. The second is to give developers instantaneous access to the failed environment when a CI pipeline fails. Solving these two challenges on Kubernetes will make enterprise DevOps teams more productive.
In this practical tutorial on auto DevOps, we demonstrate the construction of a GitLab-based CI pipeline and show best practices for data lifecycle management through the use of cloud-native technologies.
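As a rough illustration of the kind of pipeline described here, a minimal `.gitlab-ci.yml` might look like the sketch below. The stage names, images, manifest file names, and the data-refresh step are illustrative assumptions, not the presenters' actual configuration; only the GitLab CI keywords (`stages`, `script`, `environment`, `on_stop`, `when: manual`) are standard.

```yaml
# Hypothetical .gitlab-ci.yml sketch -- job names, images, and manifests
# are placeholders for illustration only.
stages:
  - refresh-data
  - test

refresh-data:
  stage: refresh-data
  image: bitnami/kubectl:latest           # assumed image
  script:
    # Bring a copy of recent production data into the test namespace
    # (the clone manifest is a placeholder).
    - kubectl apply -f clone-from-prod-snapshot.yaml

integration-test:
  stage: test
  image: postgres:12                      # assumed client image
  script:
    - ./run-integration-tests.sh          # placeholder test runner
  # Track the test namespace as a GitLab environment so that, on failure,
  # it stays up for developers to inspect -- the "instantaneous access to
  # the failed environment" goal from the abstract.
  environment:
    name: ci/$CI_COMMIT_REF_SLUG
    on_stop: teardown-env

teardown-env:
  stage: test
  when: manual                            # torn down only on demand
  environment:
    name: ci/$CI_COMMIT_REF_SLUG
    action: stop
  script:
    - kubectl delete namespace "ci-$CI_COMMIT_REF_SLUG"
```

The manual `teardown-env` job is the design choice that keeps failed environments alive: the environment is only stopped when someone triggers the stop job explicitly.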
This session targets cloud-native developers and DevOps admins who use Kubernetes with stateful databases. The audience will see the live construction of a GitLab-based CI pipeline for a large-scale stateful application that is in live production. They can expect to gain insights into best practices around constructing CI pipelines, data lifecycle management, and copy-data-management techniques for live production databases. DevOps admins will gain insights into best practices for injecting chaos and assessing the resiliency of Kubernetes deployments in production. We will demonstrate the end-to-end procedure of building a chaos experiment, injecting it both in pipelines (CI) and in production (CD), and show the resiliency-assessment techniques. The DevOps audience will gain first-hand experience with best practices of auto DevOps and cloud-native chaos engineering.
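"Cloud-native chaos engineering" is commonly practiced with CRD-driven frameworks such as LitmusChaos, where a chaos experiment is declared as a Kubernetes resource. The abstract names no specific framework, so the manifest below is a hedged sketch in the LitmusChaos `ChaosEngine` style; the target labels, namespace, and duration are assumptions for illustration.

```yaml
# Hypothetical chaos-experiment manifest in the LitmusChaos style
# (the framework choice is an assumption, not stated in the abstract).
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: db-pod-failure
  namespace: production
spec:
  appinfo:
    appns: production
    applabel: app=postgres        # placeholder label for the target database
    appkind: statefulset
  engineState: active
  experiments:
    - name: pod-delete            # kill a database pod, then assess recovery
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: "60"         # seconds of chaos, placeholder value
```

The same manifest can be applied from a CI job against a test cluster or scheduled against production (CD), which mirrors the pipeline-and-production injection described above.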