2019-11-17, 10:25–10:55, The Theater
Benchmarking system performance in a repeatable, reproducible way is a difficult task, both technically and philosophically - doubly so for a system as complex as a service mesh. Yet the cost of adding new technology to a stack can be decisive in an adoption decision, and repeatable, reproducible benchmarks help the service mesh communities meet, and improve on, their respective quality bars. This talk benefits the ecosystem not only by characterizing the cost of various service mesh implementations in numeric terms, but also by defining what "cost" means in this context, and by introducing an open source framework for running these tests that anyone in the world can use to reproduce the results.
Service meshes - a category of SDNs that provide application-centric connectivity, routing, introspection, and debugging - have risen to prominence in recent years, spawning multiple successful projects with different motivations and goals. But what is the cost of adding such a solution to existing networking infrastructure?
This talk answers this question in a quantitative way. We discuss how to characterize the cost of running a service mesh in terms of both tail latency and resource consumption. We describe how to perform a realistic benchmark, including how to eliminate sources of bias and variance in the measurement process. We present our open source framework for reproducibly benchmarking service meshes. Finally, we report the results of benchmarking Linkerd and Istio under a variety of real-world conditions, and compare the two technologies in terms of their operational cost.
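To illustrate the kind of measurement the abstract describes, here is a minimal sketch of tail-latency reporting: mesh overhead tends to show up at the tail rather than in the average, so a benchmark records per-request latencies and summarizes them as high percentiles. This is not the talk's actual framework; the nearest-rank percentile helper and the summary shape are illustrative assumptions.

```python
# Minimal sketch of tail-latency summarization for a benchmark run.
# A real mesh benchmark would collect these samples by driving load
# (e.g. HTTP requests) through the mesh under controlled conditions.

import statistics

def percentile(samples, p):
    """Nearest-rank percentile (p in 0..100) of a sample list."""
    ordered = sorted(samples)
    # Nearest-rank index, clamped to the valid range.
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

def summarize(latencies_ms):
    """Report median, tail, and mean latency for one benchmark run.

    Reporting p99/p99.9 alongside the mean makes mesh-induced tail
    inflation visible even when the average barely moves.
    """
    return {
        "p50": percentile(latencies_ms, 50),
        "p99": percentile(latencies_ms, 99),
        "p99.9": percentile(latencies_ms, 99.9),
        "mean": statistics.mean(latencies_ms),
    }
```

In practice a framework would also control for variance (warm-up iterations, fixed request rates, isolated nodes) before trusting these numbers, which is part of what "eliminating sources of bias" means above.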
Please find the talk slides here: https://drive.google.com/file/d/1W-wfCfEwzPHTnPD0nfHT9eW5v6xwCAt3/view