05-14, 12:15–12:45 (Europe/Madrid), Main Room
Deployment of Machine Learning (ML) to production is notoriously difficult, made so by variations in models, engines, platforms, and networks. How can we deploy distributed ML in production across dissimilar devices from edge to cloud, make optimal use of available resources, and support practical considerations like blue/green testing, privacy preservation, and live updates?
In this talk, learn how to meet these challenges with wasmCloud, the distributed WebAssembly platform for portable business logic. Discover how you can use the open source machine learning capability provider with the open WASI-NN API to deploy a common code base, for use with inference engines like TensorFlow or ONNX, on embedded devices, LAN workstations, and the cloud. We will show how inference models can be dynamically and securely updated in the field, and discuss design decisions that directly affect privacy, latency, throughput, and model accuracy.
Benefits to the Ecosystem
wasmCloud, a CNCF sandbox project, aims to make it easier for developers to build enterprise-scale applications by separating business logic from service and platform concerns using well-defined capability contracts. The Machine Learning Inference Engine was created by collaborators from BMW, Intel, Cosmonic, and other industry leaders to combine the portability and security benefits of WebAssembly with an API that is independent of the underlying model, execution environment, and locality. This combination allows developers to concentrate on business logic and model selection, while deferring decisions that depend on the runtime environment, such as engine selection based on performance and availability, until runtime. We believe this loose coupling will benefit all participants by accelerating independent development of inference engines and the applications that use them.
Christoph Brewing, from BMW, will explore strategies for deploying machine learning (ML) to production.