Set up Kubernetes for all your machine learning workflows

The goal of a data science team is to build and deploy high-impact models. Data scientists prefer to focus on building algorithms, while data engineers focus on performance and productionizing machine learning. Kubernetes is an orchestration platform that can be deployed anywhere and can serve virtually any machine learning or deep learning environment, which makes it a great tool for keeping data scientists productive and helping data engineers deliver production-ready results. In this free workshop you’ll learn how to build your own Kubernetes cluster to use in your next machine learning pipeline.

Join CTO Leah Kolben on July 31st at 12pm EST for a live workshop. Leah will walk you through each step of setting up your Kubernetes cluster so you can run Spark, TensorFlow, or any other ML framework right away. She’ll cover the entire machine learning pipeline, from model training to model deployment. As a bonus, you’ll also get pre-configured YAML files to launch your own end-to-end machine learning pipeline on a Kubernetes cluster.


In this workshop, you’ll learn how to:

  • Create a Kubernetes cluster on AWS
  • Connect your local development machine to the cluster
  • Run machine learning workloads (Spark, TensorFlow, and more) on your cluster
  • Manage different environments on your Kubernetes cluster (deep learning and big data analytics on the same cluster)
  • Scale a Kubernetes cluster
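
To give a feel for what running an ML workload on a cluster looks like, here is a minimal, hypothetical example of the kind of YAML manifest involved — a Kubernetes Job that runs a TensorFlow training container. The image tag, script path, and resource limits are placeholders, not the workshop’s actual pre-configured files:

```yaml
# Hypothetical sketch: a Kubernetes Job running a TensorFlow training script.
# The container image, command, and resource limits below are assumptions
# for illustration only.
apiVersion: batch/v1
kind: Job
metadata:
  name: tf-train-example
spec:
  template:
    spec:
      containers:
      - name: trainer
        image: tensorflow/tensorflow:latest      # official TensorFlow image
        command: ["python", "/workspace/train.py"]  # hypothetical training script
        resources:
          limits:
            cpu: "2"
            memory: 4Gi
      restartPolicy: Never
  backoffLimit: 2
```

Once your cluster is up and kubectl is connected to it, a manifest like this is typically launched with `kubectl apply -f <file>.yaml`.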