Elyra AIDevSecOps Tutorial

This tutorial discusses the interface between Data Science and DevOps. It highlights that data scientists are not so different from developers: they also need to know Git and follow best practices to maintain their dependencies and code, add tests, and make releases. All of these tasks can be supported by pipelines and bots, so that data scientists can focus on the main problem to solve. In other words, in this tutorial you will learn how the ML lifecycle, its practices, and its tools can be enhanced by DevSecOps techniques.

What will you learn in this tutorial?

At the end of this tutorial you will be able to spawn images from JupyterHub, manage dependencies for Jupyter notebooks with Project Thoth's JupyterLab extension for dependency management, understand the overlays concept, and set up AICoE CI and the Kebechet bot to automate the creation of overlay images and the maintenance of software stacks. You will then learn how to create and run an Elyra AI pipeline with Kubeflow Pipelines using the images you created. Finally, you will learn how to leverage ArgoCD to deploy the AI model automatically.
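To make the overlays concept concrete before diving into the steps: each overlay is a directory that carries its own dependency set, so AICoE CI can build a separate container image per overlay. The layout below is only an illustrative sketch with hypothetical names; the actual files are introduced in the tutorial steps.

```
overlays/
  training/     dependency files (e.g. Pipfile and Pipfile.lock) for the training image
  inference/    dependency files for the inference image
```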

The demo application selected for this tutorial is MNIST classification. The MNIST dataset is described here. The tutorial comes in two variations (a minimal TensorFlow sketch follows the list):

  • one using TensorFlow
  • one using PyTorch and Neural Magic tools.
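To give a feel for the TensorFlow variation, here is a minimal, self-contained sketch of the kind of MNIST classifier it trains. This is illustrative only and is not the tutorial's actual code:

```python
# Minimal MNIST classifier sketch with tf.keras (illustrative, not the tutorial code).
import tensorflow as tf

# Load the MNIST dataset bundled with Keras and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected network is enough for a first pass.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=2, validation_data=(x_test, y_test))

# Save the trained model so a later pipeline step could pick it up.
model.save("mnist-model.h5")
```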

Where will you run this tutorial?

Operate First is an open infrastructure environment started at Red Hat's Office of the CTO. It was selected to run this tutorial because it is an open source initiative that meets the requirements of this tutorial. Anyone with a Google account can log in and start developing. To learn more about Operate First, visit the website or the GitHub community.

Operate First hosts Open Data Hub, which provides all the tools needed for Data Science projects (e.g. JupyterHub, Elyra, Kubeflow Pipelines, Seldon, Prometheus, Grafana, Superset), running on Red Hat OpenShift.

Why does the tutorial repository have this structure?

The project template used can be found here: project template. It shows the correlation between a data scientist's needs (e.g. data, notebooks, models) and those of an AI DevOps engineer (e.g. manifests). Having a defined structure ensures that all the pieces required for the ML and DevOps lifecycles are present and easily discoverable.
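For orientation, a simplified (and partly hypothetical) view of such a structure looks roughly like this; the linked template is the authoritative reference:

```
data/        datasets consumed by the notebooks
notebooks/   exploration and Elyra pipeline steps
models/      trained model artifacts
manifests/   Kubernetes/OpenShift manifests for deployment
```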

Tutorial Steps

  1. Prerequisites

ML Lifecycle/Source Lifecycle

  1. Set up your initial environment

  2. Explore notebooks and manage dependencies

  3. Push changes to GitHub

  4. Set up bots and pipelines to create releases, build images and enable dependency management

  5. Create an AI Pipeline

  6. Run and debug AI Pipeline

DevOps Lifecycle

  1. Deploy Inference Application

  2. Test the deployed inference application (a request sketch follows this list)

  3. Monitor your deployed inference application
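As a preview of testing the deployed inference application, the sketch below sends a single request to a prediction endpoint. The URL and payload shape are placeholders and assumptions; the tutorial's actual route and schema may differ:

```python
# Hypothetical smoke test for a deployed MNIST inference endpoint.
# The URL is a placeholder; adjust the payload to match your deployment's schema.
import json

import numpy as np
import requests

URL = "http://<your-inference-route>/predict"  # placeholder route

# Stand-in for a flattened, normalized 28x28 MNIST image.
image = np.random.rand(1, 784).tolist()

response = requests.post(URL, json={"inputs": image}, timeout=30)
response.raise_for_status()
print(json.dumps(response.json(), indent=2))
```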

Workshops

Here you can find a list of conferences where this tutorial has been used:

References