Commit
Update v2pipelines and UI (#28)
* update-v2pipelines-and-ui

* updates after rc1 test

* minor update

* peer review edits
MelissaFlinn committed Apr 29, 2024
1 parent 9cfb146 commit 400f209
Showing 36 changed files with 40 additions and 29 deletions.
4 changes: 2 additions & 2 deletions workshop/docs/modules/ROOT/nav.adoc
@@ -15,8 +15,8 @@
* 3. Model Serving
** xref:preparing-a-model-for-deployment.adoc[1. Saving the Model]
** xref:deploying-a-model.adoc[2. Deploy the Model]
*** xref:deploying-a-model-multi-model-server.adoc[1. Deploying on a multi-model server]
*** xref:deploying-a-model-single-model-server.adoc[2. Deploying on a single-model server]
*** xref:deploying-a-model-single-model-server.adoc[1. Deploying on a single-model server]
*** xref:deploying-a-model-multi-model-server.adoc[2. Deploying on a multi-model server]
** xref:testing-the-model-api.adoc[3. Using the API]
* 4. Data Science Pipelines
3 changes: 2 additions & 1 deletion workshop/docs/modules/ROOT/pages/_attributes.adoc
@@ -3,4 +3,5 @@
:deliverable: workshop
//:deliverable: tutorial
:productname-long: Red Hat OpenShift AI
:productname-short: OpenShift AI
:productname-short: OpenShift AI
:version: 2.9
@@ -23,7 +23,7 @@ You've created a blank pipeline!

. Set the default runtime image for when you run your notebook or Python code.

.. In the pipeline editor, click *Open Panel*
.. In the pipeline editor, click *Open Panel*.
+
image::pipelines/wb-pipeline-panel-button-loc.png[Open Panel]

@@ -88,6 +88,8 @@ In node 1, the notebook creates the `models/fraud/1/model.onnx` file. In node 2,
image::pipelines/wb-pipeline-node-1-file-output-form.png[Set file dependency value, 400]

. Repeat steps 1-3 for node 2.

. Save the pipeline.

== Configure the data connection to the S3 storage bucket

@@ -122,7 +124,7 @@ You must set the secret name and key for each of these fields.

.. Select node 2, and then select the *Node Properties* tab.
+
Under *Additional Properties*, note that some environment variables have been pre-filled. The pipeline editor inferred that you'd need them from the notebook code.
Under *Additional Properties*, note that some environment variables have been pre-filled. The pipeline editor inferred that you need them from the notebook code.
+
Since you don't want to save the value in your pipelines, remove all of these environment variables.

@@ -143,7 +145,7 @@ image::pipelines/wb-pipeline-add-kube-secret.png[Add Kube Secret]
+
image::pipelines/wb-pipeline-kube-secret-form.png[Secret Form, 400]

.. Repeat Steps 3a and 3b for each set of these Kubernetes secrets:
.. Repeat Steps 2a and 2b for each set of these Kubernetes secrets:

* *Environment Variable*: `AWS_SECRET_ACCESS_KEY`
** *Secret Name*: `aws-connection-my-storage`
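
For context, the notebook code consumes these credentials as ordinary environment variables, which is why the pipeline mounts them from Kubernetes secrets instead of saving the values in the pipeline itself. The following is a minimal, illustrative sketch of that pattern; the placeholder values stand in for the real ones that the `aws-connection-my-storage` secret injects at run time:

```python
import os

# At run time, the Kubernetes secret supplies these variables.
# The defaults below are placeholders for illustration only.
os.environ.setdefault("AWS_ACCESS_KEY_ID", "placeholder-key-id")
os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "placeholder-secret")

# The notebook reads the credentials without hard-coding them:
access_key = os.environ["AWS_ACCESS_KEY_ID"]
secret_key = os.environ["AWS_SECRET_ACCESS_KEY"]
```

Because the values are read from the environment, the same notebook code works unchanged in the workbench and in a pipeline run.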
@@ -171,7 +173,8 @@ Upload the pipeline on your cluster and run it. You can do so directly from the

. Click the play button in the toolbar of the pipeline editor.
+
image::pipelines/wb-pipeline-run-button.png[Pipeline Run Button]
image::pipelines/wb-pipeline-run-button.png[Pipeline Run Button, 300]


. Enter a name for your pipeline.
. Verify the *Runtime Configuration:* is set to `Data Science Pipeline`.
2 changes: 1 addition & 1 deletion workshop/docs/modules/ROOT/pages/creating-a-workbench.adoc
@@ -49,7 +49,7 @@ image::workbenches/ds-project-workbench-list.png[Workbench list]

NOTE: If you made a mistake, you can edit the workbench to make changes.

image::workbenches/ds-project-workbench-list-edit.png[Workbench list edit]
image::workbenches/ds-project-workbench-list-edit.png[Workbench list edit, 300]


.Next step
@@ -38,15 +38,21 @@ You must wait until the pipeline configuration is complete before you continue a

.Verification

Check the *Pipelines* tab for the project. Pipelines are enabled when the *Configure pipeline server* button no longer appears.
. Navigate to the *Pipelines* tab for the project.
. Next to *Import pipeline*, click the action menu (⋮) and then select *View pipeline server configuration*.
+
image::projects/ds-project-pipeline-server-view.png[View pipeline server configuration menu, 300]
+
An information box opens and displays the object storage connection information for the pipeline server.

image::projects/ds-project-create-pipeline-server-complete.png[Create pipeline server complete]

[NOTE]
====
If you have waited more than 5 minutes and the pipeline server configuration does not complete, you can try to delete the pipeline server and create it again.
If you have waited more than 5 minutes, and the pipeline server configuration does not complete, you can try to delete the pipeline server and create it again.
image::projects//ds-project-delete-pipeline-server.png[Delete pipeline server, 300]
image::projects//ds-project-delete-pipeline-server.png[Delete pipeline server]
You can also ask your {productname-short} administrator to verify that self-signed certificates are added to your cluster as described in https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_self-managed/{version}/html/installing_and_uninstalling_openshift_ai_self-managed/working-with-certificates_certs[Working with certificates].
====

.Next step
@@ -21,7 +21,7 @@ This file-browser window shows the files and folders that are saved inside your

.. On the toolbar, click the *Git Clone* icon:
+
image::workbenches/jupyter-git-icon.png[Git Clone icon, 200]
image::workbenches/jupyter-git-icon.png[Git Clone icon, 300]

.. Enter the following {deliverable} Git *https* URL:
+
@@ -32,21 +32,21 @@ image::workbenches/jupyter-git-icon.png[Git Clone icon, 200]
https://github.com/rh-aiservices-bu/fraud-detection.git
----
+
image::workbenches/jupyter-git-modal.png[Git Modal]
image::workbenches/jupyter-git-modal.png[Git Modal, 300]

.. Check the *Include submodules* option.

.. Click *CLONE*.
.. Click *Clone*.

.Verification

Double-click the newly created folder, `fraud-detection`:

image::workbenches/jupyter-file-browser.png[Jupyter file browser]
image::workbenches/jupyter-file-browser.png[Jupyter file browser, 300]

In the file browser, you should see the notebooks that you cloned from Git.

image::workbenches/jupyter-file-browser-2.png[Jupyter file browser - fraud-detection]
image::workbenches/jupyter-file-browser-2.png[Jupyter file browser - fraud-detection, 300]


.Next step
7 changes: 5 additions & 2 deletions workshop/docs/modules/ROOT/pages/index.adoc
@@ -30,8 +30,11 @@ Based on this data, the model outputs the likelihood of the transaction being fraudulent.

== Before you begin

https://developers.redhat.com/products/red-hat-openshift-ai/download[Set up your {productname-long} environment]
If you don't already have an instance of {productname-long}, see the https://developers.redhat.com/products/red-hat-openshift-ai/download[{productname-long} page on the Red Hat Developer website] for information on setting up your environment. There, you can create an account and access the *free {productname-short} Sandbox* or you can learn how to install {productname-short} on *your own OpenShift cluster*.

If you don't already have an instance of {productname-long}, find out more on the https://developers.redhat.com/products/red-hat-openshift-ai/download[developer page]. There, you can create an account and access the *free {productname-short} Sandbox* or you can learn how to install {productname-short} on *your own OpenShift cluster*.
[IMPORTANT]
====
If your cluster uses self-signed certificates, before you begin the {deliverable}, your {productname-short} administrator must add self-signed certificates for {productname-short} as described in https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_self-managed/{version}/html/installing_and_uninstalling_openshift_ai_self-managed/working-with-certificates_certs[Working with certificates].
====

If you're ready, xref:navigating-to-the-dashboard.adoc[start the {deliverable}!]
@@ -1,7 +1,7 @@
[id='running-a-pipeline-generated-from-python-code']
= Running a data science pipeline generated from Python code

In the previous section, you created a simple pipeline by using the GUI pipeline editor. It's often desirable to create pipelines by using code that can be version-controlled and shared with others. The https://github.com/kubeflow/kfp-tekton[kfp-tekton] SDK provides a Python API for creating pipelines. The SDK is available as a Python package that you can install by using the `pip install kfp-tekton~=1.5.9` command. With this package, you can use Python code to create a pipeline and then compile it to Tekton YAML. Then you can import the YAML code into {productname-short}.
In the previous section, you created a simple pipeline by using the GUI pipeline editor. It's often desirable to create pipelines by using code that can be version-controlled and shared with others. The https://github.com/kubeflow/pipelines[kfp] SDK provides a Python API for creating pipelines. The SDK is available as a Python package that you can install by using the `pip install kfp` command. With this package, you can use Python code to create a pipeline and then compile it to YAML format. Then you can import the YAML code into {productname-short}.

This {deliverable} does not delve into the details of how to use the SDK. Instead, it provides the files for you to view and upload.

@@ -31,7 +31,7 @@ image::pipelines/dsp-pipline-import-upload.png[]
+
The pipeline shows in the list of pipelines.

. Expand the pipeline item and then click *View runs*.
. Expand the pipeline item, click the action menu (⋮), and then select *View runs*.
+
image::pipelines/dsp-pipline-view-runs.png[]

@@ -47,8 +47,6 @@ image::pipelines/pipeline-create-run-form.png[Create Pipeline Run form]

. Click *Create* to create the run.
+
A new run starts immediately and opens the run details page.
A new run starts immediately. The *Details* page shows a pipeline created in Python that is running in {productname-short}.
+
image::pipelines/pipeline-run-in-progress.png[]

There you have it: a pipeline created in Python that is running in {productname-short}.
image::pipelines/pipeline-run-in-progress.png[]
@@ -21,7 +21,7 @@ You must know the OpenShift resource name for your data science project so that

In the {productname-short} dashboard, select *Data Science Projects* and then click the *?* icon next to the project name. A text box appears with information about the project, including its resource name:

image::projects/ds-project-list-resource-hover.png[Project list resource name]
image::projects/ds-project-list-resource-hover.png[Project list resource name, 400]


[NOTE]
@@ -21,7 +21,7 @@ You can run a code cell from the notebook interface or from the keyboard:
+
image::workbenches/run_button.png[Jupyter Run]

* *From the keyboard:* Press `CTRL`+`ENTER` to run a cell or press `SHIFT`+`ENTER` to run the cell and automatically select the next one.
* *From the keyboard:* Press `CTRL` + `ENTER` to run a cell or press `SHIFT` + `ENTER` to run the cell and automatically select the next one.

After you run a cell, you can see the result of its code as well as information about when the cell was run, as shown in this example:

@@ -13,11 +13,11 @@ Note that you can start a Jupyter notebook from here, but it would be a one-off
+
image::projects/dashboard-click-projects.png[Data Science Projects List]
+
If you already have an active project that you'd like to use, select it now and skip ahead to the next section, xref:storing-data-with-data-connections.adoc[Storing data with data connections]. Otherwise, continue to the next step.
If you already have an active project that you want to use, select it now and skip ahead to the next section, xref:storing-data-with-data-connections.adoc[Storing data with data connections]. Otherwise, continue to the next step.

. Click *Create data science project*.

. Enter a display name and description. Based on the display name, a resource name is automatically generated, but you can change it if you'd like.
. Enter a display name and description. Based on the display name, a resource name is automatically generated, but you can change it if you prefer.
+
image::projects/ds-project-new-form.png[New data science project form]

