Release v0.6.0 doc validation (#1271)
kevin85421 committed Jul 26, 2023
1 parent a163a2e commit e4e8727
Showing 35 changed files with 222 additions and 949 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/actions/configuration/action.yaml
@@ -63,7 +63,7 @@ runs:
env:
GITHUB_ACTIONS: true
RAY_IMAGE: rayproject/ray:${{ inputs.ray_version }}
-OPERATOR_IMAGE: kuberay/operator:v0.5.0 # The operator image in the latest KubeRay release.
+OPERATOR_IMAGE: kuberay/operator:v0.6.0 # The operator image in the latest KubeRay release.
run: |
python tests/test_sample_raycluster_yamls.py
shell: bash
@@ -83,7 +83,7 @@ runs:
env:
GITHUB_ACTIONS: true
RAY_IMAGE: rayproject/ray:${{ inputs.ray_version }}
-OPERATOR_IMAGE: kuberay/operator:v0.5.0 # The operator image in the latest KubeRay release.
+OPERATOR_IMAGE: kuberay/operator:v0.6.0 # The operator image in the latest KubeRay release.
run: |
python tests/test_sample_rayjob_yamls.py
shell: bash
2 changes: 1 addition & 1 deletion .github/workflows/image-release.yaml
@@ -7,7 +7,7 @@ on:
description: 'Commit reference (branch or SHA) from which to build the images.'
required: true
tag:
-description: 'Desired release version tag (e.g. v0.5.0-rc.0).'
+description: 'Desired release version tag (e.g. v0.6.0-rc.0).'
required: true

jobs:
14 changes: 7 additions & 7 deletions README.md
@@ -45,12 +45,12 @@ We also recommend checking out the official Ray guides for deploying on Kubernet
## Quick Start

* Try this [end-to-end example](helm-chart/ray-cluster/README.md)!
-* Please choose the version you would like to install. The examples below use the latest stable version `v0.5.0`.
+* Please choose the version you would like to install. The examples below use the latest stable version `v0.6.0`.

| Version | Stable | Suggested Kubernetes Version |
|----------|:-------:|------------------------------:|
| master | N | v1.19 - v1.25 |
-| v0.5.0 | Y | v1.19 - v1.25 |
+| v0.6.0 | Y | v1.19 - v1.25 |

### Use YAML

@@ -59,19 +59,19 @@ Once you have connected to a Kubernetes cluster, run the following commands to d

```sh
# case 1: kubectl >= v1.22.0
-export KUBERAY_VERSION=v0.5.0
+export KUBERAY_VERSION=v0.6.0
kubectl create -k "github.com/ray-project/kuberay/ray-operator/config/default?ref=${KUBERAY_VERSION}&timeout=90s"

# case 2: kubectl < v1.22.0
-# Clone KubeRay repository and checkout to the desired branch e.g. `release-0.5`.
+# Clone KubeRay repository and checkout to the desired branch e.g. `release-0.6`.
kubectl create -k ray-operator/config/default
```
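The case split in the snippet above keys on the kubectl client version: remote kustomize refs with query parameters (`?ref=…&timeout=…`) need kubectl >= v1.22.0, while older clients must build from a local clone. A minimal sketch of that check, using a hypothetical `kubectl_supports_remote_ref` helper and assuming a `vMAJOR.MINOR.PATCH` version string:

```shell
# Hypothetical helper: decide which install path applies for a given
# kubectl client version string of the form vMAJOR.MINOR.PATCH.
kubectl_supports_remote_ref() {
  ver="${1#v}"          # strip the leading "v"
  major="${ver%%.*}"    # text before the first dot
  rest="${ver#*.}"
  minor="${rest%%.*}"   # text between the first and second dots
  # Remote kustomize refs with query parameters need kubectl >= v1.22.0.
  [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 22 ]; }
}

if kubectl_supports_remote_ref "v1.24.3"; then
  echo "case 1: install from the remote ref"
else
  echo "case 2: clone the repository and install from a local checkout"
fi
```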

To deploy both the KubeRay Operator and the optional KubeRay API Server run the following commands.

```sh
# case 1: kubectl >= v1.22.0
-export KUBERAY_VERSION=v0.5.0
+export KUBERAY_VERSION=v0.6.0
kubectl create -k "github.com/ray-project/kuberay/manifests/cluster-scope-resources?ref=${KUBERAY_VERSION}&timeout=90s"
kubectl apply -k "github.com/ray-project/kuberay/manifests/base?ref=${KUBERAY_VERSION}&timeout=90s"

@@ -92,8 +92,8 @@ Please read [kuberay-operator](helm-chart/kuberay-operator/README.md) to deploy
```sh
helm repo add kuberay https://ray-project.github.io/kuberay-helm/

-# Install both CRDs and KubeRay operator v0.5.0.
-helm install kuberay-operator kuberay/kuberay-operator --version 0.5.0
+# Install both CRDs and KubeRay operator v0.6.0.
+helm install kuberay-operator kuberay/kuberay-operator --version 0.6.0

# Check the KubeRay operator Pod in `default` namespace
kubectl get pods
8 changes: 4 additions & 4 deletions apiserver/README.md
@@ -34,8 +34,8 @@ helm version
```sh
helm repo add kuberay https://ray-project.github.io/kuberay-helm/

-# Install KubeRay APIServer v0.5.0.
-helm install kuberay-apiserver kuberay/kuberay-apiserver --version 0.5.0
+# Install KubeRay APIServer v0.6.0.
+helm install kuberay-apiserver kuberay/kuberay-apiserver --version 0.6.0

# Check the KubeRay APIServer Pod in `default` namespace
kubectl get pods
@@ -59,8 +59,8 @@ To list the `my-release` deployment:

```sh
helm ls
-# NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
-# kuberay-apiserver default 1 2023-02-07 09:28:15.510869781 -0500 EST deployed kuberay-apiserver-0.5.0
+# NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
+# kuberay-apiserver default 1 2023-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx deployed kuberay-apiserver-0.6.0
```

### Uninstall the Chart
2 changes: 1 addition & 1 deletion clients/python-client/python_client/__init__.py
@@ -1 +1 @@
-__version__ = "0.5.0"
+__version__ = "0.6.0"
2 changes: 1 addition & 1 deletion clients/python-client/setup.cfg
@@ -1,6 +1,6 @@
[metadata]
name = python_client
-version = 0.5.0
+version = 0.6.0
author = Ali Kanso
description = A Kuberay python client library to create/delete/update clusters
long_description = file: README.md
2 changes: 1 addition & 1 deletion docs/deploy/docker.md
@@ -3,7 +3,7 @@
Find the Docker images for various KubeRay components on [Dockerhub](https://hub.docker.com/u/kuberay).

#### Stable versions
-For stable releases, use version tags (e.g. `kuberay/operator:v0.5.0`).
+For stable releases, use version tags (e.g. `kuberay/operator:v0.6.0`).

#### Master commits
The first seven characters of the git SHA specify images built from specific commits
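The two tagging schemes described above — `vX.Y.Z` release tags versus seven-character commit SHAs for master builds — can be told apart mechanically. A small sketch, with a hypothetical `is_stable_tag` helper:

```shell
# Hypothetical helper: a stable release tag looks like vX.Y.Z,
# while master-commit images are tagged with a 7-character git SHA.
is_stable_tag() {
  printf '%s\n' "$1" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'
}

is_stable_tag "v0.6.0"  && echo "v0.6.0 is a stable release tag"
is_stable_tag "a163a2e" || echo "a163a2e looks like a commit-SHA tag"
```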
8 changes: 4 additions & 4 deletions docs/deploy/installation.md
@@ -26,10 +26,10 @@ helm install kuberay-operator kuberay/kuberay-operator
#### Method 2: Kustomize
```sh
# Install CRDs
-kubectl create -k "github.com/ray-project/kuberay/manifests/cluster-scope-resources?ref=v0.5.0&timeout=90s"
+kubectl create -k "github.com/ray-project/kuberay/manifests/cluster-scope-resources?ref=v0.6.0&timeout=90s"

# Install KubeRay operator
-kubectl apply -k "github.com/ray-project/kuberay/manifests/base?ref=v0.5.0"
+kubectl apply -k "github.com/ray-project/kuberay/manifests/base?ref=v0.6.0"
```

> Observe that we must use `kubectl create` to install cluster-scoped resources.
@@ -43,8 +43,8 @@ Users can use the following commands to deploy KubeRay operator in a specific na
export KUBERAY_NAMESPACE=<my-awesome-namespace>

# Install CRDs (Executed by cluster admin)
-kustomize build "github.com/ray-project/kuberay/manifests/overlays/single-namespace-resources?ref=v0.5.0" | envsubst | kubectl create -f -
+kustomize build "github.com/ray-project/kuberay/manifests/overlays/single-namespace-resources?ref=v0.6.0" | envsubst | kubectl create -f -

# Install KubeRay operator (Executed by user)
-kustomize build "github.com/ray-project/kuberay/manifests/overlays/single-namespace?ref=v0.5.0" | envsubst | kubectl apply -f -
+kustomize build "github.com/ray-project/kuberay/manifests/overlays/single-namespace?ref=v0.6.0" | envsubst | kubectl apply -f -
```
7 changes: 1 addition & 6 deletions docs/guidance/ingress.md
@@ -37,12 +37,7 @@ popd
# (2) Set the name of head pod service to `spec...backend.service.name`
eksctl get cluster ${YOUR_EKS_CLUSTER} # Check subnets on the EKS cluster

-# Step 4: Create an ALB ingress. When an ingress with proper annotations creates,
-# AWS Load Balancer controller will reconcile a ALB (not in AWS EKS cluster).
-# For RayService, you can use ray-operator/config/samples/ray_v1alpha1_rayservice-alb-ingress.yaml
-kubectl apply -f ray-operator/config/samples/alb-ingress.yaml
-
-# Step 5: Check ingress created by Step 4.
+# Step 4: Check ingress created by Step 4.
kubectl describe ingress ray-cluster-ingress

# [Example]
2 changes: 1 addition & 1 deletion docs/guidance/kubeflow-integration.md
@@ -37,7 +37,7 @@ kustomize version --short
```sh
# Create a RayCluster CR, and the KubeRay operator will reconcile a Ray cluster
# with 1 head Pod and 1 worker Pod.
-helm install raycluster kuberay/ray-cluster --version 0.5.0 --set image.tag=2.2.0-py38-cpu
+helm install raycluster kuberay/ray-cluster --version 0.6.0 --set image.tag=2.2.0-py38-cpu

# Check RayCluster
kubectl get pod -l ray.io/cluster=raycluster-kuberay
4 changes: 2 additions & 2 deletions docs/guidance/mobilenet-rayservice.md
@@ -10,8 +10,8 @@ kind create cluster --image=kindest/node:v1.23.0

## Step 2: Install KubeRay operator

-Follow [this document](../../helm-chart/kuberay-operator/README.md) to install the nightly KubeRay operator via
-Helm. Note that the YAML file in Step 3 uses `serveConfigV2`, which is first supported by KubeRay v0.6.0.
+Follow [this document](../../helm-chart/kuberay-operator/README.md) to install the latest stable KubeRay operator via Helm repository.
+Please note that the YAML file in this example uses `serveConfigV2`, which is supported starting from KubeRay v0.6.0.

## Step 3: Install a RayService

3 changes: 2 additions & 1 deletion docs/guidance/prometheus-grafana.md
@@ -35,7 +35,7 @@ kubectl get all -n prometheus-system
## Step 4: Install a RayCluster

```sh
-helm install raycluster kuberay/ray-cluster --version 0.5.0
+helm install raycluster kuberay/ray-cluster --version 0.6.0

# Check ${RAYCLUSTER_HEAD_POD}
kubectl get pod -l ray.io/node-type=head
@@ -271,6 +271,7 @@ kubectl port-forward --address 0.0.0.0 prometheus-prometheus-kube-prometheus-pro
- Go to `${YOUR_IP}:9090/targets` (e.g. `127.0.0.1:9090/targets`). You should be able to see:
- `podMonitor/prometheus-system/ray-workers-monitor/0 (1/1 up)`
- `serviceMonitor/prometheus-system/ray-head-monitor/0 (1/1 up)`

![Prometheus Web UI](../images/prometheus_web_ui.png)

- Go to `${YOUR_IP}:9090/graph`. You should be able to query:
2 changes: 1 addition & 1 deletion docs/guidance/rayjob.md
@@ -5,7 +5,7 @@
## Prerequisites

* Ray 1.10 or higher
-* KubeRay v0.3.0+. (v0.5.0+ is recommended)
+* KubeRay v0.3.0+. (v0.6.0+ is recommended)

## What is a RayJob?

5 changes: 2 additions & 3 deletions docs/guidance/rayservice.md
@@ -33,9 +33,8 @@ kind create cluster --image=kindest/node:v1.23.0

## Step 2: Install the KubeRay operator

-Follow [this document](https://github.com/ray-project/kuberay/blob/master/helm-chart/kuberay-operator/README.md) to install the nightly KubeRay operator via Helm.
-Note that sample RayService in this guide uses `serveConfigV2` to specify a multi-application Serve config.
-This will be first supported in Kuberay 0.6.0, and is currently supported only on the nightly KubeRay operator.
+Follow [this document](../../helm-chart/kuberay-operator/README.md) to install the latest stable KubeRay operator via Helm repository.
+Please note that the YAML file in this example uses `serveConfigV2` to specify a multi-application Serve config, which is supported starting from KubeRay v0.6.0.

## Step 3: Install a RayService

6 changes: 3 additions & 3 deletions docs/guidance/stable-diffusion-rayservice.md
@@ -7,10 +7,10 @@ and [the Ray documentation](https://docs.ray.io/en/latest/serve/tutorials/stable

Follow [aws-eks-gpu-cluster.md](./aws-eks-gpu-cluster.md) or [gcp-gke-gpu-cluster.md](./gcp-gke-gpu-cluster.md) to create a Kubernetes cluster with 1 CPU node and 1 GPU node.

-## Step 2: Install the nightly KubeRay operator
+## Step 2: Install KubeRay operator

-Follow [this document](../../helm-chart/kuberay-operator/README.md) to install the **nightly** KubeRay operator via
-Helm. We're installing the nightly release here since this example's YAML file uses `serveConfigV2`, which uses features that will be released in KubeRay v0.6.0.
+Follow [this document](../../helm-chart/kuberay-operator/README.md) to install the latest stable KubeRay operator via Helm repository.
+Please note that the YAML file in this example uses `serveConfigV2`, which is supported starting from KubeRay v0.6.0.

## Step 3: Install a RayService

2 changes: 1 addition & 1 deletion docs/guidance/tls.md
@@ -34,7 +34,7 @@ This [YouTube video](https://youtu.be/T4Df5_cojAs) is a good start.
your CA private key in a Kubernetes Secret in your production environment.

```sh
-# Install v0.5.0 KubeRay operator
+# Install v0.6.0 KubeRay operator
# `ray-cluster.tls.yaml` will cover from Step 1 to Step 3 (path: kuberay/)
kubectl apply -f ray-operator/config/samples/ray-cluster.tls.yaml

