e2e: use helm to install out-of-tree cloud-provider-azure #2209

Merged
1 change: 1 addition & 0 deletions .gitignore
@@ -29,6 +29,7 @@ kind.kubeconfig
minikube.kubeconfig
kubeconfig
!kubeconfig/
*.kubeconfig

# ssh keys
.ssh*
27 changes: 24 additions & 3 deletions Makefile
@@ -108,6 +108,10 @@ KUBECTL_VER := v1.22.4
KUBECTL_BIN := kubectl
KUBECTL := $(TOOLS_BIN_DIR)/$(KUBECTL_BIN)-$(KUBECTL_VER)

HELM_VER := v3.8.1
HELM_BIN := helm
HELM := $(TOOLS_BIN_DIR)/$(HELM_BIN)-$(HELM_VER)

YQ_VER := v4.14.2
YQ_BIN := yq
YQ := $(TOOLS_BIN_DIR)/$(YQ_BIN)-$(YQ_VER)
@@ -186,6 +190,7 @@ clean-bin: ## Remove all generated binaries.
clean-temporary: ## Remove all temporary files and folders.
rm -f minikube.kubeconfig
rm -f kubeconfig
rm -f *.kubeconfig

.PHONY: clean-release
clean-release: ## Remove the release folder.
@@ -236,6 +241,9 @@ verify-tiltfile: ## Verify Tiltfile format.

##@ Development:

.PHONY: install-tools # populate hack/tools/bin
install-tools: $(ENVSUBST) $(KUSTOMIZE) $(KUBECTL) $(HELM) $(GINKGO)

Contributor Author:

Unifying these installs made it a lot easier to standardize dependency installs across the various jobs and scripts.

Should I go ahead and add the remaining set of tools in hack/tools/bin to this list for best practice going forward (despite the fact that not every invocation of make install-tools will require every single binary)?

cc @CecileRobertMichon @jsturtevant

Contributor:

> despite the fact that not every invocation of make install-tools will require every single binary

I'm a weak -1 on this: even if it improves the Makefile a little, it would make running, say, make go-test a lot slower the first time and install a bunch of binaries that aren't needed. I slightly prefer the "just in time" install approach.

Contributor Author:

Cool, yeah, that's the tradeoff.

Any concerns w/ this new install-tools target here? The practical problem is that we have more than one place (actually 3 or 4, across more than one file) that needs the above tools (or at least 4 of the 5) installed, and during this effort I saw how easy it is to miss a dep when adding a new one if the only option is to copy/paste the individual tool install commands around.

Contributor:

I'm also mildly opposed to this change if it adds any noticeable time to testing targets. Could we maybe have the new target install just the tools that are actually shared, or is this still a maintenance headache?

install-tools: $(ENVSUBST) $(KUSTOMIZE) $(KUBECTL)

test-e2e-run: generate-e2e-templates install-tools $(GINKGO)

Contributor Author:

Only test-e2e-run and tilt-up have been updated to include this new target, so I think that's a good trade-off (those flows are already fairly heavyweight and shouldn't slow down CI overall).


.PHONY: create-management-cluster
create-management-cluster: $(KUSTOMIZE) $(ENVSUBST) ## Create a management cluster.
# Create kind management cluster.
@@ -545,7 +553,7 @@ release: clean-release ## Builds and push container images using the latest git

.PHONY: release-manifests
release-manifests: $(KUSTOMIZE) $(RELEASE_DIR) ## Builds the manifests to publish with a release.
kustomize build config/default > $(RELEASE_DIR)/infrastructure-components.yaml
$(KUSTOMIZE) build config/default > $(RELEASE_DIR)/infrastructure-components.yaml

.PHONY: release-templates
release-templates: $(RELEASE_DIR)
@@ -616,7 +624,7 @@ test-cover: envs-test $(KUBECTL) $(KUBE_APISERVER) $(ETCD) ## Run tests with cod
go tool cover -html=coverage.out -o coverage.html

.PHONY: test-e2e-run
test-e2e-run: generate-e2e-templates $(ENVSUBST) $(KUSTOMIZE) $(KUBECTL) $(GINKGO) ## Run e2e tests.
test-e2e-run: generate-e2e-templates install-tools ## Run e2e tests.
$(ENVSUBST) < $(E2E_CONF_FILE) > $(E2E_CONF_FILE_ENVSUBST) && \
$(GINKGO) -v -trace -tags=e2e -focus="$(GINKGO_FOCUS)" -skip="$(GINKGO_SKIP)" -nodes=$(GINKGO_NODES) --noColor=$(GINKGO_NOCOLOR) $(GINKGO_ARGS) ./test/e2e -- \
-e2e.artifacts-folder="$(ARTIFACTS)" \
@@ -668,7 +676,7 @@ kind-create: $(KUBECTL) ## Create capz kind cluster if needed.
./scripts/kind-with-registry.sh

.PHONY: tilt-up
tilt-up: $(ENVSUBST) $(KUSTOMIZE) $(KUBECTL) kind-create ## Start tilt and build kind cluster if needed.
tilt-up: install-tools kind-create ## Start tilt and build kind cluster if needed.
EXP_CLUSTER_RESOURCE_SET=true EXP_AKS=true EXP_MACHINE_POOL=true tilt up

.PHONY: delete-cluster
@@ -698,6 +706,7 @@ release-notes: $(RELEASE_NOTES) ## Build a local copy of release notes.
goapi-diff: $(GO_APIDIFF) ## Build a local copy of go api-diff.
ginkgo: $(GINKGO) ## Build a local copy of ginkgo.
kubectl: $(KUBECTL) ## Build a local copy of kubectl.
helm: $(HELM) ## Build a local copy of helm.
yq: $(YQ) ## Build a local copy of yq.

$(CONVERSION_VERIFIER): go.mod
@@ -740,12 +749,24 @@ $(KUBECTL): ## Build kubectl from tools folder.
ln -sf $(KUBECTL) $(TOOLS_BIN_DIR)/$(KUBECTL_BIN)
chmod +x $(KUBECTL) $(TOOLS_BIN_DIR)/$(KUBECTL_BIN)

$(HELM): ## Put helm into tools folder.
mkdir -p $(TOOLS_BIN_DIR)
rm -f "$(TOOLS_BIN_DIR)/$(HELM_BIN)*"
curl -fsSL -o $(TOOLS_BIN_DIR)/get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 $(TOOLS_BIN_DIR)/get_helm.sh
USE_SUDO=false HELM_INSTALL_DIR=$(TOOLS_BIN_DIR) DESIRED_VERSION=$(HELM_VER) BINARY_NAME=$(HELM_BIN)-$(HELM_VER) $(TOOLS_BIN_DIR)/get_helm.sh
ln -sf $(HELM) $(TOOLS_BIN_DIR)/$(HELM_BIN)
rm -f $(TOOLS_BIN_DIR)/get_helm.sh

.PHONY: $(ENVSUBST_BIN)
$(ENVSUBST_BIN): $(ENVSUBST)

.PHONY: $(KUBECTL_BIN)
$(KUBECTL_BIN): $(KUBECTL)

.PHONY: $(HELM_BIN)
$(HELM_BIN): $(HELM)

.PHONY: $(GO_APIDIFF_BIN)
$(GO_APIDIFF_BIN): $(GO_APIDIFF)

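
As a quick sanity check of the new helm tooling target (a sketch, assuming it is run from the repo root), the pinned binary can be installed and inspected like so:

```bash
# Installs helm v3.8.1 into hack/tools/bin and symlinks it to hack/tools/bin/helm.
make helm
./hack/tools/bin/helm version --short
```
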
9 changes: 7 additions & 2 deletions Tiltfile
@@ -2,6 +2,7 @@

envsubst_cmd = "./hack/tools/bin/envsubst"
kubectl_cmd = "./hack/tools/bin/kubectl"
helm_cmd = "./hack/tools/bin/helm"
tools_bin = "./hack/tools/bin"

#Add tools to path
@@ -331,9 +332,13 @@ def deploy_worker_templates(template, substitutions):
yaml = yaml.replace("${" + substitution + "}", value)

yaml = yaml.replace('"', '\\"') # add escape character to double quotes in yaml
flavor_name = os.path.basename(flavor)
flavor_cmd = "RANDOM=$(bash -c 'echo $RANDOM'); CLUSTER_NAME=" + flavor.replace("windows", "win") + "-$RANDOM; make generate-flavors; echo \"" + yaml + "\" > ./.tiltbuild/" + flavor + "; cat ./.tiltbuild/" + flavor + " | " + envsubst_cmd + " | " + kubectl_cmd + " apply -f - && echo \"Cluster \'$CLUSTER_NAME\' created, don't forget to delete\""

Contributor:

I wanted to suggest this would be better as an f-string, but then I remembered this is Starlark.

if "external-cloud-provider" in flavor_name:
flavor_cmd += "; until " + kubectl_cmd + " get secret ${CLUSTER_NAME}-kubeconfig > /dev/null 2>&1; do sleep 5; done; " + kubectl_cmd + " get secret ${CLUSTER_NAME}-kubeconfig -o jsonpath={.data.value} | base64 --decode > ./${CLUSTER_NAME}.kubeconfig; chmod 600 ./${CLUSTER_NAME}.kubeconfig; until " + kubectl_cmd + " --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes > /dev/null 2>&1; do sleep 5; done; " + helm_cmd + " --kubeconfig ./${CLUSTER_NAME}.kubeconfig install --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo cloud-provider-azure --generate-name --set infra.clusterName=${CLUSTER_NAME}"
local_resource(
name = os.path.basename(flavor),
cmd = "RANDOM=$(bash -c 'echo $RANDOM'); CLUSTER_NAME=" + flavor.replace("windows", "win") + "-$RANDOM; make generate-flavors; echo \"" + yaml + "\" > ./.tiltbuild/" + flavor + "; cat ./.tiltbuild/" + flavor + " | " + envsubst_cmd + " | " + kubectl_cmd + " apply -f - && echo \"Cluster \'$CLUSTER_NAME\' created, don't forget to delete\"",
name = flavor_name,
cmd = flavor_cmd,
auto_init = False,
trigger_mode = TRIGGER_MODE_MANUAL,
labels = ["flavors"],
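
For readability, the external-cloud-provider branch of that generated `flavor_cmd` expands to roughly the following shell sequence (a sketch; the tool paths and `${CLUSTER_NAME}` come from the Tiltfile variables above):

```bash
# Wait for Cluster API to publish the workload cluster's kubeconfig secret.
until ./hack/tools/bin/kubectl get secret ${CLUSTER_NAME}-kubeconfig > /dev/null 2>&1; do sleep 5; done

# Extract the kubeconfig and wait for the workload cluster API server to respond.
./hack/tools/bin/kubectl get secret ${CLUSTER_NAME}-kubeconfig -o jsonpath={.data.value} | base64 --decode > ./${CLUSTER_NAME}.kubeconfig
chmod 600 ./${CLUSTER_NAME}.kubeconfig
until ./hack/tools/bin/kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes > /dev/null 2>&1; do sleep 5; done

# Install cloud-provider-azure from the official helm repo into the workload cluster.
./hack/tools/bin/helm --kubeconfig ./${CLUSTER_NAME}.kubeconfig install \
  --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo \
  cloud-provider-azure --generate-name --set infra.clusterName=${CLUSTER_NAME}
```
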
51 changes: 47 additions & 4 deletions docs/book/src/developers/kubernetes-developers.md
@@ -35,13 +35,56 @@ export CLUSTER_TEMPLATE="test/dev/cluster-template-custom-builds.yaml"

## Testing the out-of-tree cloud provider

To test changes made to the [Azure cloud provider](https://github.com/kubernetes-sigs/cloud-provider-azure), first build and push images for cloud-controller-manager and/or cloud-node-manager from the root of the cloud-provider-azure repo.
To test changes made to the [Azure cloud provider](https://github.com/kubernetes-sigs/cloud-provider-azure), first build and push images for cloud-controller-manager and/or cloud-node-manager from the branch of the cloud-provider-azure repo that contains your changes. Based on the image repository, image names, and image tag produced by your custom build and push, set the corresponding environment variables below:

Contributor Author:

@CecileRobertMichon FYI a new set of documentation changes since your most recent review

Contributor:

@nilo19 @lzhecheng could you please review this part?

Contributor:

lgtm

Contributor:

lgtm. Thanks!


```bash
$ export IMAGE_REGISTRY=docker.io/myusername
$ export CCM_IMAGE_NAME=azure-cloud-controller-manager
$ export CNM_IMAGE_NAME=azure-cloud-node-manager
$ export IMAGE_TAG=canary
```

Then, use the `external-cloud-provider` flavor to create a cluster:

```bash
AZURE_CLOUD_CONTROLLER_MANAGER_IMG=myrepo/my-ccm:v0.0.1 \
AZURE_CLOUD_NODE_MANAGER_IMG=myrepo/my-cnm:v0.0.1 \
CLUSTER_TEMPLATE=cluster-template-external-cloud-provider.yaml \
$ export CLUSTER_NAME=my-cluster
$ CLUSTER_TEMPLATE=cluster-template-external-cloud-provider.yaml \
make create-workload-cluster
```

Once your cluster deploys, retrieve the kubeconfig for the workload cluster and set your `KUBECONFIG` environment variable to point to it.
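
One way to do that is to read the kubeconfig out of the workload cluster's kubeconfig secret on the management cluster (a sketch, assuming your current `kubectl` context points at the management cluster; this mirrors what the Tiltfile flavor above does):

```bash
$ kubectl get secret ${CLUSTER_NAME}-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > ./${CLUSTER_NAME}.kubeconfig
$ chmod 600 ./${CLUSTER_NAME}.kubeconfig
$ export KUBECONFIG=$(pwd)/${CLUSTER_NAME}.kubeconfig
```

Then use the official cloud-provider-azure Helm chart to deploy the cloud-provider-azure components using your custom built images: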

```bash
$ helm install --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo cloud-provider-azure --generate-name --set infra.clusterName=${CLUSTER_NAME} \
--set cloudControllerManager.imageRepository="${IMAGE_REGISTRY}" \
--set cloudNodeManager.imageRepository="${IMAGE_REGISTRY}" \
--set cloudControllerManager.imageName="${CCM_IMAGE_NAME}" \
--set cloudNodeManager.imageName="${CNM_IMAGE_NAME}" \
--set cloudControllerManager.imageTag="${IMAGE_TAG}" \
--set cloudNodeManager.imageTag="${IMAGE_TAG}"
```

The `helm install` command above assumes you want to test custom images of both cloud-controller-manager and cloud-node-manager. If you only wish to test one component, omit the `--set` arguments for the other one (for example, to test only a custom cloud-controller-manager image, omit the three `--set cloudNodeManager...` arguments above).
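
For example, an install that only overrides the cloud-controller-manager image might look like this (a sketch of the same command with the cloudNodeManager overrides dropped):

```bash
$ helm install --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo cloud-provider-azure --generate-name --set infra.clusterName=${CLUSTER_NAME} \
    --set cloudControllerManager.imageRepository="${IMAGE_REGISTRY}" \
    --set cloudControllerManager.imageName="${CCM_IMAGE_NAME}" \
    --set cloudControllerManager.imageTag="${IMAGE_TAG}"
```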

Once you have installed the components via Helm, you should see the relevant pods running in your test cluster under the `kube-system` namespace. To iterate on this test cluster, you may manually edit the `cloud-controller-manager` Deployment and/or the `cloud-node-manager` DaemonSet delivered via `helm install`, or you may issue follow-up `helm` commands with each test iteration. For example:

```bash
$ export IMAGE_TAG=canaryv2
$ helm upgrade --install --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo cloud-provider-azure --generate-name --set infra.clusterName=${CLUSTER_NAME} \
--set cloudControllerManager.imageRepository="${IMAGE_REGISTRY}" \
--set cloudNodeManager.imageRepository="${IMAGE_REGISTRY}" \
--set cloudControllerManager.imageName="${CCM_IMAGE_NAME}" \
--set cloudNodeManager.imageName="${CNM_IMAGE_NAME}" \
--set cloudControllerManager.imageTag="${IMAGE_TAG}" \
--set cloudNodeManager.imageTag="${IMAGE_TAG}"
$ export IMAGE_TAG=canaryv3
$ helm upgrade --install --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo cloud-provider-azure --generate-name --set infra.clusterName=${CLUSTER_NAME} \
--set cloudControllerManager.imageRepository="${IMAGE_REGISTRY}" \
--set cloudNodeManager.imageRepository="${IMAGE_REGISTRY}" \
--set cloudControllerManager.imageName="${CCM_IMAGE_NAME}" \
--set cloudNodeManager.imageName="${CNM_IMAGE_NAME}" \
--set cloudControllerManager.imageTag="${IMAGE_TAG}" \
--set cloudNodeManager.imageTag="${IMAGE_TAG}"
```

Each successive `helm upgrade --install` command releases a new version of the chart, which replaces the Deployment and/or DaemonSet image configuration (and thus the pods running in the cluster) with the new image version built and pushed for each test iteration.
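
One way to confirm that an upgrade actually rolled out your new tag (a sketch; the Deployment and DaemonSet names are assumed to match the chart defaults shown in the pod listing elsewhere in these docs):

```bash
$ kubectl -n kube-system get deployment cloud-controller-manager \
    -o jsonpath='{.spec.template.spec.containers[0].image}'
$ kubectl -n kube-system get daemonset cloud-node-manager \
    -o jsonpath='{.spec.template.spec.containers[0].image}'
```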
14 changes: 13 additions & 1 deletion docs/book/src/topics/cloud-provider-config.md
@@ -71,7 +71,15 @@ All cloud provider config values can be customized by creating the `${RESOURCE}-

To deploy a cluster using [external cloud provider](https://github.com/kubernetes-sigs/cloud-provider-azure), create a cluster configuration with the [external cloud provider template](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-azure/main/templates/cluster-template-external-cloud-provider.yaml).

After deploying the cluster, you should eventually see a set of pods like the following in a `Running` state:
After the cluster has been provisioned, install the `cloud-provider-azure` components using the official Helm chart:

```bash
helm install --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo cloud-provider-azure --generate-name --set infra.clusterName=${CLUSTER_NAME}
```

The Helm chart will pick the right version of `cloud-controller-manager` and `cloud-node-manager` to work with the version of Kubernetes your cluster is running.

After running `helm install`, you should eventually see a set of pods like these in a `Running` state:

```bash
kube-system cloud-controller-manager 1/1 Running 0 41s
@@ -81,6 +89,10 @@ kube-system cloud-node-manager-mfsdg
kube-system cloud-node-manager-qrz74 1/1 Running 0 24s
```
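
To check this yourself, something like the following should work (a sketch, assuming your current context points at the new cluster; the grep pattern is just a convenience):

```bash
helm list                                      # shows the generated cloud-provider-azure release
kubectl get pods -n kube-system | grep cloud-  # cloud-controller-manager and cloud-node-manager pods
```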

For more information see the official [`cloud-provider-azure` helm chart documentation](https://github.com/kubernetes-sigs/cloud-provider-azure/tree/master/helm/cloud-provider-azure).

If you're not familiar with using Helm to manage Kubernetes applications as packages, see the [Helm documentation on the official website](https://helm.sh/docs/).

## Storage Drivers

### Azure File CSI Driver