
[RFE] Capture ARCH details in multiarch network perf test scenario #151

Open
SachinNinganure opened this issue Sep 10, 2024 · 12 comments · May be fixed by #152
Labels
enhancement New feature or request

Comments

@SachinNinganure
Contributor

Capture the architecture details in the multi-arch scenarios (ARM and x86_64).

Test link, for instance:
https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/openshift_release/56028/rehearse-56028-pull-ci-openshift-qe-ocp-qe-perfscale-ci-main-aws-4.17-nightly-multi-data-path-9nodes/1833129570308460544

@SachinNinganure added the enhancement label on Sep 10, 2024
@jtaleric
Member

Please open against the GoCommons repo. This seems relevant across the toolset. cc @rsevilla87 @vishnuchalla @chentex

@jtaleric
Member

@rsevilla87
Member

Correct, this data should already be available.

@jtaleric
Member

@SachinNinganure I'm going to close this out unless you see that we are missing this?

@SachinNinganure
Contributor Author

@krishvoor could you please check on this?

@SachinNinganure
Contributor Author

It is collecting the master and worker node info, but not the additional-workers.

@krishvoor
Member

@jtaleric Sachin is attempting k8s-netperf on a multi-arch worker node cluster setup [both ARM & x86_64].
Are we accounting for and capturing all of the architectures in the worker nodes?

@rsevilla87
Member

I've realized that the metadata collection performed by the ocp-metadata library from go-commons gets the worker architecture from the install-config, meaning that the collected node architecture won't be 100% correct in multi-arch cases like this.

Fortunately, k8s-netperf also grabs and indexes the labels from the nodes where the client/server pods run. Among these indexed labels you can find serverNodeLabels.kubernetes.io/arch and clientNodeLabels.kubernetes.io/arch, which contain the arch of the nodes where the client/server pods run.
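
(To illustrate the label-based approach described above, here is a minimal client-go sketch; it is not the actual k8s-netperf code, and the namespace "netperf" and pod name "netperf-client" are made-up placeholders.)

```go
// Sketch: look up the architecture of the node where a given pod (e.g. the
// netperf client or server) is scheduled, by reading the node's
// kubernetes.io/arch label. Namespace and pod name below are illustrative.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: running outside the cluster with a local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.TODO()
	// Find the node the pod landed on, then read its arch label.
	pod, err := clientset.CoreV1().Pods("netperf").Get(ctx, "netperf-client", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s runs on node %s with arch %s\n",
		pod.Name, node.Name, node.Labels["kubernetes.io/arch"])
}
```

The kubernetes.io/arch label carries the same value that ends up in the serverNodeLabels/clientNodeLabels fields mentioned above.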

@krishvoor
Member

> I've realized that the metadata collection performed by the ocp-metadata library from go-commons gets the worker architecture from the install-config, meaning that the collected node architecture won't be 100% correct in multi-arch cases like this.

Thanks for the insights @rsevilla87

> Fortunately, k8s-netperf also grabs and indexes the labels from the nodes where the client/server pods run. Among these indexed labels you can find serverNodeLabels.kubernetes.io/arch and clientNodeLabels.kubernetes.io/arch, which contain the arch of the nodes where the client/server pods run.

I guess this isn't the case across other tools (ingress-perf/kube-burner)?
I still think we need to add the capability to collect 100% correct arch data in multi-arch cases - thoughts?

@jtaleric
Member

> I've realized that the metadata collection performed by the ocp-metadata library from go-commons gets the worker architecture from the install-config, meaning that the collected node architecture won't be 100% correct in multi-arch cases like this.
>
> Thanks for the insights @rsevilla87
>
> Fortunately, k8s-netperf also grabs and indexes the labels from the nodes where the client/server pods run. Among these indexed labels you can find serverNodeLabels.kubernetes.io/arch and clientNodeLabels.kubernetes.io/arch, which contain the arch of the nodes where the client/server pods run.
>
> I guess this isn't the case across other tools (ingress-perf/kube-burner)? I still think we need to add the capability to collect 100% correct arch data in multi-arch cases - thoughts?

IMHO we cannot rely on the labels, as we have found with CNV that it creates an explosion of labels on nodes. We need to be specific in what we collect.

I would recommend someone open a PR to add the arch information for the nodes the server and client land on, not all of the nodes; collecting every node is not useful because only the server and the client matter to us.

wdyt?
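
(A minimal sketch of what such a PR could add, assuming client-go; the package and helper names are hypothetical and this is not the code from the linked PR. It reads the architecture from node status for the specific client/server nodes only, rather than indexing every node label.)

```go
// Package metadata (illustrative name): collect the architecture of only the
// nodes the client and server pods landed on.
package metadata

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeArchs returns a map of node name -> architecture for the given nodes
// only, taken from node status rather than from the node label set.
func nodeArchs(ctx context.Context, c kubernetes.Interface, nodeNames ...string) (map[string]string, error) {
	archs := make(map[string]string, len(nodeNames))
	for _, name := range nodeNames {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return nil, fmt.Errorf("getting node %s: %w", name, err)
		}
		// NodeSystemInfo.Architecture reports e.g. "amd64" or "arm64",
		// independent of whatever labels the node carries.
		archs[name] = node.Status.NodeInfo.Architecture
	}
	return archs, nil
}
```

Using node status avoids depending on the (potentially very large) node label set while still capturing the per-node architecture for exactly the client and server nodes.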

@rsevilla87 linked pull request #152 on Sep 17, 2024 that will close this issue