
[victoria-metrics-k8s-stack] 0.24.5 and 0.25.3 helm delete #1308

Open
Dofamin opened this issue Aug 23, 2024 · 11 comments
@Dofamin

Dofamin commented Aug 23, 2024

  • kubectl version
    Client Version: v1.28.9
    Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
    Server Version: v1.28.9

  • helm version
    version.BuildInfo{Version:"v3.11.0", GitCommit:"472c5736ab01133de504a826bd9ee12cbe4e7904", GitTreeState:"clean",
    GoVersion:"go1.18.10"}

When trying to delete the release, I get this error:

> helm delete victoria-metrics-k8s-stack -n test --debug           
uninstall.go:95: [debug] uninstall: Deleting victoria-metrics-k8s-stack
client.go:133: [debug] creating 1 resource(s)
Error: warning: Hook pre-delete victoria-metrics-k8s-stack/charts/victoria-metrics-operator/templates/uninstall_hook.yaml failed: serviceaccounts "victoria-metrics-k8s-stack-victoria-metrics-operator-cleanup-ho" already exists
helm.go:84: [debug] serviceaccounts "victoria-metrics-k8s-stack-victoria-metrics-operator-cleanup-ho" already exists
warning: Hook pre-delete victoria-metrics-k8s-stack/charts/victoria-metrics-operator/templates/uninstall_hook.yaml failed
helm.sh/helm/v3/pkg/action.(*Configuration).execHook
        helm.sh/helm/v3/pkg/action/hooks.go:79
helm.sh/helm/v3/pkg/action.(*Uninstall).Run
        helm.sh/helm/v3/pkg/action/uninstall.go:102
main.newUninstallCmd.func2
        helm.sh/helm/v3/cmd/helm/uninstall.go:56
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/[email protected]/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/[email protected]/command.go:1044
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/[email protected]/command.go:968
main.main
        helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
        runtime/proc.go:250
runtime.goexit
        runtime/asm_arm64.s:1270

but it will uninstall when hooks are disabled:

> helm delete victoria-metrics-k8s-stack -n test --debug --no-hooks
uninstall.go:95: [debug] uninstall: Deleting victoria-metrics-k8s-stack
uninstall.go:106: [debug] delete hooks disabled for victoria-metrics-k8s-stack
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kubedns" Service
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-coredns" Service
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-etcd" Service
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-controller-manager" Service
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-state-metrics" Service
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-prometheus-node-exporter" Service
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-proxy" Service
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-victoria-metrics-operator" Service
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-scheduler" Service
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-state-metrics" Deployment
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-victoria-metrics-operator" Deployment
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-prometheus-node-exporter" DaemonSet
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-victoria-metrics-operator" RoleBinding
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-victoria-metrics-operator" Role
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-state-metrics" ClusterRoleBinding
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-victoria-metrics-operator" ClusterRoleBinding
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-victoria-metrics-operator" ClusterRole
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-state-metrics" ClusterRole
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-state-metrics" ServiceAccount
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack" ServiceAccount
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-prometheus-node-exporter" ServiceAccount
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-victoria-metrics-operator" ServiceAccount
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-prometheus-node-exporter" ServiceMonitor
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack" VMAgent
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kubelet" VMNodeScrape
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-probes" VMNodeScrape
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-cadvisor" VMNodeScrape
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-etcd" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-vmsingle" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-alertmanager.rules" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-general.rules" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-k8s.rules.podowner" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-state-metrics" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-apiserver-slos" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-node-network" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-prometheus-node-recording.rules" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-vmagent" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kubernetes-system-scheduler" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kubernetes-system" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-scheduler.rules" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-node-exporter.rules" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-apiserver-burnrate.rules" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-node.rules" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kubelet.rules" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-node-exporter" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-apiserver-availability.rules" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-vm-health" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-apiserver-histogram.rules" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-k8s.rules.containercpuusagesecondstotal" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kubernetes-storage" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kubernetes-system-controller-manager" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kubernetes-apps" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-k8s.rules.containermemoryworkingsetbytes" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kubernetes-system-apiserver" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-vmcluster" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kubernetes-resources" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-k8s.rules.containermemoryswap" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-prometheus-general.rules" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kubernetes-system-kubelet" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-k8s.rules.containermemorycache" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-k8s.rules.containerresource" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-k8s.rules.containermemoryrss" VMRule
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-operator" VMServiceScrape
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-etcd" VMServiceScrape
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-apiserver" VMServiceScrape
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kubedns" VMServiceScrape
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-proxy" VMServiceScrape
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-state-metrics" VMServiceScrape
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-scheduler" VMServiceScrape
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-controller-manager" VMServiceScrape
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-node-exporter" VMServiceScrape
client.go:477: [debug] Starting delete for "victoria-metrics-k8s-stack-coredns" VMServiceScrape
uninstall.go:148: [debug] purge requested for victoria-metrics-k8s-stack
release "victoria-metrics-k8s-stack" uninstalled
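The truncated ServiceAccount name in the error ("…-cleanup-ho") suggests the hook resource name is being cut at Kubernetes' 63-character DNS-label limit, so a leftover ServiceAccount from a previous hook run collides on the next delete. A minimal sketch of that truncation — the full name `…-cleanup-hook` is an assumption on my part; only the truncated form appears in the error:

```shell
# Helm chart name helpers commonly truncate generated names to 63 characters
# (the Kubernetes DNS label limit). Assuming the hook ServiceAccount template
# produces "<release>-victoria-metrics-operator-cleanup-hook" (hypothetical):
full="victoria-metrics-k8s-stack-victoria-metrics-operator-cleanup-hook"
truncated=$(printf '%s' "$full" | cut -c1-63)
echo "$truncated"
# -> victoria-metrics-k8s-stack-victoria-metrics-operator-cleanup-ho

# Possible workaround before retrying the uninstall (hypothetical; adjust
# namespace and name to your cluster):
# kubectl -n test delete serviceaccount "$truncated"
```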
@Dofamin
Author

Dofamin commented Aug 23, 2024

The same problem occurs with 0.25.3.

helm version
version.BuildInfo{Version:"v3.15.4", GitCommit:"fa9efb07d9d8debbb4306d72af76a383895aa8c4", GitTreeState:"clean", GoVersion:"go1.22.6"}

> helm delete victoria-metrics-k8s-stack -n test --debug
uninstall.go:102: [debug] uninstall: Deleting victoria-metrics-k8s-stack
client.go:142: [debug] creating 1 resource(s)
Error: warning: Hook pre-delete victoria-metrics-k8s-stack/charts/victoria-metrics-operator/templates/uninstall_hook.yaml failed: 1 error occurred:
        * serviceaccounts "victoria-metrics-k8s-stack-victoria-metrics-operator-cleanup-ho" already exists


helm.go:84: [debug] 1 error occurred:
        * serviceaccounts "victoria-metrics-k8s-stack-victoria-metrics-operator-cleanup-ho" already exists


warning: Hook pre-delete victoria-metrics-k8s-stack/charts/victoria-metrics-operator/templates/uninstall_hook.yaml failed
helm.sh/helm/v3/pkg/action.(*Configuration).execHook
        helm.sh/helm/v3/pkg/action/hooks.go:80
helm.sh/helm/v3/pkg/action.(*Uninstall).Run
        helm.sh/helm/v3/pkg/action/uninstall.go:109
main.newUninstallCmd.func2
        helm.sh/helm/v3/cmd/helm/uninstall.go:60
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/[email protected]/command.go:983
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/[email protected]/command.go:1115
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/[email protected]/command.go:1039
main.main
        helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
        runtime/proc.go:271
runtime.goexit
        runtime/asm_arm64.s:1222
> helm delete victoria-metrics-k8s-stack -n test --debug --no-hooks
uninstall.go:102: [debug] uninstall: Deleting victoria-metrics-k8s-stack
uninstall.go:113: [debug] delete hooks disabled for victoria-metrics-k8s-stack
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kubedns" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-scheduler" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-state-metrics" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-coredns" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-proxy" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-controller-manager" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-prometheus-node-exporter" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-victoria-metrics-operator" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-etcd" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-state-metrics" Deployment
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-victoria-metrics-operator" Deployment
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-prometheus-node-exporter" DaemonSet
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-victoria-metrics-operator" RoleBinding
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-victoria-metrics-operator" Role
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-state-metrics" ClusterRoleBinding
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-victoria-metrics-operator" ClusterRoleBinding
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-victoria-metrics-operator" ClusterRole
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-state-metrics" ClusterRole
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-victoria-metrics-operator-validation" Secret
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-prometheus-node-exporter" ServiceAccount
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack" ServiceAccount
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-victoria-metrics-operator" ServiceAccount
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-state-metrics" ServiceAccount
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-prometheus-node-exporter" ServiceMonitor
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack" VMAgent
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-probes" VMNodeScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kubelet" VMNodeScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-cadvisor" VMNodeScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-alertmanager.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-k8s.rules.containerresource" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-k8s.rules.podowner" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-apiserver-burnrate.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-k8s.rules.containercpuusagesecondsto" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-vmsingle" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-etcd" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-vm-health" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kubernetes-system" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-k8s.rules.containermemoryswap" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kubernetes-system-scheduler" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-node.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-prometheus-general.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-k8s.rules.containermemorycache" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-k8s.rules.containermemoryrss" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-apiserver-histogram.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-prometheus-node-recording.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-k8s.rules.containermemoryworkingsetb" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-apiserver-slos" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-apiserver-availability.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kubernetes-resources" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-scheduler.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-vmagent" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-node-exporter" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-node-network" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-state-metrics" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-vmcluster" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kubernetes-system-controller-manager" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-node-exporter.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-vmoperator" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kubernetes-system-apiserver" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kubelet.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kubernetes-storage" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kubernetes-system-kubelet" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kubernetes-apps" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-general.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-coredns" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-apiserver" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-proxy" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-controller-manager" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-scheduler" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kubedns" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-node-exporter" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-state-metrics" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-operator" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-kube-etcd" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-victoria-metrics-operator-admission" ValidatingWebhookConfiguration
uninstall.go:155: [debug] purge requested for victoria-metrics-k8s-stack
release "victoria-metrics-k8s-stack" uninstalled

@Dofamin Dofamin changed the title [victoria-metrics-k8s-stack] 0.24.5 helm delete [victoria-metrics-k8s-stack] 0.24.5 and 0.25.3 helm delete Aug 23, 2024
@AndrewChubatiuk
Contributor

AndrewChubatiuk commented Aug 23, 2024

I wasn't able to reproduce this on kind. Which Kubernetes version are you using? Are there any peculiarities in your setup? What values are you using?

@Dofamin
Author

Dofamin commented Aug 24, 2024

kubernetes version

1.28.9

To be honest, I don't know of any peculiarities; maybe the cloud provider adds a little spice to this cluster, who knows. If you can point me to where I can find information about such peculiarities, I'll provide it. The values are, in my opinion, not relevant:

this problem occurs even with default values (although with default values the chart cannot be installed at all; see issue #1307).

global:
  license:
    key: ""
    keyRef:
      {}
      # name: secret-license
      # key: license

nameOverride: ""
fullnameOverride: ""
tenant: "0"
# -- If this chart is used in "Argocd" with "releaseName" field then
# -- VMServiceScrapes couldn't select the proper services.
# -- For correct working need set value 'argocdReleaseOverride=$ARGOCD_APP_NAME'
argocdReleaseOverride: ""

# -- victoria-metrics-operator dependency chart configuration.
# -- For possible values refer to https://github.com/VictoriaMetrics/helm-charts/tree/master/charts/victoria-metrics-operator#parameters
# -- also checkout here possible ENV variables to configure operator behaviour https://docs.victoriametrics.com/operator/vars.html
victoria-metrics-operator:
  enabled: true
  # -- Tells helm to clean up vm cr resources when uninstalling
  cleanupCRD: true
  cleanupImage:
    repository: bitnami/kubectl
    # use image tag that matches k8s API version by default
    # tag: 1.29.6
    pullPolicy: IfNotPresent

  createCRD: false # we disable crd creation by operator chart as we create them in this chart
  operator:
    # -- By default, operator converts prometheus-operator objects.
    disable_prometheus_converter: false

serviceAccount:
  # -- Specifies whether a service account should be created
  create: true
  # -- Annotations to add to the service account
  annotations: {}
  # -- The name of the service account to use.
  # -- If not set and create is true, a name is generated using the fullname template
  name: ""

## -- Create default rules for monitoring the cluster
defaultRules:
  create: true

  # -- Common properties for VMRule groups
  group:
    spec:
      # -- Optional HTTP URL parameters added to each rule request
      params: {}

  # -- Common properties for all VMRules
  rule:
    spec:
      # -- Additional labels for all VMRules
      labels: {}
      # -- Additional annotations for all VMRules
      annotations: {}

  # -- Common properties for VMRules alerts
  alerting:
    spec:
      # -- Additional labels for VMRule alerts
      labels: {}
      # -- Additional annotations for VMRule alerts
      annotations: {}

  # -- Common properties for VMRules recording rules
  recording:
    spec:
      # -- Additional labels for VMRule recording rules
      labels: {}
      # -- Additional annotations for VMRule recording rules
      annotations: {}

  # -- Per rule properties
  rules:
    {}
    # CPUThrottlingHigh:
    #   create: true
    #   spec:
    #     for: 15m
    #     labels:
    #       severity: critical
  groups:
    etcd:
      create: true
      # -- Common properties for all rules in a group
      rules: {}
      # spec:
      #   annotations:
      #     dashboard: https://example.com/dashboard/1
    general:
      create: true
      rules: {}
    k8sContainerMemoryRss:
      create: true
      rules: {}
    k8sContainerMemoryCache:
      create: true
      rules: {}
    k8sContainerCpuUsageSecondsTotal:
      create: true
      rules: {}
    k8sPodOwner:
      create: true
      rules: {}
    k8sContainerResource:
      create: true
      rules: {}
    k8sContainerMemoryWorkingSetBytes:
      create: true
      rules: {}
    k8sContainerMemorySwap:
      create: true
      rules: {}
    kubeApiserver:
      create: true
      rules: {}
    kubeApiserverAvailability:
      create: true
      rules: {}
    kubeApiserverBurnrate:
      create: true
      rules: {}
    kubeApiserverHistogram:
      create: true
      rules: {}
    kubeApiserverSlos:
      create: true
      rules: {}
    kubelet:
      create: true
      rules: {}
    kubePrometheusGeneral:
      create: true
      rules: {}
    kubePrometheusNodeRecording:
      create: true
      rules: {}
    kubernetesApps:
      create: true
      rules: {}
      targetNamespace: ".*"
    kubernetesResources:
      create: true
      rules: {}
    kubernetesStorage:
      create: true
      rules: {}
      targetNamespace: ".*"
    kubernetesSystem:
      create: true
      rules: {}
    kubernetesSystemKubelet:
      create: true
      rules: {}
    kubernetesSystemApiserver:
      create: true
      rules: {}
    kubernetesSystemControllerManager:
      create: true
      rules: {}
    kubeScheduler:
      create: true
      rules: {}
    kubernetesSystemScheduler:
      create: true
      rules: {}
    kubeStateMetrics:
      create: true
      rules: {}
    nodeNetwork:
      create: true
      rules: {}
    node:
      create: true
      rules: {}
    vmagent:
      create: true
      rules: {}
    vmsingle:
      create: true
      rules: {}
    vmcluster:
      create: true
      rules: {}
    vmHealth:
      create: true
      rules: {}
    vmoperator:
      create: true
      rules: {}
    alertmanager:
      create: true
      rules: {}

  # -- Runbook url prefix for default rules
  runbookUrl: https://runbooks.prometheus-operator.dev/runbooks

  # -- Labels for default rules
  labels: {}
  # -- Annotations for default rules
  annotations: {}

## -- Create default dashboards
defaultDashboardsEnabled: false

## -- Create experimental dashboards
experimentalDashboardsEnabled: false

## -- Create dashboards as CRDs (requires grafana-operator to be installed)
grafanaOperatorDashboardsFormat:
  enabled: false
  instanceSelector:
    matchLabels:
      dashboards: "grafana"
  allowCrossNamespaceImport: false

# Provide custom recording or alerting rules to be deployed into the cluster.
additionalVictoriaMetricsMap:
#    rule-name:
#     groups:
#     - name: my_group
#       rules:
#       - record: my_record
#         expr: 100 * my_record

externalVM:
  read:
    url: ""
    # bearerTokenSecret:
    #   name: dbaas-read-access-token
    #   key: bearerToken
  write:
    url: "http://prod-srv-vm:8428/api/v1/write"
    # bearerTokenSecret:
    #   name: dbaas-read-access-token
    #   key: bearerToken

##############

# -- Configures vmsingle params
vmsingle:
  annotations: {}
  enabled: false
  # spec for VMSingle crd
  # https://docs.victoriametrics.com/operator/api.html#vmsinglespec
  spec:
    image:
      tag: v1.102.1
    # -- Data retention period. Possible units character: h(ours), d(ays), w(eeks), y(ears), if no unit character specified - month. The minimum retention period is 24h. See these [docs](https://docs.victoriametrics.com/single-server-victoriametrics/#retention)
    retentionPeriod: "1"
    replicaCount: 1
    extraArgs: {}
    storage:
      storageClassName: csi-ceph-ssd-gz1
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
  ingress:
    enabled: false
    # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
    # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
    # ingressClassName: nginx
    # Values can be templated
    annotations:
      {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    labels: {}
    path: /
    # pathType is only for k8s > 1.19
    pathType: Prefix

    hosts:
      - vmsingle.domain.com
    ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
    extraPaths: []
    # - path: /*
    #   backend:
    #     serviceName: ssl-redirect
    #     servicePort: use-annotation
    ## Or for k8s > 1.19
    # - path: /*
    #   pathType: Prefix
    #   backend:
    #     service:
    #       name: ssl-redirect
    #       port:
    #         name: service
    tls: []
    #  - secretName: vmsingle-ingress-tls
    #    hosts:
    #      - vmsingle.domain.com

vmcluster:
  enabled: false
  annotations: {}
  # spec for VMCluster crd
  # https://docs.victoriametrics.com/operator/api.html#vmclusterspec
  spec:
    # -- Data retention period. Possible units character: h(ours), d(ays), w(eeks), y(ears), if no unit character specified - month. The minimum retention period is 24h. See these [docs](https://docs.victoriametrics.com/single-server-victoriametrics/#retention)
    retentionPeriod: "1"
    replicationFactor: 2
    vmstorage:
      image:
        tag: v1.102.1-cluster
      replicaCount: 2
      storageDataPath: "/vm-data"
      storage:
        volumeClaimTemplate:
          spec:
            resources:
              requests:
                storage: 10Gi
      resources:
        {}
        # limits:
        #   cpu: "1"
        #   memory: 1500Mi
    vmselect:
      image:
        tag: v1.102.1-cluster
      replicaCount: 2
      cacheMountPath: "/select-cache"
      extraArgs: {}
      storage:
        volumeClaimTemplate:
          spec:
            resources:
              requests:
                storage: 2Gi
      resources:
        {}
        # limits:
        #   cpu: "1"
        #   memory: "1000Mi"
        # requests:
        #   cpu: "0.5"
        #   memory: "500Mi"
    vminsert:
      image:
        tag: v1.102.1-cluster
      replicaCount: 2
      extraArgs: {}
      resources:
        {}
        # limits:
        #   cpu: "1"
        #   memory: 1000Mi
        # requests:
        #   cpu: "0.5"
        #   memory: "500Mi"
  ingress:
    storage:
      enabled: false
      # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
      # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
      # ingressClassName: nginx
      # Values can be templated
      annotations:
        {}
        # kubernetes.io/ingress.class: nginx
        # kubernetes.io/tls-acme: "true"
      labels: {}
      path: /
      # pathType is only for k8s > 1.19
      pathType: Prefix

      hosts:
        - vmstorage.domain.com
      ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
      extraPaths: []
      # - path: /*
      #   backend:
      #     serviceName: ssl-redirect
      #     servicePort: use-annotation
      ## Or for k8s > 1.19
      # - path: /*
      #   pathType: Prefix
      #   backend:
      #     service:
      #       name: ssl-redirect
      #       port:
      #         name: service
      tls: []
      #  - secretName: vmstorage-ingress-tls
      #    hosts:
      #      - vmstorage.domain.com
    select:
      enabled: false
      # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
      # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
      # ingressClassName: nginx
      # Values can be templated
      annotations:
        {}
        # kubernetes.io/ingress.class: nginx
        # kubernetes.io/tls-acme: "true"
      labels: {}
      path: /
      # pathType is only for k8s > 1.19
      pathType: Prefix

      hosts:
        - vmselect.domain.com
      ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
      extraPaths: []
      # - path: /*
      #   backend:
      #     serviceName: ssl-redirect
      #     servicePort: use-annotation
      ## Or for k8s > 1.19
      # - path: /*
      #   pathType: Prefix
      #   backend:
      #     service:
      #       name: ssl-redirect
      #       port:
      #         name: service
      tls: []
      #  - secretName: vmselect-ingress-tls
      #    hosts:
      #      - vmselect.domain.com
    insert:
      enabled: false
      # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
      # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
      # ingressClassName: nginx
      # Values can be templated
      annotations:
        {}
        # kubernetes.io/ingress.class: nginx
        # kubernetes.io/tls-acme: "true"
      labels: {}
      path: /
      # pathType is only for k8s > 1.19
      pathType: Prefix

      hosts:
        - vminsert.domain.com
      ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
      extraPaths: []
      # - path: /*
      #   backend:
      #     serviceName: ssl-redirect
      #     servicePort: use-annotation
      ## Or for k8s > 1.19
      # - path: /*
      #   pathType: Prefix
      #   backend:
      #     service:
      #       name: ssl-redirect
      #       port:
      #         name: service
      tls: []
      #  - secretName: vminsert-ingress-tls
      #    hosts:
      #      - vminsert.domain.com

alertmanager:
  enabled: false
  annotations: {}
  # spec for VMAlertmanager crd
  # https://docs.victoriametrics.com/operator/api.html#vmalertmanagerspec
  spec:
    selectAllByDefault: true
    image:
      tag: v0.25.0
    externalURL: ""
    routePrefix: /

    # if this is defined, it will be used for the alertmanager configuration and the config parameter will be ignored
    # configSecret: "alertmanager-config"

  config:
    templates:
      - "/etc/vm/configs/**/*.tmpl"
    route:
      # group_by: ["alertgroup", "job"]
      # group_wait: 30s
      # group_interval: 5m
      # repeat_interval: 12h
      receiver: "blackhole"
      ## routes:
      ###################################################
      ## Duplicate code_owner routes to teams
      ## These will send alerts to team channels but continue
      ## processing through the rest of the tree to be handled by on-call
      # - matchers:
      #     - code_owner_channel!=""
      #     - severity=~"info|warning|critical"
      #   group_by: ["code_owner_channel", "alertgroup", "job"]
      #   receiver: slack-code-owners
      # ###################################################
      # ## Standard on-call routes
      # - matchers:
      #     - severity=~"info|warning|critical"
      #   receiver: slack-monitoring
      #   continue: true

    # inhibit_rules:
    #   - target_matchers:
    #       - severity=~"warning|info"
    #     source_matchers:
    #       - severity=critical
    #     equal:
    #       - cluster
    #       - namespace
    #       - alertname
    #   - target_matchers:
    #       - severity=info
    #     source_matchers:
    #       - severity=warning
    #     equal:
    #       - cluster
    #       - namespace
    #       - alertname
    #   - target_matchers:
    #       - severity=info
    #     source_matchers:
    #       - alertname=InfoInhibitor
    #     equal:
    #       - cluster
    #       - namespace

    receivers:
      - name: blackhole
      # - name: "slack-monitoring"
      #   slack_configs:
      #     - channel: "#channel"
      #       send_resolved: true
      #       title: '{{ template "slack.monzo.title" . }}'
      #       icon_emoji: '{{ template "slack.monzo.icon_emoji" . }}'
      #       color: '{{ template "slack.monzo.color" . }}'
      #       text: '{{ template "slack.monzo.text" . }}'
      #       actions:
      #         - type: button
      #           text: "Runbook :green_book:"
      #           url: "{{ (index .Alerts 0).Annotations.runbook_url }}"
      #         - type: button
      #           text: "Query :mag:"
      #           url: "{{ (index .Alerts 0).GeneratorURL }}"
      #         - type: button
      #           text: "Dashboard :grafana:"
      #           url: "{{ (index .Alerts 0).Annotations.dashboard }}"
      #         - type: button
      #           text: "Silence :no_bell:"
      #           url: '{{ template "__alert_silence_link" . }}'
      #         - type: button
      #           text: '{{ template "slack.monzo.link_button_text" . }}'
      #           url: "{{ .CommonAnnotations.link_url }}"
      # - name: slack-code-owners
      #   slack_configs:
      #     - channel: "#{{ .CommonLabels.code_owner_channel }}"
      #       send_resolved: true
      #       title: '{{ template "slack.monzo.title" . }}'
      #       icon_emoji: '{{ template "slack.monzo.icon_emoji" . }}'
      #       color: '{{ template "slack.monzo.color" . }}'
      #       text: '{{ template "slack.monzo.text" . }}'
      #       actions:
      #         - type: button
      #           text: "Runbook :green_book:"
      #           url: "{{ (index .Alerts 0).Annotations.runbook }}"
      #         - type: button
      #           text: "Query :mag:"
      #           url: "{{ (index .Alerts 0).GeneratorURL }}"
      #         - type: button
      #           text: "Dashboard :grafana:"
      #           url: "{{ (index .Alerts 0).Annotations.dashboard }}"
      #         - type: button
      #           text: "Silence :no_bell:"
      #           url: '{{ template "__alert_silence_link" . }}'
      #         - type: button
      #           text: '{{ template "slack.monzo.link_button_text" . }}'
      #           url: "{{ .CommonAnnotations.link_url }}"
      #
  # better alert templates for slack
  # source https://gist.github.com/milesbxf/e2744fc90e9c41b47aa47925f8ff6512
  monzoTemplate:
    enabled: true

  # extra alert templates
  templateFiles:
    {}
    # template_1.tmpl: |-
    #   {{ define "hello" -}}
    #   hello, Victoria!
    #   {{- end }}
    # template_2.tmpl: ""

  ingress:
    enabled: false
    # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
    # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
    # ingressClassName: nginx
    # Values can be templated
    annotations:
      {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    labels: {}
    path: /
    # pathType is only for k8s > 1.19
    pathType: Prefix

    hosts:
      - alertmanager.domain.com
    ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
    extraPaths: []
    # - path: /*
    #   backend:
    #     serviceName: ssl-redirect
    #     servicePort: use-annotation
    ## Or for k8s > 1.19
    # - path: /*
    #   pathType: Prefix
    #   backend:
    #     service:
    #       name: ssl-redirect
    #       port:
    #         name: service
    tls: []
    #  - secretName: alertmanager-ingress-tls
    #    hosts:
    #      - alertmanager.domain.com

vmalert:
  annotations: {}
  enabled: false

  # Controls whether VMAlert should use VMAgent or VMInsert as a target for remotewrite
  remoteWriteVMAgent: false
  # spec for VMAlert crd
  # https://docs.victoriametrics.com/operator/api.html#vmalertspec
  spec:
    selectAllByDefault: true
    image:
      tag: v1.102.1
    evaluationInterval: 15s

    # External labels to add to all generated recording rules and alerts
    externalLabels: {}

  # extra vmalert annotation templates
  templateFiles:
    {}
    # template_1.tmpl: |-
    #   {{ define "hello" -}}
    #   hello, Victoria!
    #   {{- end }}
    # template_2.tmpl: ""

  ## additionalNotifierConfigs allows configuring static notifiers and discovering notifiers via Consul and DNS,
  ## see specification in https://docs.victoriametrics.com/vmalert/#notifier-configuration-file.
  ## This configuration will be created as separate secret and mounted to vmalert pod.
  additionalNotifierConfigs:
    {}
    # dns_sd_configs:
    #   - names:
    #       - my.domain.com
    #     type: 'A'
    #     port: 9093

  ingress:
    enabled: false
    # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
    # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
    # ingressClassName: nginx
    # Values can be templated
    annotations:
      {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    labels: {}
    path: /
    # pathType is only for k8s > 1.19
    pathType: Prefix

    hosts:
      - vmalert.domain.com
    ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
    extraPaths: []
    # - path: /*
    #   backend:
    #     serviceName: ssl-redirect
    #     servicePort: use-annotation
    ## Or for k8s > 1.19
    # - path: /*
    #   pathType: Prefix
    #   backend:
    #     service:
    #       name: ssl-redirect
    #       port:
    #         name: service
    tls: []
    #  - secretName: vmalert-ingress-tls
    #    hosts:
    #      - vmalert.domain.com

vmagent:
  enabled: true
  annotations: {}
  # https://docs.victoriametrics.com/operator/api.html#vmagentremotewritespec
  # defined spec will be added to the remoteWrite configuration of VMAgent
  additionalRemoteWrites:
    []
    #- url: http://some-remote-write/api/v1/write
  # spec for VMAgent crd
  # https://docs.victoriametrics.com/operator/api.html#vmagentspec
  spec:
    selectAllByDefault: true
    image:
      tag: v1.102.1
    scrapeInterval: 20s
    externalLabels:
      cluster: "dev-infra"
      # For multi-cluster setups it is useful to use "cluster" label to identify the metrics source.
      # For example:
      # cluster: cluster-name
    extraArgs:
      promscrape.streamParse: "true"
      # Set this to "true" to stop storing original labels in vmagent's memory. That reduces the amount of memory used by vmagent,
      # but makes the vmagent debugging UI less informative. See: https://docs.victoriametrics.com/vmagent/#relabel-debug
      promscrape.dropOriginalLabels: "false"
      promscrape.noStaleMarkers: "true"
  ingress:
    enabled: false
    # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
    # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
    # ingressClassName: nginx
    # Values can be templated
    annotations:
      {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    labels: {}
    path: /
    # pathType is only for k8s > 1.19
    pathType: Prefix

    hosts:
      - vmagent.domain.com
    ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
    extraPaths: []
    # - path: /*
    #   backend:
    #     serviceName: ssl-redirect
    #     servicePort: use-annotation
    ## Or for k8s > 1.19
    # - path: /*
    #   pathType: Prefix
    #   backend:
    #     service:
    #       name: ssl-redirect
    #       port:
    #         name: service
    tls: []
    #  - secretName: vmagent-ingress-tls
    #    hosts:
    #      - vmagent.domain.com

#################################################
###              dependencies               #####
#################################################
# Grafana dependency chart configuration. For possible values refer to https://github.com/grafana/helm-charts/tree/main/charts/grafana#configuration
grafana:
  enabled: false
  ## all values for grafana helm chart can be specified here
  sidecar:
    datasources:
      enabled: true
      initDatasources: true
      createVMReplicasDatasources: false
      # JSON options for VM datasources
      # See https://grafana.com/docs/grafana/latest/administration/provisioning/#json-data
      jsonData: {}
      #  timeInterval: "1m"
    dashboards:
      additionalDashboardLabels: {}
      additionalDashboardAnnotations: {}
      enabled: true
      multicluster: false

  ## forceDeployDatasource: create the datasource configmap even if the grafana deployment has been disabled
  forceDeployDatasource: false
  # Set to false to disable the default datasource connected directly to victoria-metrics
  provisionDefaultDatasource: true

  ## Configure additional grafana datasources (passed through tpl)
  ## ref: http://docs.grafana.org/administration/provisioning/#datasources
  additionalDataSources: []
  # - name: prometheus-sample
  #   access: proxy
  #   basicAuth: true
  #   basicAuthPassword: pass
  #   basicAuthUser: daco
  #   editable: false
  #   jsonData:
  #       tlsSkipVerify: true
  #   orgId: 1
  #   type: prometheus
  #   url: https://{{ printf "%s-prometheus.svc" .Release.Name }}:9090
  #   version: 1

  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
        - name: "default"
          orgId: 1
          folder: ""
          type: file
          disableDeletion: false
          editable: true
          options:
            path: /var/lib/grafana/dashboards/default

  dashboards:
    default:
      nodeexporter:
        gnetId: 1860
        revision: 22
        datasource: VictoriaMetrics

  defaultDashboardsTimezone: utc

  # Enabling VictoriaMetrics Datasource in Grafana. See more details here: https://github.com/VictoriaMetrics/grafana-datasource/blob/main/README.md#victoriametrics-datasource-for-grafana
  # Note that Grafana will need internet access to install the datasource plugin.
  # Uncomment the block below, if you want to enable VictoriaMetrics Datasource in Grafana:
  #plugins:
  #  - "https://github.com/VictoriaMetrics/grafana-datasource/releases/download/v0.5.0/victoriametrics-datasource-v0.5.0.zip;victoriametrics-datasource"
  #grafana.ini:
  #  plugins:
  #    # Why VictoriaMetrics datasource is unsigned: https://github.com/VictoriaMetrics/grafana-datasource/blob/main/README.md#why-victoriametrics-datasource-is-unsigned
  #    allow_loading_unsigned_plugins: victoriametrics-datasource

  # Change datasource type in dashboards from Prometheus to VictoriaMetrics.
  # You can use `victoriametrics-datasource` instead of `prometheus` if you enabled the VictoriaMetrics Datasource above.
  defaultDatasourceType: "prometheus"

  ingress:
    enabled: false
    # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
    # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
    # ingressClassName: nginx
    # Values can be templated
    annotations:
      {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    labels: {}
    path: /
    # pathType is only for k8s > 1.19
    pathType: Prefix

    hosts:
      - grafana.domain.com
    ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
    extraPaths: []
    # - path: /*
    #   backend:
    #     serviceName: ssl-redirect
    #     servicePort: use-annotation
    ## Or for k8s > 1.19
    # - path: /*
    #   pathType: Prefix
    #   backend:
    #     service:
    #       name: ssl-redirect
    #       port:
    #         name: service
    tls: []
    #  - secretName: grafana-ingress-tls
    #    hosts:
    #      - grafana.domain.com

  vmServiceScrape:
    # whether we should create a service scrape resource for grafana
    enabled: true

    # spec for VMServiceScrape crd
    # https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec
    spec: {}

# prometheus-node-exporter dependency chart configuration. For possible values refer to https://github.com/prometheus-community/helm-charts/blob/main/charts/prometheus-node-exporter/values.yaml
prometheus-node-exporter:
  enabled: true
  namespaceOverride: kube-system
  ## all values for prometheus-node-exporter helm chart can be specified here
  podLabels:
    ## Add the 'node-exporter' label to be used by serviceMonitor to match standard common usage in rules and grafana dashboards
    ##
    jobLabel: Node-Exporter
  extraArgs:
    - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/.+)($|/)
    - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$

  vmServiceScrape:
    # whether we should create a service scrape resource for node-exporter
    enabled: true

    # spec for VMServiceScrape crd
    # https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec
    spec:
      jobLabel: jobLabel
      endpoints:
        - port: metrics
          metricRelabelConfigs:
            - action: drop
              source_labels: [mountpoint]
              regex: "/var/lib/kubelet/pods.+"

# kube-state-metrics dependency chart configuration. For possible values refer to https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-state-metrics/values.yaml
kube-state-metrics:
  enabled: true
  ## all values for kube-state-metrics helm chart can be specified here

  # spec for VMServiceScrape crd
  # https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec
  vmServiceScrape:
    spec: {}

#################################################
###              Service Monitors           #####
#################################################
## Component scraping the kubelets
kubelet:
  enabled: true

  # -- Enable scraping /metrics/cadvisor from kubelet's service
  cadvisor: true
  # -- Enable scraping /metrics/probes from kubelet's service
  probes: true
  # spec for VMNodeScrape crd
  # https://docs.victoriametrics.com/operator/api.html#vmnodescrapespec
  spec:
    scheme: "https"
    honorLabels: true
    interval: "30s"
    scrapeTimeout: "5s"
    tlsConfig:
      insecureSkipVerify: true
      caFile: "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
    bearerTokenFile: "/var/run/secrets/kubernetes.io/serviceaccount/token"
    # drop high cardinality label and useless metrics for cadvisor and kubelet
    metricRelabelConfigs:
      - action: labeldrop
        regex: (uid)
      - action: labeldrop
        regex: (id|name)
      - action: drop
        source_labels: [__name__]
        regex: (rest_client_request_duration_seconds_bucket|rest_client_request_duration_seconds_sum|rest_client_request_duration_seconds_count)
    relabelConfigs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - sourceLabels: [__metrics_path__]
        targetLabel: metrics_path
      - targetLabel: "job"
        replacement: "kubelet"
    # ignore timestamps of cadvisor's metrics by default
    # more info here https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4697#issuecomment-1656540535
    honorTimestamps: false
# -- Component scraping the kube api server
kubeApiServer:
  enabled: true
  # spec for VMServiceScrape crd
  # https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec
  spec:
    endpoints:
      - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
        # bearerTokenSecret:
        #   key: ""
        port: https
        scheme: https
        tlsConfig:
          caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          serverName: kubernetes
    jobLabel: component
    namespaceSelector:
      matchNames:
        - default
    selector:
      matchLabels:
        component: apiserver
        provider: kubernetes

# -- Component scraping the kube controller manager
kubeControllerManager:
  enabled: true

  ## If your kube controller manager is not deployed as a pod, specify IPs it can be found on
  ##
  endpoints: []
  # - 10.141.4.22
  # - 10.141.4.23
  # - 10.141.4.24

  ## If using kubeControllerManager.endpoints, only the port and targetPort are used
  ##
  service:
    enabled: true
    port: 10257
    targetPort: 10257
    selector:
      component: kube-controller-manager

  # spec for VMServiceScrape crd
  # https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec
  spec:
    jobLabel: jobLabel
    endpoints:
      - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
        # bearerTokenSecret:
        #   key: ""
        port: http-metrics
        scheme: https
        tlsConfig:
          caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          serverName: kubernetes

# -- Component scraping kubeDns. Use either this or coreDns
kubeDns:
  enabled: true
  service:
    enabled: true
    dnsmasq:
      port: 10054
      targetPort: 10054
    skydns:
      port: 10055
      targetPort: 10055
    selector:
      k8s-app: kube-dns
  # spec for VMServiceScrape crd
  # https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec
  spec:
    endpoints:
      - port: http-metrics-dnsmasq
        bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      - port: http-metrics-skydns
        bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token

# -- Component scraping coreDns. Use either this or kubeDns
coreDns:
  enabled: true
  service:
    enabled: true
    port: 9153
    targetPort: 9153
    selector:
      k8s-app: kube-dns

  # spec for VMServiceScrape crd
  # https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec
  spec:
    jobLabel: jobLabel
    endpoints:
      - port: http-metrics
        bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token

## Component scraping etcd
##
kubeEtcd:
  enabled: true

  ## If your etcd is not deployed as a pod, specify IPs it can be found on
  ##
  endpoints: []
  # - 10.141.4.22
  # - 10.141.4.23
  # - 10.141.4.24

  ## Etcd service. If using kubeEtcd.endpoints, only the port and targetPort are used
  ##
  service:
    enabled: true
    port: 2379
    targetPort: 2379
    selector:
      component: etcd

  # spec for VMServiceScrape crd
  # https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec
  spec:
    jobLabel: jobLabel
    endpoints:
      - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
        # bearerTokenSecret:
        #   key: ""
        port: http-metrics
        scheme: https
        tlsConfig:
          caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

## Component scraping kube scheduler
##
kubeScheduler:
  enabled: true

  ## If your kube scheduler is not deployed as a pod, specify IPs it can be found on
  ##
  endpoints: []
  # - 10.141.4.22
  # - 10.141.4.23
  # - 10.141.4.24

  ## If using kubeScheduler.endpoints, only the port and targetPort are used
  ##
  service:
    enabled: true
    port: 10259
    targetPort: 10259
    selector:
      component: kube-scheduler

  # spec for VMServiceScrape crd
  # https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec
  spec:
    jobLabel: jobLabel
    endpoints:
      - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
        # bearerTokenSecret:
        #   key: ""
        port: http-metrics
        scheme: https
        tlsConfig:
          caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

## Component scraping kube proxy
##
kubeProxy:
  enabled: true

  ## If your kube proxy is not deployed as a pod, specify IPs it can be found on
  ##
  endpoints: []
  # - 10.141.4.22
  # - 10.141.4.23
  # - 10.141.4.24

  service:
    enabled: true
    port: 10249
    targetPort: 10249
    selector:
      k8s-app: kube-proxy

  # spec for VMServiceScrape crd
  # https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec
  spec:
    jobLabel: jobLabel
    endpoints:
      - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
        # bearerTokenSecret:
        #   key: ""
        port: http-metrics
        scheme: http
        tlsConfig:
          caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

## install vm operator crds
crds:
  enabled: true

## install prometheus operator crds
prometheus-operator-crds:
  enabled: false

# -- Add extra objects dynamically to this chart
extraObjects: []

@AndrewChubatiuk
Copy link
Contributor

This is probably related to the current hook policy:

"helm.sh/hook-delete-policy": hook-succeeded

I will update it in the next release.
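A minimal sketch of such a change, assuming the hook template keeps its current annotation style (the resource kind and template helper name below are illustrative, not taken from the chart). Adding `before-hook-creation` tells Helm to delete any leftover hook resource before creating a new one, instead of failing with "already exists":

```yaml
# Hypothetical excerpt from uninstall_hook.yaml; metadata.name helper is illustrative.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "vm-operator.cleanupHookName" . }}
  annotations:
    "helm.sh/hook": pre-delete
    # delete a leftover hook resource before re-creating it,
    # in addition to deleting it once the hook succeeds
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
```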

@AndrewChubatiuk
Copy link
Contributor

Are you able to reproduce this issue on the latest version of the chart?

@Dofamin
Copy link
Author

Dofamin commented Sep 3, 2024

Sorry for the delay.
Tomorrow I can get access to the cluster and try the latest version of the chart.

@Dofamin
Copy link
Author

Dofamin commented Sep 4, 2024

@AndrewChubatiuk
I set up a new cluster with one master node and one worker node.
k8s version: v1.28.9
helm version: version.BuildInfo{Version:"v3.15.4", GitCommit:"fa9efb07d9d8debbb4306d72af76a383895aa8c4", GitTreeState:"clean", GoVersion:"go1.22.6"}
Default values were used.
On install I get this error:
Error: INSTALLATION FAILED: 3 errors occurred:
* vmrules.operator.victoriametrics.com "victoria-metrics-k8s-stack-1725434031-k8s.rules.containermemory" already exists
* vmrules.operator.victoriametrics.com "victoria-metrics-k8s-stack-1725434031-k8s.rules.containermemory" already exists
* vmrules.operator.victoriametrics.com "victoria-metrics-k8s-stack-1725434031-k8s.rules.containermemory" already exists

but the install still happened

I'm also getting the following error; probably one worker node is not sufficient:

failed create or update single: origin_Err=cannot wait for deployment to become ready: context deadline exceeded,podPhase="Pending",conditions=name="PodScheduled",status="False",message="0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.."

The release can't be uninstalled through helm; it gets stuck on uninstalling.

Uninstalling with hooks disabled does the job:
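For reference, the hooks-disabled workaround looks roughly like this (release name and namespace are taken from this thread; the truncated ServiceAccount name is the one from the original error):

```shell
# skip pre-delete/post-delete hooks so the failing cleanup hook never runs
helm uninstall victoria-metrics-k8s-stack-1725434031 --no-hooks --debug

# if a previous failed uninstall left the hook ServiceAccount behind,
# remove it manually before retrying a normal uninstall
kubectl delete serviceaccount \
  victoria-metrics-k8s-stack-victoria-metrics-operator-cleanup-ho -n test
```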

uninstall.go:102: [debug] uninstall: Deleting victoria-metrics-k8s-stack-1725434031
uninstall.go:113: [debug] delete hooks disabled for victoria-metrics-k8s-stack-1725434031
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-scheduler" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-controller-manager" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-etcd" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-state-metrics" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-grafana" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-prometheus-node-exporter" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-victoria-metrics-operator" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-core-dns" Service
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-grafana" Deployment
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-victoria-metrics-operator" Deployment
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-state-metrics" Deployment
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-prometheus-node-exporter" DaemonSet
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-victoria-metrics-operator" RoleBinding
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-grafana" RoleBinding
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-grafana" Role
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-victoria-metrics-operator" Role
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-grafana-clusterrolebinding" ClusterRoleBinding
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-state-metrics" ClusterRoleBinding
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-victoria-metrics-operator" ClusterRoleBinding
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-grafana-clusterrole" ClusterRole
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-victoria-metrics-operator" ClusterRole
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-state-metrics" ClusterRole
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-grafana" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-victoriametrics-operator" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-victoriametrics-vmagent" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kubernetes-system-coredns" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-node-exporter-full" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-victoriametrics-backupman" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-alertmanager-monzo-tpl" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kubernetes-views-namespac" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-alertmanager-overview" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-grafana-config-dashboards" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kubernetes-views-global" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-etcd" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-victoriametrics-vmalert" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kubernetes-views-pods" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-grafana-overview" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-grafana-ds" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-victoriametrics-single-no" ConfigMap
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-grafana" Secret
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-alertmanager" Secret
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-victoria-metrics-operator-validation" Secret
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-prometheus-node-exporter" ServiceAccount
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-state-metrics" ServiceAccount
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031" ServiceAccount
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-grafana" ServiceAccount
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-victoria-metrics-operator" ServiceAccount
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031" VMAgent
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031" VMAlert
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031" VMAlertmanager
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-probes" VMNodeScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-cadvisor" VMNodeScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kubelet" VMNodeScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-alertmanager.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-k8s.rules.containercpuusa" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-k8s.rules.podowner" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-vmsingle" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-apiserver-burnrate.r" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-apiserver-histogram" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-prometheus-node-reco" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-apiserver-availabili" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-scheduler.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-prometheus-general.r" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-etcd" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-k8s.rules.containerresour" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kubernetes-system-schedul" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-k8s.rules.containermemory" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-k8s.rules.containermemory" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-apiserver-slos" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-node-exporter" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-vmagent" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-k8s.rules.containermemory" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kubernetes-resources" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-state-metrics" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kubernetes-system-apiserv" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-vmcluster" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-vm-health" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-k8s.rules.containermemory" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-node.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kubernetes-system-control" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kubernetes-storage" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kubelet.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-general.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-node-network" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-vmoperator" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kubernetes-system-kubelet" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-node-exporter.rules" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kubernetes-system" VMRule
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kubernetes-apps" VMRule
client.go:490: [debug] Ignoring delete failure for "victoria-metrics-k8s-stack-1725434031-k8s.rules.containermemory" operator.victoriametrics.com/v1beta1, Kind=VMRule: vmrules.operator.victoriametrics.com "victoria-metrics-k8s-stack-1725434031-k8s.rules.containermemory" not found
client.go:490: [debug] Ignoring delete failure for "victoria-metrics-k8s-stack-1725434031-k8s.rules.containermemory" operator.victoriametrics.com/v1beta1, Kind=VMRule: vmrules.operator.victoriametrics.com "victoria-metrics-k8s-stack-1725434031-k8s.rules.containermemory" not found
client.go:490: [debug] Ignoring delete failure for "victoria-metrics-k8s-stack-1725434031-k8s.rules.containermemory" operator.victoriametrics.com/v1beta1, Kind=VMRule: vmrules.operator.victoriametrics.com "victoria-metrics-k8s-stack-1725434031-k8s.rules.containermemory" not found
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-prometheus-node-exporter" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-api-server" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-controller-manager" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-scheduler" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-etcd" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-grafana" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-victoria-metrics-operator" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-kube-state-metrics" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-core-dns" VMServiceScrape
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031" VMSingle
client.go:486: [debug] Starting delete for "victoria-metrics-k8s-stack-1725434031-victoria-metrics-operator-admission" ValidatingWebhookConfiguration
uninstall.go:155: [debug] purge requested for victoria-metrics-k8s-stack-1725434031
release "victoria-metrics-k8s-stack-1725434031" uninstalled

But the pods are still present (see attached screenshot).

AndrewChubatiuk (Contributor) commented Sep 4, 2024

You can get rid of the error you're hitting during installation by setting the fullnameOverride variable to a shorter resource prefix than the default victoria-metrics-k8s-stack-1725434031. The default fullname is truncated to avoid exceeding the label length limit, which is what produces truncated names like the one in the error.
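The odd `-cleanup-ho` suffix in the error is a side effect of Kubernetes' 63-character limit on resource names. A minimal sketch of the truncation, assuming the untruncated hook ServiceAccount name ends in `-cleanup-hook` (not confirmed in this thread); the `vm-stack` override value is purely illustrative:

```shell
# Kubernetes resource names are limited to 63 characters, so the hook
# ServiceAccount name gets cut off (hence the "-cleanup-ho" suffix):
name="victoria-metrics-k8s-stack-victoria-metrics-operator-cleanup-hook"
echo "${name:0:63}"
# → victoria-metrics-k8s-stack-victoria-metrics-operator-cleanup-ho

# A shorter prefix avoids the truncation entirely, e.g. (hypothetical
# release and override names):
#   helm upgrade --install vm vm/victoria-metrics-k8s-stack \
#     -n test --set fullnameOverride=vm-stack
```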

As for the other errors you're getting, 1 worker node is probably not sufficient.

How big is your worker node?

> can't be uninstalled through helm, gets stuck on uninstalling

Could you please uninstall with the debug flag and share the log you're getting?

Dofamin (Author) commented Sep 4, 2024

> how big is your worker node?

2 CPUs, 8 GB RAM.

> could you please uninstall with the debug flag and share the log you're getting?

The uninstall log I sent earlier was already produced with the debug option.

AndrewChubatiuk (Contributor) commented Sep 4, 2024

> the uninstall log I sent earlier was already produced with the debug option

I mean with the hook enabled.

Dofamin (Author) commented Sep 4, 2024

uninstall.go:102: [debug] uninstall: Deleting victoria-metrics-k8s-stack-1725434031
client.go:142: [debug] creating 1 resource(s)
Error: warning: Hook pre-delete victoria-metrics-k8s-stack/charts/victoria-metrics-operator/templates/uninstall_hook.yaml failed: 1 error occurred:
        * serviceaccounts "victoria-metrics-k8s-stack-victoria-metrics-operator-cleanup-ho" already exists


helm.go:84: [debug] 1 error occurred:
        * serviceaccounts "victoria-metrics-k8s-stack-victoria-metrics-operator-cleanup-ho" already exists


warning: Hook pre-delete victoria-metrics-k8s-stack/charts/victoria-metrics-operator/templates/uninstall_hook.yaml failed
helm.sh/helm/v3/pkg/action.(*Configuration).execHook
        helm.sh/helm/v3/pkg/action/hooks.go:80
helm.sh/helm/v3/pkg/action.(*Uninstall).Run
        helm.sh/helm/v3/pkg/action/uninstall.go:109
main.newUninstallCmd.func2
        helm.sh/helm/v3/cmd/helm/uninstall.go:60
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/[email protected]/command.go:983
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/[email protected]/command.go:1115
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/[email protected]/command.go:1039
main.main
        helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
        runtime/proc.go:271
runtime.goexit
        runtime/asm_arm64.s:1222
