With custom values it is showing the error below:
File "yaml/_yaml.pyx", line 673, in yaml._yaml.CParser.get_single_node
File "yaml/_yaml.pyx", line 687, in yaml._yaml.CParser._compose_document
File "yaml/_yaml.pyx", line 731, in yaml._yaml.CParser._compose_node
File "yaml/_yaml.pyx", line 845, in yaml._yaml.CParser._compose_mapping_node
File "yaml/_yaml.pyx", line 731, in yaml._yaml.CParser._compose_node
File "yaml/_yaml.pyx", line 847, in yaml._yaml.CParser._compose_mapping_node
File "yaml/_yaml.pyx", line 860, in yaml._yaml.CParser._parse_next_event
yaml.parser.ParserError: while parsing a block mapping
in "", line 3, column 3
did not find expected key
in "", line 33, column 5
fatal: [localhost]: FAILED! => {
"changed": false
Used the below values in the Helm chart deployment:
certManager:
  # DaemonSet or Deployment
  kind: DaemonSet
  replicaCount: 1
  image:
    repository: joeelliott/cert-exporter
    # The default tag is ".Chart.AppVersion", only set "tag" to override that
    tag:
    pullPolicy: IfNotPresent
  command: ["./app"]
  args:
    - --secrets-annotation-selector=cert-manager.io/certificate-name
    - --secrets-include-glob=*.crt
    - --logtostderr
  imagePullSecrets: []
  nameOverride: ""
  fullnameOverride: ""
  # cannot be empty
  additionalLabels:
    prometheus.io/load-rule: "true"
  ## Scrape interval. If not set, the Prometheus default scrape interval is used.
  ##
  interval: 20s
  ## metric relabel configs to apply to samples before ingestion.
  ##
  metricRelabelings: []
    - action: keep
      regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
      sourceLabels: [__name__]
  ## relabel configs to apply to samples before ingestion.
  ##
  relabelings: []
    - sourceLabels: [__meta_kubernetes_pod_node_name]
      separator: ;
      regex: ^(.*)$
      targetLabel: nodename
      replacement: $1
      action: replace
  rbac:
    serviceAccount:
      # Specifies whether a service account should be created
      create: true
      # Annotations to add to the service account
      annotations: {}
      # The name of the service account to use.
      # If not set and create is true, a name is generated using the fullname template
      name: cert-exporter
    clusterRole:
      # New role to grant to the service account
      create: true
      # Annotations to add to the cluster role
      annotations: {}
      # Rules for the role
      rules:
        - apiGroups: [""]
          resources: ["secrets"]
          verbs: ["get", "list"]
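The ParserError above is PyYAML's standard complaint when a key is given an empty flow collection as its value (`metricRelabelings: []` or `podAnnotations: {}`) and block entries are then indented beneath it: the `]` already terminates the value, so the following `- action:` item is a stray token where the parser expects the next mapping key. A minimal reproduction using PyYAML directly (a hypothetical snippet, not taken from the chart or playbook):

```python
import yaml

# Invalid: the value of metricRelabelings is already the empty flow
# sequence [], so the indented "- action" entry cannot be parsed as
# a mapping key and PyYAML raises ParserError.
broken = """\
metricRelabelings: []
  - action: keep
    sourceLabels: [__name__]
"""

# Valid: drop the [] so the block sequence becomes the value.
fixed = """\
metricRelabelings:
  - action: keep
    sourceLabels: [__name__]
"""

try:
    yaml.safe_load(broken)
except yaml.parser.ParserError as err:
    # Message reads "while parsing a block mapping ...
    # did not find expected key", matching the traceback above.
    print(type(err).__name__)

values = yaml.safe_load(fixed)
print(values["metricRelabelings"][0]["action"])
```

The same applies to `relabelings: []` followed by list items: either keep the empty `[]` with nothing under it, or remove the `[]` and keep the block entries, never both.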
The full values file used in the deployment:
certManager:
  # DaemonSet or Deployment
  kind: DaemonSet
  replicaCount: 1
  image:
    repository: joeelliott/cert-exporter
    # The default tag is ".Chart.AppVersion", only set "tag" to override that
    tag:
    pullPolicy: IfNotPresent
  command: ["./app"]
  args:
    - --secrets-annotation-selector=cert-manager.io/certificate-name
    - --secrets-include-glob=*.crt
    - --logtostderr
  imagePullSecrets: []
  nameOverride: ""
  fullnameOverride: ""
  podAnnotations: {}
    environment: prod
    prometheus.io/scrape: true
    prometheus.io/port: 8080
    prometheus.io/path: /metrics
  podSecurityContext: {}
    fsGroup: 2000
  securityContext: {}
    # capabilities:
    #   drop:
    #     - ALL
    readOnlyRootFilesystem: true
    runAsNonRoot: true
    runAsUser: 1000
  resources: {}
    # limits:
    #   cpu: 100m
    #   memory: 128Mi
    # requests:
    #   cpu: 100m
    #   memory: 128Mi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  volumes: []
    - name: kubelet
      hostPath:
        path: /var/lib/kubelet
        type: Directory
  volumeMounts: []
    - mountPath: /var/lib/kubelet/pki
      mountPropagation: HostToContainer
      name: kubelet
      readOnly: true
  service:
    type: ClusterIP
    port: 8081
    portName: http-metrics
    # Annotations to add to the service
    annotations: {}
  # Requires prometheus-operator to be installed
  serviceMonitor:
    create: true
  rbac:
    serviceAccount:
      # Specifies whether a service account should be created
      create: true
      # Annotations to add to the service account
      annotations: {}
      # The name of the service account to use.
      # If not set and create is true, a name is generated using the fullname template
      name: cert-exporter
    clusterRole:
      # New role to grant to the service account
      create: true
      # Annotations to add to the cluster role
      annotations: {}
      # Rules for the role
      rules:
        - apiGroups: [""]
          resources: ["secrets"]
          verbs: ["get", "list"]
  clusterRoleBinding:
    create: true
  dashboards:
    # Labels to add to all dashboard ConfigMaps
    additionalLabels:
      grafana_dashboard: "1"
    certManagerDashboard:
      create: true
      namespace: cert-exporter
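If the indented entries in the values above are meant to take effect, each empty flow collection (`{}` or `[]`) that has block entries beneath it must lose the empty marker, because YAML rejects a value given twice. A corrected sketch of the affected keys follows; whether `metricRelabelings`/`relabelings` belong under `serviceMonitor` depends on the chart's values schema, so that nesting is an assumption here, and the annotation values are quoted since Kubernetes annotations must be strings:

```yaml
certManager:
  podAnnotations:
    environment: prod
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: /metrics
  podSecurityContext:
    fsGroup: 2000
  securityContext:
    readOnlyRootFilesystem: true
    runAsNonRoot: true
    runAsUser: 1000
  volumes:
    - name: kubelet
      hostPath:
        path: /var/lib/kubelet
        type: Directory
  volumeMounts:
    - mountPath: /var/lib/kubelet/pki
      mountPropagation: HostToContainer
      name: kubelet
      readOnly: true
  serviceMonitor:
    create: true
    # Assumed location for the relabeling settings; move them if the
    # chart's values.yaml defines them elsewhere.
    metricRelabelings:
      - action: keep
        regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
        sourceLabels: [__name__]
    relabelings:
      - sourceLabels: [__meta_kubernetes_pod_node_name]
        separator: ;
        regex: ^(.*)$
        targetLabel: nodename
        replacement: $1
        action: replace
```

Running the file through a YAML linter (for example `yamllint`) or `yaml.safe_load` before `helm install` catches this class of error without a full deployment.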