Argo CD is unable to connect with Private GKE #6553
Replies: 12 comments 26 replies
-
You can actually add a cluster without using the CLI. A cluster is configured via JSON in a Secret with some labels that tell Argo CD this is the config of your new cluster. The JSON contains the token of a service account from the target cluster; based on the logs you posted, you should already have that. You also need the URI of the API server and the caBundle, and I think that's it. I'm currently on my phone and not 100% sure about all the details, but we are doing more or less the same thing as you and we don't use the CLI at all. This part is not so well documented since most people are expected to use the CLI, but it's definitely possible.

Also, I believe there are now some NetworkPolicies by default. The trap with NPs is that they work like a whitelist, BUT if none are defined the default is to allow all traffic, which is counter-intuitive. Since there are now NPs, they drop any traffic that is not explicitly allowed. You should try to add an NP that allows Argo CD to egress to your other clusters, then exec into the Argo CD container and test whether you can reach the other clusters.
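For reference, the declarative cluster registration described above looks roughly like the Secret below. This is a minimal sketch; the cluster name, endpoint, token, and CA bundle are placeholders you would take from your own target cluster.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: target-cluster                          # placeholder name
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster     # tells Argo CD this Secret describes a cluster
type: Opaque
stringData:
  name: target-cluster
  server: https://<private-endpoint-ip>         # API server URI of the target cluster
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": {
        "caData": "<base64-encoded-ca-bundle>"
      }
    }
```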
-
Try something like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: argocd-to-external-cluster
  namespace: argocd
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 34.xxx.xxx.xxx/32
      ports:
        - port: 443
```

This allows all pods in the argocd namespace to connect to your IP on port 443. For production you should change the podSelector to select only the actual pod, but for testing this should be OK. There is a great tool to create NPs if you are not familiar with the syntax.

Next, you should just exec into the pod and try to reach the target server on port 443. NPs are just firewall rules managed by k8s; treat this like any other networking issue. It's also possible that you need to add some networking rules in GCP to make it work. If I remember correctly there is kubectl in the argocd docker image, so if you exec into the container you can manually create the ~/.kube/config file for your target cluster and see if it works.
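To verify the policy, one quick check (my own suggestion, not from the original comment; the image and IP are placeholders) is to start a throwaway pod in the argocd namespace and probe the target API server on 443. The pod is subject to the same NetworkPolicies as the Argo CD pods, so it tests the egress rule directly:

```bash
# Run a temporary pod in the argocd namespace and probe the target API server on 443.
kubectl -n argocd run np-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -k -m 5 https://34.xxx.xxx.xxx/version
```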
-
Hey @meier-christoph, thanks for your suggestion. We have created the same NP in the Kubernetes cluster as you mentioned, but it didn't work. Even after adding the firewall rules below, it still didn't work.

Argo CD cluster project firewall rule:

Target private GKE cluster project firewall rule:

For verification:
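For anyone reproducing this, a GCP firewall rule for this purpose generally looks like the sketch below. This is not the poster's actual rule; the rule name, network, and source range are placeholders for the Argo CD cluster's node or NAT range.

```bash
# Allow traffic from the Argo CD cluster's nodes/NAT range into the target VPC on 443.
gcloud compute firewall-rules create allow-argocd-to-gke-master \
  --network=target-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:443 \
  --source-ranges=10.0.0.0/14
```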
-
I'm struggling with this as well. I've downgraded to Argo CD 2.0.0 thinking it would be something NP related, but it actually isn't. Based on the official Google docs, when creating the Argo CD cluster Secret manually, the server field must be the private endpoint IP (which can be obtained with a gcloud command). Still, the status of the cluster in the Argo CD UI is "Unknown", which is not what we expect. Firewall rules on GCP allow TCP 443 communication between the nodes and the control plane. Workloads are deployed correctly, so I'd exclude a firewall issue on the GCP side. Could we have more support on this issue?

Edit: adding a firewall rule in GCP that allows all protocols for all machines in the network coming from 0.0.0.0/0 still doesn't solve the issue.
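One way to read the private endpoint IP mentioned above (my assumption about the exact command; the cluster name and region are placeholders):

```bash
# Print the private control-plane endpoint of a private GKE cluster.
gcloud container clusters describe target-cluster \
  --region us-central1 \
  --format="value(privateClusterConfig.privateEndpoint)"
```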
-
I found the same issue on on-prem Kubernetes 1.19 when connecting within the same cluster from slave1 to the master. Applying the NP rule above does not resolve the issue. Does it mean that, when in the same cluster, the sa.yaml above will do the job of "argocd cluster add kubernetes-admin@kubernetes"?

INFO[0000] ServiceAccount "argocd-manager" already exists in namespace "kube-system"
-
Having the same problem on GKE. We have already created an authorized network for the clusters; all the node IPs and API server IPs are in that range.
-
Well, I started looking into this after having used Argo CD in multiple different companies. It looks like GKE private clusters do not allow network peering to reach the control plane. See https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept#the_control_plane_in_private_clusters
-
Having the same problem, and I figured out how to set it up. Here is my scenario: go to VPC network -> External IP addresses and find your GKE-A IP.
-
#9496 [manage clusters via proxy] may be a solution for private clusters.
-
When creating the private GKE cluster, did you happen to specify master authorized networks? Make sure the Argo CD node / Cloud NAT IP is allowed to reach the newly created target GKE cluster.
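As a sketch of what that looks like (my assumption; the cluster name, region, and NAT IP are placeholders):

```bash
# Add the Argo CD cluster's egress/NAT IP to the target cluster's master authorized networks.
# Note: this flag sets the whole list, so include any CIDRs you want to keep.
gcloud container clusters update target-cluster \
  --region us-central1 \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.10/32
```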
-
If anyone is still looking for a solution, https://github.com/zhang-xuebin/argo-helm/tree/host-on-gke is an example. You can use Argo CD to manage private GKE clusters; VPC peering and master authorized networks are NOT needed. See also: https://cloud.google.com/blog/products/containers-kubernetes/connect-gateway-with-argocd
-
Hi, I still have the problem. I'm using Argo 2.10; maybe there is something I don't understand. Any help would be appreciated.
-
PROBLEM STATEMENT:
SUMMARY:
STEPS FOLLOWED:
ERROR:
We are getting the below error while adding the private GKE cluster via the argocd CLI:
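For context, the cluster was added with the standard CLI command; the sketch below uses a placeholder kubectl context name, not the one from this environment.

```bash
# Register the target cluster with Argo CD using its kubectl context (placeholder name).
argocd cluster add gke_my-project_us-central1_target-cluster
```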
DEBUG INFORMATION:
Cloud: GCP
Argo CD version: v2.0.2
GKE version: 1.19.10-gke.1600
Private GKE cluster features enabled:
P.S.: This is an urgent issue. Please help.