
Unable to properly route when using K8S #154

Open
gogo199432 opened this issue Mar 21, 2023 · 4 comments
gogo199432 commented Mar 21, 2023

I'm having a weird problem when trying to use this in my K8s cluster. The frontend can never communicate with the backend when using the service name, or even the cluster IP, of the backend service; I get either NS_ERROR_UNKNOWN_HOST or NS_ERROR_BAD_URI errors. However, if I switch my backend service from ClusterIP to LoadBalancer and use that IP, it works fine.
The issue is that I cannot hard-code the LoadBalancer IP into my config, since it could change depending on the whims of MetalLB. So I need the frontend to be able to resolve the DNS name instead of relying on an IP.
I don't know if this is an issue with the implementation or not, but I'm all out of ideas; I've spent the last day or two trying to make it work. I'll attach the K8s files I use for deployment below. (No reverse proxy, local only.)
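As an aside: if falling back to a LoadBalancer Service, MetalLB can be asked for a fixed address so the IP no longer changes between reassignments. A minimal sketch (the address is a placeholder; the annotation form is for MetalLB 0.13+, older releases use the deprecated spec.loadBalancerIP field):

apiVersion: v1
kind: Service
metadata:
  name: ganymede-service
  annotations:
    # Placeholder address from the MetalLB pool; pins the assigned IP.
    metallb.universe.tf/loadBalancerIPs: 192.168.1.60
spec:
  selector:
    app: ganymede
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 4000

This works around the changing-IP problem but does not address the underlying DNS-resolution issue.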

Main deployment file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ganymede
spec:
  selector:
    matchLabels:
      app: ganymede
  template:
    metadata:
      labels:
        app: ganymede
    spec:
      containers:
      - name: ganymede
        image: ghcr.io/zibbp/ganymede:latest
        imagePullPolicy: Always
        envFrom:
        - secretRef:
            name: ganymedesecrets
            optional: false
        ports:
        - containerPort: 4000
        volumeMounts:
        - name: data
          mountPath: "/data"
          subPath: "data"
        - name: data
          mountPath: "/logs"
          subPath: "logs"
        - mountPath: "/vods"
          name: media
          subPath: "vods"
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: ganymede-pvc
        - name: media
          persistentVolumeClaim:  
            claimName: media-nfs-claim
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ganymede-frontend
spec:
  selector:
    matchLabels:
      app: ganymede-frontend
  template:
    metadata:
      labels:
        app: ganymede-frontend
    spec:
      containers:
      - name: ganymede-frontend
        image: ghcr.io/zibbp/ganymede-frontend:latest
        envFrom:
        - configMapRef:
            name: ganymedefrontendconfig
            optional: false
        ports:
        - containerPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ganymede-nginx
spec:
  selector:
    matchLabels:
      app: ganymede-nginx
  template:
    metadata:
      labels:
        app: ganymede-nginx
    spec:
      containers:
      - name: ganymede-nginx
        image: nginx
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: "/vods"
          name: media
          subPath: "vods"
        - mountPath: /etc/nginx
          readOnly: true
          name: config
      volumes:
        - name: config
          configMap:
            name: ganymedenginxconfig
            items:
              - key: nginx.conf
                path: nginx.conf
        - name: media
          persistentVolumeClaim:  
            claimName: media-nfs-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ganymede-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ganymede-service
spec:
  selector:
    app: ganymede
  ports:
  - port: 80
    targetPort: 4000
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: ganymede-frontend-service
spec:
  selector:
    app: ganymede-frontend
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: ganymede-nginx-service
spec:
  selector:
    app: ganymede-nginx
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP

Backend config

apiVersion: v1
kind: Secret
metadata:
  name: ganymedesecrets
  namespace: default
type: Opaque
stringData: 
  TZ: "Europe/Vienna"
  DB_HOST: "192.168.1.6"
  DB_PORT: "5432"
  DB_USER: "ganymede"
  DB_PASS: "SOMESECRET"
  DB_NAME: "ganymede"
  DB_SSL: "disable"
  JWT_SECRET: "STUFF"
  JWT_REFRESH_SECRET: "MORESTUFF"
  TWITCH_CLIENT_ID: "NOTTELLINGYOU"
  TWITCH_CLIENT_SECRET: "SECRET"
  FRONTEND_HOST: "http://ganymede-frontend-service"

Frontend config

apiVersion: v1
kind: ConfigMap
metadata:
  name: ganymedefrontendconfig
data:
  API_URL: "http://ganymede-service" # Points to the API service
  CDN_URL: "http://ganymede-nginx-service" # Points to the CDN service
  SHOW_SSO_LOGIN_BUTTON: "false" # show/hide SSO login button on login page
  FORCE_SSO_AUTH: "false" # force SSO auth for all users (bypasses login page and redirects to SSO)
  REQUIRE_LOGIN: "false" # require login to view videos

Nginx config

apiVersion: v1
kind: ConfigMap
metadata:
  name: ganymedenginxconfig
data:
  nginx.conf: |
    worker_processes auto;
    worker_rlimit_nofile 65535;
    error_log  /var/log/nginx/error.log info;
    pid        /var/run/nginx.pid;

    events {
      multi_accept       on;
      worker_connections 65535;
    }

    http {

      sendfile on;
      sendfile_max_chunk 1m;
      tcp_nopush on;
      tcp_nodelay on;

      keepalive_timeout 65;
      gzip on;

      server {
        listen 8080;
        root /mnt/vods;

        add_header 'Access-Control-Allow-Origin' '*' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range' always;
        add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;

        location ^~ /vods {
          autoindex on;
          alias /mnt/vods;

          location ~* \.(ico|css|js|gif|jpeg|jpg|png|svg|webp)$ {
              expires 30d;
              add_header Pragma "public";
              add_header Cache-Control "public";
          }
          location ~* \.(mp4)$ {
              add_header Content-Type "video/mp4";
              add_header 'Access-Control-Allow-Origin' '*' always;
              add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
              add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range' always;
              add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
          }
        }
      }
    }
gogo199432 (Author)

Interestingly, if I run a temporary Alpine container and curl the backend service, it works perfectly fine:
kubectl run my-shell --rm -i --tty --image alpine
apk --update add curl
curl http://ganymede-service:80

Zibbp (Owner) commented Mar 21, 2023

Hi,
The frontend container makes use of server-side and client-side rendering to make it more performant. This means both the container and the client's device need to be able to access the API through the same host defined in API_URL.
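Without a reverse proxy, one way to give the container and the browser the same host is to expose the API on an address both can reach, e.g. a NodePort. A rough sketch based on the manifests above (the nodePort value is a placeholder):

apiVersion: v1
kind: Service
metadata:
  name: ganymede-service
spec:
  selector:
    app: ganymede
  type: NodePort
  ports:
  - port: 80
    targetPort: 4000
    # Placeholder port; both pods and browsers can then reach
    # the API at http://<node-ip>:30400
    nodePort: 30400

API_URL would then point at http://<node-ip>:30400, which is resolvable from inside and outside the cluster alike.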

In my Kubernetes cluster, all containers are deployed with a ClusterIP type. I also use a reverse proxy though, which may be why mine works?

Here's my frontend service

apiVersion: v1
kind: Service
metadata:
  name: ganymede-frontend-service
spec:
  selector:
    app: ganymede
    deployment: ganymede-frontend
  ports:
    - name: ganymede-frontend-http
      port: 3000
      targetPort: 3000
      protocol: TCP
  type: ClusterIP

Then my Ingress, which handles reverse proxying using ingress-nginx:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ganymede-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: ganymede.lab.domain.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ganymede-frontend-service
                port:
                  number: 3000
    - host: api.ganymede.lab.domain.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ganymede-api-service
                port:
                  number: 4000
    - host: cdn.ganymede.lab.domain.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ganymede-nginx-service
                port:
                  number: 8080
  tls:
    - hosts:
        - ganymede.lab.domain.net
        - api.ganymede.lab.domain.net
        - cdn.ganymede.lab.domain.net
      secretName: lab-domain-net-tls

Then, for the config map, API_URL points to the reverse proxy address of the API service:

configMapGenerator:
  - name: ganymede-frontend-config
    literals:
      - API_URL=https://api.ganymede.lab.zibbp.net
      - CDN_URL=https://cdn.ganymede.lab.zibbp.net
      - SHOW_SSO_LOGIN_BUTTON=true
      - FORCE_SSO_AUTH=false
      - REQUIRE_LOGIN=false

I'm not the most knowledgeable with Kubernetes, so I may be missing your point and issue. Let me know if this helps.

gogo199432 (Author)

Yes, this would probably work, since you are basically exposing your service to the net (kind of) and routing traffic through your DNS resolver, so it doesn't actually use the local K8s service connection. Kind of like, instead of going to the neighbour to deliver a letter, you send it through the post: it works, but it's more work. This would probably also work for me, but I would prefer to keep it local only. I don't know React well enough to know why it behaves like this, though.

(Take everything I said with a grain of salt, I'm working on intuition here :D )

Zibbp (Owner) commented Mar 21, 2023

The reverse proxy is available only locally. I have a DNS record for *.local.domain.net which points to the Kubernetes reverse proxy. It's much nicer to visit ganymede.local.domain.net than to remember an IP and port.
It would be nicer to use the service alone, but the way the frontend container works won't allow that.
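For reference, a wildcard record like *.local.domain.net can be served by a local resolver such as dnsmasq. A one-line sketch (the domain and ingress IP are placeholders):

# dnsmasq.conf: answer queries for local.domain.net and all of its
# subdomains with the ingress controller's address
address=/local.domain.net/192.168.1.50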
