Support for multiple Ingresses #89

Closed
gregjones opened this issue Apr 29, 2020 · 5 comments · Fixed by #103

Comments

@gregjones
Contributor

Short version: We have a growing number of users who would like multiple Ingresses for their fiaas apps, because they want different annotations on them that control visibility/access. Adding an 'annotations' property to the items in the 'ingress' list allows us to support this while keeping things simple and compatible with the current behaviour: the 'annotations' field acts as the distinguisher for when to group hosts/paths into a single Ingress object and when to separate them.

```yaml
ingress:
  - host: myservice-internal.myns-pro.cluster.com
    annotations:
      kubernetes.io/ingress.class: internal
      nginx.ingress.kubernetes.io/whitelist-source-range: "10.x.y.z/32"
  - host: myservice-public.myns-pro.cluster.com
    annotations:
      kubernetes.io/ingress.class: public
      nginx.ingress.kubernetes.io/whitelist-source-range: "215.a.b.c/32"
```
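
To illustrate the intended behaviour: the two hosts above have different annotations, so under the proposed rule they would end up in two separate Ingress objects, roughly like the sketch below (metadata names and backend details are illustrative only, not what deploy-daemon literally emits):

```yaml
# Rough sketch of the two Ingress objects the config above would produce.
# The names and the backend wiring are illustrative assumptions.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myservice-1            # illustrative name
  annotations:
    kubernetes.io/ingress.class: internal
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.x.y.z/32"
spec:
  rules:
    - host: myservice-internal.myns-pro.cluster.com
      http:
        paths:
          - backend:
              serviceName: myservice
              servicePort: http
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myservice-2            # illustrative name
  annotations:
    kubernetes.io/ingress.class: public
    nginx.ingress.kubernetes.io/whitelist-source-range: "215.a.b.c/32"
spec:
  rules:
    - host: myservice-public.myns-pro.cluster.com
      http:
        paths:
          - backend:
              serviceName: myservice
              servicePort: http
```

Hosts that share an identical annotations mapping would keep today's behaviour and be grouped into a single Ingress.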

I've written up a longer version that explains the behaviour in a little more detail, with examples showing the impact (or lack of impact, in the backwards-compatibility cases): https://docs.google.com/document/d/12-hrIgxMEYJDw8qMeUzDar8xa_iV4Tc2CqfpVRo9WoE/edit?usp=sharing
(It's public for commenting/suggesting; let me know what emails to add if you'd like edit access.)

@mortenlj
Member

Would the suggested design in #8 solve the use cases you have here?

While the design in #8 seems to be a bigger change, I think I would prefer it as it limits the amount of configuration needed in each application config. How often do you think the whitelist ranges will be different per application? Would it be a better solution to configure those at the namespace level, removing that consideration from the applications?
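
For the sake of discussion, a namespace-level mapping might look something like this sketch (purely illustrative; this is not an existing FIAAS or #8 schema), so that each application only has to reference a zone name:

```yaml
# Purely illustrative namespace-level zone configuration; not an actual
# FIAAS schema. Applications would reference a zone by name instead of
# repeating annotations and whitelist ranges in every application config.
zones:
  internal:
    ingress-class: internal
    whitelist-source-range: "10.x.y.z/32"
  public:
    ingress-class: public
    whitelist-source-range: "215.a.b.c/32"
```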

Implementation-wise, I have a feeling it would be better, from a code-maintainability standpoint, to split ingress creation out of deploy-daemon and put it in a separate operator that can be developed, tested and deployed independently. That would possibly also require extending skipper to deploy more components (which in my head was always the plan, but is not currently implemented).

@gregjones
Contributor Author

It isn't explicitly stated there that multiple 'zones' could be specified for an app, by host; assuming they can be, and that the 'rules' for applying them match what we've proposed (resulting in multiple Ingresses), then I think it could be set up to do what our users want.

The list of VPN/Office IPs will be quite common, but then normally each service has some extras they add on top for the external or legacy services that need to connect. So I expect in quite a few cases we will end up with 1 zone per application. Some kind of 'inheritance' could make this easier for users in some cases, but that seems like it could get complicated very easily.

Introducing a new operator seems a little daunting. I think having it handled by skipper would be important, but managing the upgrade and rollout of this while maintaining compatibility still seems like a challenge.

Did you think about how this would work with the ApplicationStatus, and what it would mean for the Application to become 'ready' when it's deployed?

@mortenlj
Member

> The list of VPN/Office IPs will be quite common, but then normally each service has some extras they add on top for the external or legacy services that need to connect. So I expect in quite a few cases we will end up with 1 zone per application. Some kind of 'inheritance' could make this easier for users in some cases, but that seems like it could get complicated very easily.

This sounds more like the need is a service mesh, not just whitelisting of IPs on the ingress. I agree that making a zone per application isn't a solution, so if that truly is a need, then the zone model probably won't be enough.

My worry is that by putting too much flexibility into the application config, we are slowly moving towards a system that is just a different way of writing all the YAML supported by Kubernetes, at which point it would be better to just ditch FIAAS entirely and just use plain Kubernetes.

> Introducing a new operator seems a little daunting. I think having it handled by skipper would be important, but managing the upgrade and rollout of this while maintaining compatibility still seems like a challenge.

I think in most cases, ingresses can be handled independently of deployments and services. The only exception is if the developer is changing the service port, but that would actually be a problem already. With that in mind, I believe handling ingresses in a specialised operator wouldn't be much harder than it is today. Of course, versioning and releases of the operators need to be independent, so you can update the version of deploy-daemon independently of the version of the ingress operator; which probably means the UX around promoting versions needs to be improved.

> Did you think about how this would work with the ApplicationStatus, and what it would mean for the Application to become 'ready' when it's deployed?

Good point, I don't think we thought much about that. I think perhaps the status object wasn't as developed as it is now when we worked on the ideas in #8. It would certainly need to be explored in detail before settling on a solution.

@gregjones
Contributor Author

> This sounds more like the need is a service mesh, not just whitelisting of IPs on the ingress. I agree that making a zone per application isn't a solution, so if that truly is a need, then the zone model probably won't be enough.

I think a service-mesh is a lot more complicated than the reality of the problem we're looking at.

> My worry is that by putting too much flexibility into the application config, we are slowly moving towards a system that is just a different way of writing all the YAML supported by Kubernetes, at which point it would be better to just ditch FIAAS entirely and just use plain Kubernetes.

Definitely true, but I'm not sure this is that point. The risk of not being flexible enough is that we won't have people using fiaas, which I think would be a shame. The change to support this in the deploy-daemon is quite small, I'll look at opening a PR so that it is clearer what the implications actually are.

@gregjones
Contributor Author

I opened the PR for this: #98
