
ANP: Add Support for Networks as Egress Peers #4235

Merged: 6 commits merged into ovn-org:master from tssurya:anp-cidr-selectors on Apr 22, 2024

Conversation

tssurya
Member

@tssurya tssurya commented Mar 15, 2024

Depends on #4164, so ignore the first 5 commits here; they are part of #4164. Start from commit 6 through 10.

- What this PR does and why is it needed
This PR adds support for ANP networks peers. A networks peer is an array of network CIDRs, and it is supported only as an egress peer (northbound, not southbound/ingress). The core of this PR is the last three commits; the first three commits do all the necessary plumbing and refactoring to accommodate it.
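For reference, an egress rule using a networks peer looks roughly like this (a hedged sketch; the field layout follows the upstream sigs.k8s.io/network-policy-api types, and the policy name and CIDR values are illustrative):

```yaml
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: egress-cidr-example      # illustrative name
spec:
  priority: 10
  subject:
    namespaces: {}               # empty selector: all namespaces are the subject
  egress:
  - name: allow-to-networks
    action: Allow
    to:
    - networks:                  # the new egress-only peer type: a list of CIDRs
      - 10.244.3.0/24
      - 172.19.0.0/16
```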

- Special notes for reviewers
OVN supports including CIDR blocks in address sets, but OVN-Kubernetes has always treated address sets as only a set of IPs. Since ANP now has pods, namespaces, nodes, and networks peers, and each address set can hold a combination of IPs and CIDRs, it is time to change the plumbing around the address-set machinery to ensure we are able to pass CIDRs as well. During review, please be careful to check that I still perform the ParseIP and ParseCIDR validations everywhere before passing addresses as strings into the AddressSet machinery.

    _uuid               : 642413a7-21c6-42a5-a817-db9d31455716
    addresses           : ["10.244.0.3", "10.244.0.4", "10.244.0.5", "10.244.1.3", "10.244.2.3", "10.244.3.0/24", "172.19.0.3", "172.19.0.4", "172.19.0.5"]
    external_ids        : {direction=Egress, gress-index="1", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:node-as-egress-peers:Egress:1:v4", "k8s.ovn.org/name"=node-as-egress-peers, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy}
    name                : a2074424610294966262

Pay attention to commit 3, where I change net.IP to string everywhere, and make sure I have not thrown some required validation out the window.
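Since addresses now travel as plain strings, the validation pattern the review should look for can be sketched like this (a minimal sketch, not the PR's actual code; validateAddress is a hypothetical helper):

```go
package main

import (
	"fmt"
	"net"
)

// validateAddress accepts a string only if it parses as a plain IP or as a
// CIDR, mirroring the ParseIP/ParseCIDR checks that should guard every
// string handed to the address-set machinery.
func validateAddress(addr string) error {
	if net.ParseIP(addr) != nil {
		return nil
	}
	if _, _, err := net.ParseCIDR(addr); err == nil {
		return nil
	}
	return fmt.Errorf("%q is neither a valid IP nor a valid CIDR", addr)
}

func main() {
	for _, a := range []string{"10.244.3.0/24", "172.19.0.3", "not-an-ip"} {
		if err := validateAddress(a); err != nil {
			fmt.Println("rejected:", err)
		} else {
			fmt.Println("accepted:", a)
		}
	}
}
```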
- How to verify it
Unit tests and e2e tests have been added.

- Description for the changelog
Add support for networks peer


netlify bot commented Mar 15, 2024

Deploy Preview for subtle-torrone-bb0c84 ready!

Latest commit: e43ae38
Latest deploy log: https://app.netlify.com/sites/subtle-torrone-bb0c84/deploys/662625e5ea5dcc00085895bd
Deploy Preview: https://deploy-preview-4235--subtle-torrone-bb0c84.netlify.app

@tssurya tssurya force-pushed the anp-cidr-selectors branch 2 times, most recently from c514c0c to bc17ec0 on March 16, 2024 at 11:28
@tssurya
Member Author

tssurya commented Mar 25, 2024

@coveralls

coveralls commented Mar 27, 2024

Coverage Status

coverage: 52.332% (-0.04%) from 52.369%
when pulling fce732c on tssurya:anp-cidr-selectors
into 2e46929 on ovn-org:master.

@tssurya tssurya force-pushed the anp-cidr-selectors branch 2 times, most recently from 31fad6c to 6d1bf84 on March 27, 2024 at 22:24
@tssurya
Member Author

tssurya commented Mar 28, 2024

@jcaamano once we merge #4164 I will rebase on top, this is the next PR ready for your reviews.

@tssurya
Member Author

tssurya commented Apr 2, 2024

There is one flake in one lane: https://github.com/ovn-org/ovn-kubernetes/actions/runs/8521709373/job/23341371557?pr=4235; saving the link for investigation.

2024-04-02T12:39:10.9405652Z     apply.go:124: Creating node-and-cidr-as-peers-example AdminNetworkPolicy
2024-04-02T12:39:10.9510224Z === RUN   TestNetworkPolicyV2Conformance/AdminNetworkPolicyEgressNodePeers/Should_support_an_'allow-egress'_rule_policy_for_egress-node-peer
2024-04-02T12:39:11.0389444Z === RUN   TestNetworkPolicyV2Conformance/AdminNetworkPolicyEgressNodePeers/Should_support_a_'pass-egress'_rule_policy_for_egress-node-peer
2024-04-02T12:39:14.1111166Z     admin-network-policy-extended-egress-rules.go:126: FAILED Command was [/agnhost connect --timeout=3s --protocol=udp 172.18.0.4:5353]
2024-04-02T12:39:14.1114788Z     admin-network-policy-extended-egress-rules.go:126: Expected connection to succeed from network-policy-conformance-gryffindor/harry-potter-1 to 172.18.0.4, but instead it miserably failed. stderr: TIMEOUT: read udp 10.244.1.56:41292->172.18.0.4:5353: i/o timeout
2024-04-02T12:39:14.1118028Z     admin-network-policy-extended-egress-rules.go:128: 
2024-04-02T12:39:14.1120765Z         	Error Trace:	/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/conformance/tests/admin-network-policy-extended-egress-rules.go:128
2024-04-02T12:39:14.1122528Z         	Error:      	Should be true
2024-04-02T12:39:14.1124552Z         	Test:       	TestNetworkPolicyV2Conformance/AdminNetworkPolicyEgressNodePeers/Should_support_a_'pass-egress'_rule_policy_for_egress-node-peer
2024-04-02T12:39:14.1127019Z === RUN   TestNetworkPolicyV2Conformance/AdminNetworkPolicyEgressNodePeers/Should_support_a_'deny-egress'_rule_policy_for_egress-node-peer
2024-04-02T12:39:23.3258621Z === NAME  TestNetworkPolicyV2Conformance/AdminNetworkPolicyEgressNodePeers
2024-04-02T12:39:23.3260465Z     apply.go:132: Deleting node-and-cidr-as-peers-example AdminNetworkPolicy

go-controller/go.mod Outdated Show resolved Hide resolved
go-controller/pkg/ovn/address_set/address_set.go Outdated Show resolved Hide resolved
go-controller/pkg/ovn/namespace.go Outdated Show resolved Hide resolved
go-controller/pkg/util/net.go Outdated Show resolved Hide resolved
@tssurya
Member Author

tssurya commented Apr 9, 2024

Thanks Jaime for the first pass! I am working through the comments and will push tomorrow; I want to finish up Nadia's comments on #4245 first and get that in, so that I can come back to this.

@tssurya
Member Author

tssurya commented Apr 15, 2024

@jcaamano : PTAL

Contributor

@jcaamano jcaamano left a comment


Hmmm another consideration, do we expect any performance impact in the OVN side of things due to having CIDRs in address sets? I know optimizations happened in relation to CIDRs, but not sure if address sets are covered there.

@tssurya
Member Author

tssurya commented Apr 18, 2024

Hmmm another consideration, do we expect any performance impact in the OVN side of things due to having CIDRs in address sets? I know optimizations happened in relation to CIDRs, but not sure if address sets are covered there.

This is a great question and something I also want to know (it is already in my upcoming scale-test plans). I have been waiting for some scale-run results to see how we perform here, cc @rpattath and @lenahorsley. The plan is to modify things after that; from a scale perspective I need a rough idea of the following items: 1. perf of the central-controller-level locking system, and 2. current ACL and address-set perf. Those are the two big things on my mind to try out.

Meanwhile @dceara / @igsilya / @almusil: are any of you aware of this ^, or have any of you run ovn-heater tests for address sets with CIDR blocks in them? Is there any perf impact from putting a CIDR range into an address set?

so something like:

  1. ip4.dst == $address-set-ips+cidr-blocks versus
  2. ip4.dst == $address-set-ips || ip4.dst == CIDRBlock1 || ip4.dst == CIDRBlock2

are the two options we have in ovnk today for doing ACL matches. Is one approach better than the other ^ that we need to consider?

@tssurya
Member Author

tssurya commented Apr 18, 2024

Jeez, the re-tagging is not working and it's not picking up my upstream fixes (sigh!), which is why the e2es are failing. I am investigating this.

@dceara
Contributor

dceara commented Apr 19, 2024

Hmmm another consideration, do we expect any performance impact in the OVN side of things due to having CIDRs in address sets? I know optimizations happened in relation to CIDRs, but not sure if address sets are covered there.

This is a great question and something I also want to know (it is already in my upcoming scale-test plans). I have been waiting for some scale-run results to see how we perform here, cc @rpattath and @lenahorsley. The plan is to modify things after that; from a scale perspective I need a rough idea of the following items: 1. perf of the central-controller-level locking system, and 2. current ACL and address-set perf. Those are the two big things on my mind to try out.

Meanwhile @dceara / @igsilya / @almusil: are any of you aware of this ^, or have any of you run ovn-heater tests for address sets with CIDR blocks in them? Is there any perf impact from putting a CIDR range into an address set?

We don't have relevant ovn-heater tests yet, unfortunately.

so something like:

1. ip4.dst == $address-set-ips+cidr-blocks versus

2. ip4.dst == $address-set-ips || ip4.dst == CIDRBlock1 || ip4.dst == CIDRBlock2

are the two options we have in ovnk today for doing ACL matches. Is one approach better than the other ^ that we need to consider?

I think option 1, one large address set, will work better. We have this bug:

https://issues.redhat.com/browse/FDP-509

and that's partially addressed by @almusil in https://patchwork.ozlabs.org/project/ovn/patch/[email protected]/ with the limitation that:

This unfortunately doesn't help with the following flows:
"ip4.src == $as1 && ip4.dst == $as2"
"ip4.src == $as1 || ip4.dst == $as2"

@tssurya
Member Author

tssurya commented Apr 19, 2024

the tests are not getting updated and I don't really understand why...

2024-04-18T18:48:02.8862865Z     admin-network-policy-extended-egress-rules.go:126: Expected connection to succeed from network-policy-conformance-gryffindor/harry-potter-1 to 172.18.0.3, but instead it miserably failed. stderr: REFUSED: read udp 10.244.2.215:48223->172.18.0.3:5353: read: connection refused
2024-04-18T18:48:02.8865685Z     admin-network-policy-extended-egress-rules.go:128: 
2024-04-18T18:48:02.8867347Z         	Error Trace:	/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/conformance/tests/admin-network-policy-extended-egress-rules.go:128
2024-04-18T18:48:02.8868283Z         	Error:      	Should be true

it is still using the old 5353 port, and my PR here: kubernetes-sigs/network-policy-api#217 already changed all this. https://github.com/kubernetes-sigs/network-policy-api/releases/tag/v0.1.3 we even re-tagged this. I don't know why our GitHub runners are not picking it up correctly. Note that it's a connection refused, so the daemon-set install part of the manifest looks good; it's the test changes that are not getting picked up.

@tssurya
Member Author

tssurya commented Apr 19, 2024

Maybe it's golang/go#42312, not sure, but clearly the test changes from the last 2 commits are not getting picked up. cc @astoycos, maybe you have seen this before?

@tssurya tssurya force-pushed the anp-cidr-selectors branch 2 times, most recently from e1b717c to fce732c on April 19, 2024 at 21:13
@tssurya
Member Author

tssurya commented Apr 19, 2024

All right, we had to bump the version to get the test changes since re-tagging doesn't help with go modules. So that's the last new commit coming in.

Signed-off-by: Surya Seetharaman <[email protected]>
OVN address sets can contain a combination of IPs, CIDRs, Ethernet
addresses, etc., as specified in their documentation:
" An address set may contain Ethernet, IPv4, or IPv6 addresses with
 optional bitwise or CIDR masks."

_uuid               : 642413a7-21c6-42a5-a817-db9d31455716
addresses           : ["10.244.0.3", "10.244.0.4", "10.244.0.5", "10.244.1.3", "10.244.2.3", "10.244.3.0/24", "172.19.0.3", "172.19.0.4", "172.19.0.5"]
external_ids        : {direction=Egress, gress-index="1", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:node-as-egress-peers:Egress:1:v4", "k8s.ovn.org/name"=node-as-egress-peers, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy}
name                : a2074424610294966262

However, our ovn package address sets were all IP-centric, using
net.IP everywhere. This refactor will help in future commits
where we plan to add CIDR peers as well.

NOTE: Reviewers, please take care to ensure I haven't removed
some ParseCIDR/ParseIP validations, which could cause
regressions or issues.

Updated mockery to version 2.40 and ran mockery --all in
the address-set package.

Signed-off-by: Surya Seetharaman <[email protected]>
Signed-off-by: Surya Seetharaman <[email protected]>
Signed-off-by: Surya Seetharaman <[email protected]>
Signed-off-by: Surya Seetharaman <[email protected]>
Signed-off-by: Surya Seetharaman <[email protected]>
@tssurya
Member Author

tssurya commented Apr 22, 2024

@jcaamano: PTAL. I have rebased on master, fixed the e2es, and I think I addressed your latest comments.

@igsilya
Contributor

igsilya commented Apr 22, 2024

meanwhile @dceara / @igsilya / @almusil are any of you aware of this ^ or has performed ovn heater tests for address-sets with CIDR blocks in it? Does it have any perf impact to put a CIDR range into an addressset?

We don't have relevant ovn-heater tests yet, unfortunately.

As a general rule, if the address set has to be modified during the expression processing, that will make incremental processing impossible on the ovn-controller side.

In simple rules that only contain an address set match on a single field the address set is pretty much taken verbatim, unless it contains exact duplicates.

For example, if we have an address set set1 that contains 10.0.0.1, 10.0.0.2, 10.0.0.3 and 10.0.0.0/24, and we have an ACL ip4 && ip4.dst == $set1, we'll get 4 OpenFlow rules from it, despite the fact that the set contains overlapping CIDRs.
In this case ovn-controller sacrifices the ability to optimize the set in favor of being able to process address set changes incrementally.
The result will be 4 rules:

ip,nw_dst=10.0.0.0/24
ip,nw_dst=10.0.0.1
ip,nw_dst=10.0.0.2
ip,nw_dst=10.0.0.3

In this case there is no difference for ovn-controller if the address set contains simple IPs or CIDRs.

so something like:

1. ip4.dst == $address-set-ips+cidr-blocks versus

2. ip4.dst == $address-set-ips || ip4.dst == CIDRBlock1 || ip4.dst == CIDRBlock2

are the two options we have in ovnk today for doing ACL matches. Is one approach better than the other ^ that we need to consider?

I think option 1, one large address set, will work better. We have this bug:

https://issues.redhat.com/browse/FDP-509

and that's partially addressed by @almusil in https://patchwork.ozlabs.org/project/ovn/patch/[email protected]/ with the limitation that:

This unfortunately doesn't help with the following flows:
"ip4.src == $as1 && ip4.dst == $as2"
"ip4.src == $as1 || ip4.dst == $as2"

As a general rule of thumb for ACLs: don't use complex expressions. The more complex the expression, the more likely ovn-controller is to generate a suboptimal OpenFlow rule set. See https://bugzilla.redhat.com/show_bug.cgi?id=2180052 for an example.

And another warning here: expressions that contain overlapping IPs/CIDRs should be handled very carefully. As shown above, ovn-controller will not try to optimize an address set if it is the only expression for that particular field in the match. However, once you add at least one more condition on the same field, the address set may be optimized and ovn-controller will lose the ability to process changes incrementally for this ACL.

For example, taking the set1 from the example above: we saw that a basic ip4 && ip4.dst == $set1 match produces 4 separate OpenFlow rules. However, ip4 && (ip4.dst == $set1 || ip4.dst == 192.168.0.1) will make ovn-controller process the content of the set, and it will be optimized down to the single OpenFlow rule ip,nw_dst=10.0.0.0/24, leaving us with:

ip,nw_dst=10.0.0.0/24
ip,nw_dst=192.168.0.1

After this operation incremental processing of the address set changes is no longer possible. If the set didn't have the CIDR, but we add || ip4.dst == 10.0.0.0/24, the set will be optimized out anyway. The same is true for ip4.dst == {$set1, CIDR} and ip4.dst == {$set1, $set2} variants of the ACL. It doesn't matter how it is written, if there are overlapping CIDRs, or plain duplicates, on the same field (ip4.dst in the example), incremental processing on the side of the ovn-controller will not be possible.

As I guess that there will be overlap between NP and ANP CIDRs, I'd say that option 2 will likely just break incremental processing. Option 1 is definitely much better in that regard.

The general advice from the conclusions of https://www.openvswitch.org/support/ovscon2023/slides/OVN_expression_parsing_Fighting_inequality.pdf, to use priorities and try to split large ACLs into smaller ones, applies here.

I'd say a safer option may be to even split the ip4.dst == $address-set-ips+cidr-blocks match into two separate ACLs with two separate address sets. But a single address set should work fine as long as you're not adding any other conditions on the ip4.dst field.
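The two match shapes compared above can be written down as small builders (hypothetical helpers for illustration, not ovnk code):

```go
package main

import (
	"fmt"
	"strings"
)

// matchCombined is option 1: a single address set that holds both the
// IPs and the CIDR blocks.
func matchCombined(asName string) string {
	return fmt.Sprintf("ip4.dst == $%s", asName)
}

// matchSplit is option 2: an IP-only address set plus explicit CIDR
// disjuncts. Per the discussion above, this shape risks defeating
// incremental processing once CIDRs overlap entries in the set.
func matchSplit(asName string, cidrs []string) string {
	parts := []string{fmt.Sprintf("ip4.dst == $%s", asName)}
	for _, c := range cidrs {
		parts = append(parts, fmt.Sprintf("ip4.dst == %s", c))
	}
	return strings.Join(parts, " || ")
}

func main() {
	fmt.Println(matchCombined("a2074424610294966262"))
	fmt.Println(matchSplit("a2074424610294966262", []string{"10.244.3.0/24"}))
}
```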

@tssurya
Member Author

tssurya commented Apr 22, 2024

unrelated failures:
https://github.com/ovn-org/ovn-kubernetes/actions/runs/8781279809/job/24094058452?pr=4235

Summarizing 1 Failure:
  [FAIL] e2e egress IP validation [OVN network] Using different methods to disable a node's availability for egress Should validate the egress IP functionality against remote hosts [It] disabling egress nodes impeding GRCP health check
  /home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/egressip.go:419

Ran 94 of 265 Specs in 3341.945 seconds

which is a known flake: #4144

https://github.com/ovn-org/ovn-kubernetes/actions/runs/8781279809/job/24094055891?pr=4235 is also a known flake. I have double-checked that it's not a bug in the code per se; I might need to add retries or, similar to all our other e2e's, run these tests twice by default.

@tssurya
Member Author

tssurya commented Apr 22, 2024

Meanwhile @dceara / @igsilya / @almusil: are any of you aware of this ^, or have any of you run ovn-heater tests for address sets with CIDR blocks in them? Is there any perf impact from putting a CIDR range into an address set?

We don't have relevant ovn-heater tests yet, unfortunately.

As a general rule, if the address set has to be modified during the expression processing, that will make incremental processing impossible on the ovn-controller side.

In simple rules that only contain an address set match on a single field the address set is pretty much taken verbatim, unless it contains exact duplicates.

For example, if we have an address set set1 that contains 10.0.0.1, 10.0.0.2, 10.0.0.3 and 10.0.0.0/24, and we have an ACL ip4 && ip4.dst == $set1, we'll get 4 OpenFlow rules from it, despite the fact that the set contains overlapping CIDRs. In this case ovn-controller sacrifices the ability to optimize the set in favor of being able to process address set changes incrementally. The result will be 4 rules:

ip,nw_dst=10.0.0.0/24
ip,nw_dst=10.0.0.1
ip,nw_dst=10.0.0.2
ip,nw_dst=10.0.0.3

Thanks Ilya for the insights here. Logically speaking, overlapping CIDRs/IPs in the same address set are not something this feature should produce; I can't think of a reason why peers would overlap within the same address set from a user's standpoint, so we should be good there.

In this case there is no difference for ovn-controller if the address set contains simple IPs or CIDRs.

so something like:

1. ip4.dst == $address-set-ips+cidr-blocks versus

2. ip4.dst == $address-set-ips || ip4.dst == CIDRBlock1 || ip4.dst == CIDRBlock2

are the two options we have in ovnk today for doing ACL matches. Is one approach better than the other ^ that we need to consider?

I think option 1, one large address set, will work better. We have this bug:
https://issues.redhat.com/browse/FDP-509
and that's partially addressed by @almusil in https://patchwork.ozlabs.org/project/ovn/patch/[email protected]/ with the limitation that:

This unfortunately doesn't help with the following flows:
"ip4.src == $as1 && ip4.dst == $as2"
"ip4.src == $as1 || ip4.dst == $as2"

As a general rule of thumb for ACLs: don't use complex expressions. The more complex the expression, the more likely ovn-controller is to generate a suboptimal OpenFlow rule set. See https://bugzilla.redhat.com/show_bug.cgi?id=2180052 for an example.

And another warning here: expressions that contain overlapping IPs/CIDRs should be handled very carefully. As shown above, ovn-controller will not try to optimize an address set if it is the only expression for that particular field in the match. However, once you add at least one more condition on the same field, the address set may be optimized and ovn-controller will lose the ability to process changes incrementally for this ACL.

For example, taking the set1 from the example above: we saw that a basic ip4 && ip4.dst == $set1 match produces 4 separate OpenFlow rules. However, ip4 && (ip4.dst == $set1 || ip4.dst == 192.168.0.1) will make ovn-controller process the content of the set, and it will be optimized down to the single OpenFlow rule ip,nw_dst=10.0.0.0/24, leaving us with:

ip,nw_dst=10.0.0.0/24
ip,nw_dst=192.168.0.1

After this operation incremental processing of the address set changes is no longer possible. If the set didn't have the CIDR, but we add || ip4.dst == 10.0.0.0/24, the set will be optimized out anyway. The same is true for ip4.dst == {$set1, CIDR} and ip4.dst == {$set1, $set2} variants of the ACL. It doesn't matter how it is written, if there are overlapping CIDRs, or plain duplicates, on the same field (ip4.dst in the example), incremental processing on the side of the ovn-controller will not be possible.

As I guess that there will be overlap between NP and ANP CIDRs, I'd say that option 2 will likely just break incremental processing. Option 1 is definitely much better in that regard.

Yeah, going with option 1 for today. We have plans to try out some scale runs covering the ways users might potentially use this feature; I will share some database excerpts from those with the OVN team in the future to seek continuous improvements, if any.

Before:

    func (as *ovnAddressSet) deleteIPs(ips []net.IP) ([]ovsdb.Operation, error) {
        if len(ips) == 0 {

After:

    // deleteAddresses removes selected addresses from the existing address_set
    func (as *ovnAddressSet) deleteAddresses(addresses []string) ([]ovsdb.Operation, error) {
Contributor


A quick question: how does this work if we call addAddresses with the same IP multiple times and then deleteAddresses once for the same IP? My understanding is that the address will be removed from the database in the end. How does that work if the same address comes from multiple sources? Or is it not possible to get the same IP from multiple places (e.g. once from ANP and once from a CIDR block) ?

Contributor


The reason I ask is primarily that the code is written as if it expects potentially non-unique addresses. That might be a bit misleading for a reader. And might be dangerous if they are in fact non-unique.

Member Author


We filter them out: for a given address set we do not pass in duplicate IPs; we ensure they are unique before we send the transact (both during add and delete).

For most of the features, address sets are tied to namespaces, and pod IPs are unique, so we are not going to be adding multiple duplicated IPs into the address set where they hold different meanings. So if you delete them from one spot, they are meant to go away from the database, which is the expected behaviour.
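The "ensure they are unique before we send the transact" behaviour can be sketched roughly like this (dedupAddresses is a hypothetical helper for illustration, not the PR's actual code):

```go
package main

import "fmt"

// dedupAddresses returns the input addresses with duplicates removed,
// preserving first-seen order, so the transact never carries the same
// address twice.
func dedupAddresses(addresses []string) []string {
	seen := make(map[string]struct{}, len(addresses))
	out := make([]string, 0, len(addresses))
	for _, a := range addresses {
		if _, dup := seen[a]; dup {
			continue
		}
		seen[a] = struct{}{}
		out = append(out, a)
	}
	return out
}

func main() {
	fmt.Println(dedupAddresses([]string{"10.244.0.3", "10.244.3.0/24", "10.244.0.3"}))
	// → [10.244.0.3 10.244.3.0/24]
}
```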

Contributor

@igsilya igsilya Apr 22, 2024


Ack. I just had an impression from discussions on this PR that you're going to mix IPs from different sources in the same address set and that made me think about possible duplicates.
But if that is not the case and the user always sets unique values for the list of CIDRs, that should be fine.

Member Author

@tssurya tssurya Apr 22, 2024


oh, what??? did I edit your comment? I thought I quote replied.. dangit => wait how can I edit your comments... that seems wrong..
sorry @igsilya ! I am unsure how to fix that..

Member Author


Ack. I just had an impression from discussions on this PR that you're going to mix IPs from different sources in the same address set and that made me think about possible duplicates.

The PR changes an address set from being "ONLY contains IPs" to "contains both IPs and CIDRs"; the rest of the logic is not changing.

But if that is not the case and user always sets unique values for the list of CIDRs, that should be fine.

There is API validation in place to ensure the CIDRs for the same peer contain unique elements, i.e. a set of CIDRs; in addition to that, we do the unique filtering on the ovnk side before we send the transact.

Contributor


That's fine. :) The repo owners / org members are able to edit comments from others on GitHub, IIRC. The history can be seen in the 'edited by ...' dropdown.

Anyways, thanks for the explanation!

@jcaamano jcaamano merged commit 48abfcf into ovn-org:master Apr 22, 2024
39 checks passed
Labels
feature/admin-network-policy kind/feature All issues/PRs that are new features
Projects
Archived in project
Development

Successfully merging this pull request may close these issues.

Implement Admin Network Policy API in OVNK
5 participants