
Add Support for Hierarchical ACLs #3645

Merged: merged 2 commits on Jul 13, 2023
Conversation

tssurya
Member

@tssurya tssurya commented Jun 5, 2023

- What this PR does and why is it needed
Change all existing ACLs to tier 2

Hierarchical ACLs is a new feature
introduced in OVN to enable support
for tiered ACLs. This commit ensures
all existing ACLs for all features
are migrated to tier 2. By default, all
new ACLs must be added to tier 2.

Signed-off-by: Surya Seetharaman <[email protected]>

- Special notes for reviewers
Tiered ACLs will be used for ANP & BANP in #3659. This PR splits out the initial framework addition so that it's easier to review.

  • Upgrades should be handled automatically; we ensure existing ACLs get the tier field set to 2 as clusters upgrade. Added unit tests to verify this.
  • EDIT: Even though ACLs will be automatically updated during upgrades, we went with a controlled update of ACLs sorted by their priority (lowest first) to avoid network disruption. Credit to @jcaamano!

- How to verify it

- Description for the changelog
Move all existing ACLs to tier 2, which will be the default.
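
For context, a minimal sketch (not the PR's actual code) of what "all new ACLs default to tier 2" means in terms of the generated libovsdb model; the buildACL helper and defaultACLTier constant names here are illustrative assumptions:

package main

import (
	"fmt"

	"github.com/ovn-org/ovn-kubernetes/go-controller/pkg/nbdb"
)

// defaultACLTier is an assumed name; OVN evaluates tiers in order (0 first),
// so keeping everything at tier 2 leaves tiers 0 and 1 free for ANP/BANP.
const defaultACLTier = 2

// buildACL is an illustrative constructor: every ACL it returns carries
// Tier: 2, so callers cannot accidentally create an ACL in tier 0.
func buildACL(match string, priority int, action string) *nbdb.ACL {
	return &nbdb.ACL{
		Match:    match,
		Priority: priority,
		Action:   action,
		Tier:     defaultACLTier,
	}
}

func main() {
	acl := buildACL("ip4.src==10.244.1.2", 1001, "allow-related")
	fmt.Printf("new ACL lands in tier %d\n", acl.Tier)
}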

@tssurya tssurya changed the title Anp prework Add Support for Hierarchical ACLs Jun 5, 2023
@tssurya
Member Author

tssurya commented Jun 6, 2023

@jordigilh : e2es are failing:

2023-06-05T23:10:28.5894740Z Summarizing 3 Failures:

2023-06-05T23:10:28.5897212Z [Fail] External Gateway test suite With Admin Policy Based External Route CRs e2e multiple external gateway stale conntrack entry deletion validation Dynamic Hop: Should validate conntrack entry deletion for TCP/UDP traffic via multiple external gateways a.k.a ECMP routes [It] IPV4 udp
2023-06-05T23:10:28.5898832Z /home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/external_gateways.go:1500

2023-06-05T23:10:28.5900414Z [Fail] External Gateway test suite With Admin Policy Based External Route CRs e2e multiple external gateway validation [BeforeEach] Should validate TCP/UDP connectivity to multiple external gateways for a UDP / TCP scenario IPV6 udp
2023-06-05T23:10:28.5901771Z /home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/external_gateways.go:1207

2023-06-05T23:10:28.5903487Z [Fail] External Gateway test suite With Admin Policy Based External Route CRs e2e multiple external gateway validation [BeforeEach] Should validate TCP/UDP connectivity to multiple external gateways for a UDP / TCP scenario IPV6 udp
2023-06-05T23:10:28.5904813Z /home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/external_gateways.go:1207

2023-06-05T23:10:28.5905418Z Ran 127 of 223 Specs in 7449.727 seconds
2023-06-05T23:10:28.5906074Z FAIL! -- 126 Passed | 1 Failed | 1 Flaked | 0 Pending | 96 Skipped

2023-06-05T23:10:28.5910306Z You're using deprecated Ginkgo functionality:
2023-06-05T23:10:28.5911071Z =============================================

PTAL, because presubmits on PRs are affected by this..

@jordigilh
Contributor

> @jordigilh : e2es are failing:
> [e2e failure summary quoted above]
>
> PTAL, because presubmits on PRs are affected by this..

Opened #3649 to fix some of the failures, but the first build failed on a unit test. Does this ring any bells for you? It has nothing to do with the code I removed from the e2e tests:

Informer Event Handler Tests
/home/runner/work/ovn-kubernetes/ovn-kubernetes/go-controller/pkg/informer/informer_test.go:54
  adds existing pod and processes an update event [It]
  /home/runner/work/ovn-kubernetes/ovn-kubernetes/go-controller/pkg/informer/informer_test.go:202

  Timed out after 1.000s.
  adds
  Expected
      <int32>: 1
  to equal
      <int32>: 2

  /home/runner/work/ovn-kubernetes/ovn-kubernetes/go-controller/pkg/informer/informer_test.go:278

I will keep retesting though.

@tssurya
Member Author

tssurya commented Jun 6, 2023

@jordigilh : yes, you may ignore the handler-events flake, it's not related to your changes.
Thanks for taking a look at the APB flakes!

@tssurya tssurya force-pushed the anp-prework branch 4 times, most recently from 1bb7c9c to bea7ee9 Compare June 6, 2023 13:45
@npinaeva
Member

npinaeva commented Jun 6, 2023

I would say the external_ids_syncer package should use the zero-value tier instead of the default one, because it is supposed to update old-version ACLs (that don't have any tier), same as any stale-ACL update test.
Also, what is the procedure for adding a new field from the db point of view? Since the Tier field is not optional, will the db itself assign some zero value to all existing ACLs on upgrade?

@tssurya
Member Author

tssurya commented Jun 6, 2023

> I would say the external_ids_syncer package should use the zero-value tier instead of the default one, because it is supposed to update old-version ACLs (that don't have any tier), same as any stale-ACL update test.

hmm, you mean they don't exist anymore on our current versions? I don't want to keep anything at tier 0, since those will be the first to be evaluated. So if you are sure these are stale ones that won't exist in the current version of the cluster where I am adding this feature, then I can keep it at 0; otherwise 2 would be the safest to go with...

> Also, what is the procedure for adding a new field from the db point of view? Since the Tier field is not optional, will the db itself assign some zero value to all existing ACLs on upgrade?

yes, by default it will be set to 0 on upgrades... (mentioned in commit 3's message)

@flavio-fernandes
Contributor

The changes look fine. I wanted to see what the ACLs in the NBDB look like before applying this PR and noticed that the rows already contain the tier column:

[master, using kind cluster]
❯  POD=$(kubectl get pod -n ovn-kubernetes -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | grep ovnkube-db- | head -1) && kubectl exec -ti $POD -n ovn-kubernetes -c nb-ovsdb -- bash || { echo "Failed: $?" }


[root@ovn-control-plane ~]# ovn-nbctl list acl
...
_uuid               : bdf64cb2-c6ab-49d2-9649-1859d31c3a5e
action              : allow-related
direction           : to-lport
external_ids        : {ip="10.244.1.2", "k8s.ovn.org/id"="default-network-controller:NetpolNode:ovn-worker2:10.244.1.2", "k8s.ovn.org/name"=ovn-worker2, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=NetpolNode}
label               : 0
log                 : false
match               : "ip4.src==10.244.1.2"
meter               : acl-logging
name                : []
options             : {}
priority            : 1001
severity            : []
tier                : 0     <-----

Then, I created a simple netpol to get more ACL entries:

[root@ovn-control-plane ~]# ovn-nbctl list acl
_uuid               : 5d10edf0-d433-4daa-93c7-9d9406283953
action              : allow-related
direction           : from-lport
external_ids        : {direction=Egress, gress-index="0", ip-block-index="0", "k8s.ovn.org/id"="default-network-controller:NetworkPolicy:default:allow-egress-external:Egress:0:-1:0", "k8s.ovn.org/name"="default:allow-egress-external", "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=NetworkPolicy, port-policy-index="-1"}
label               : 0
log                 : false
match               : "ip4.dst == 1.2.3.0/24 && inport == @a10173564782083736592"
meter               : acl-logging
name                : "NP:default:allow-egress-external:Egress:0"
options             : {apply-after-lb="true"}
priority            : 1001
severity            : []
tier                : 0
[root@ovn-control-plane ~]# ovn-nbctl --format csv --column name,tier list acl
name,tier
"""NP:default:allow-egress-external:Egress:0""",0
[],0
[],0
[],0
"""NP:default:Egress""",0
[],0
[],0
"""NP:default:Ingress""",0
"""NP:default:Egress""",0
"""NP:default:Ingress""",0

Then, I built the code with this PR's changes and restarted the master pods:

❯ git co e2b7bd4287ed29d0cc3b0a02d50326768f07fc18
❯ make -C go-controller build && \
cp -v ./go-controller/_output/go/bin/ovnkube ./dist/images/ovnkube && \
docker build -t localhost/ovn-daemonset-f:dev -f ./dist/images/Dockerfile.fedora dist/images/ && \
kind load docker-image localhost/ovn-daemonset-f:dev --name ovn && \
kubectl delete pod -l "name=ovnkube-master" -n ovn-kubernetes && \
echo ok

And confirmed all ACLs are tier 2 now:

[root@ovn-control-plane ~]# ovn-nbctl list acl bdf64cb2-c6ab-49d2-9649-1859d31c3a5e
_uuid               : bdf64cb2-c6ab-49d2-9649-1859d31c3a5e
action              : allow-related
direction           : to-lport
external_ids        : {ip="10.244.1.2", "k8s.ovn.org/id"="default-network-controller:NetpolNode:ovn-worker2:10.244.1.2", "k8s.ovn.org/name"=ovn-worker2, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=NetpolNode}
label               : 0
log                 : false
match               : "ip4.src==10.244.1.2"
meter               : acl-logging
name                : []
options             : {}
priority            : 1001
severity            : []
tier                : 2

[root@ovn-control-plane ~]# ovn-nbctl list acl 5d10edf0-d433-4daa-93c7-9d9406283953
_uuid               : 5d10edf0-d433-4daa-93c7-9d9406283953
action              : allow-related
direction           : from-lport
external_ids        : {direction=Egress, gress-index="0", ip-block-index="0", "k8s.ovn.org/id"="default-network-controller:NetworkPolicy:default:allow-egress-external:Egress:0:-1:0", "k8s.ovn.org/name"="default:allow-egress-external", "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=NetworkPolicy, port-policy-index="-1"}
label               : 0
log                 : false
match               : "ip4.dst == 1.2.3.0/24 && inport == @a10173564782083736592"
meter               : acl-logging
name                : "NP:default:allow-egress-external:Egress:0"
options             : {apply-after-lb="true"}
priority            : 1001
severity            : []
tier                : 2


[root@ovn-control-plane ~]# ovn-nbctl --format csv --column name,tier list acl
name,tier
"""NP:default:allow-egress-external:Egress:0""",2
[],2
[],2
[],2
"""NP:default:Egress""",2
[],2
[],2
"""NP:default:Ingress""",2
"""NP:default:Egress""",2
"""NP:default:Ingress""",2

@flavio-fernandes
Contributor

flavio-fernandes commented Jun 7, 2023

BUT then, I manually set the tier back to 0 on a specific ACL and restarted the db:

[root@ovn-control-plane ~]# ovn-nbctl set acl 5d10edf0-d433-4daa-93c7-9d9406283953 tier=0
[root@ovn-control-plane ~]# ovn-nbctl list acl 5d10edf0-d433-4daa-93c7-9d9406283953
_uuid               : 5d10edf0-d433-4daa-93c7-9d9406283953
action              : allow-related
direction           : from-lport
external_ids        : {direction=Egress, gress-index="0", ip-block-index="0", "k8s.ovn.org/id"="default-network-controller:NetworkPolicy:default:allow-egress-external:Egress:0:-1:0", "k8s.ovn.org/name"="default:allow-egress-external", "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=NetworkPolicy, port-policy-index="-1"}
label               : 0
log                 : false
match               : "ip4.dst == 1.2.3.0/24 && inport == @a10173564782083736592"
meter               : acl-logging
name                : "NP:default:allow-egress-external:Egress:0"
options             : {apply-after-lb="true"}
priority            : 1001
severity            : []
tier                : 0

Restarted master:

❯ kubectl delete pod -l "name=ovnkube-master" -n ovn-kubernetes
pod "ovnkube-master-7db7c7d5f8-rb6pg" deleted

back in the db shell:

[root@ovn-control-plane ~]#
[root@ovn-control-plane ~]# ovn-nbctl list acl 5d10edf0-d433-4daa-93c7-9d9406283953
_uuid               : 5d10edf0-d433-4daa-93c7-9d9406283953
action              : allow-related
direction           : from-lport
external_ids        : {direction=Egress, gress-index="0", ip-block-index="0", "k8s.ovn.org/id"="default-network-controller:NetworkPolicy:default:allow-egress-external:Egress:0:-1:0", "k8s.ovn.org/name"="default:allow-egress-external", "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=NetworkPolicy, port-policy-index="-1"}
label               : 0
log                 : false
match               : "ip4.dst == 1.2.3.0/24 && inport == @a10173564782083736592"
meter               : acl-logging
name                : "NP:default:allow-egress-external:Egress:0"
options             : {apply-after-lb="true"}
priority            : 1001
severity            : []
tier                : 2

Which means... we are running the upgrade path every time? Should the tier be left alone once we had it set to 2 for the very first time?

I0607 20:29:35.434209      57 cache.go:1036] cache "msg"="creating row" "database"="OVN_Northbound" 
"model"="&{UUID:5d10edf0-d433-4daa-93c7-9d9406283953 Action:allow-related Direction:from-lport ExternalIDs:map[direction:Egress gress-index:0 ip-block-index:0 k8s.ovn.org/id:default-network-controller:NetworkPolicy:default:allow-egress-external:Egress:0:-1:0 k8s.ovn.org/name:default:allow-egress-external k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1] Label:0 Log:false Match:ip4.dst == 1.2.3.0/24 && inport == @a10173564782083736592 Meter:0xc0007aecd0 Name:0xc0007aece0 Options:map[apply-after-lb:true] Priority:1001 Severity:<nil> Tier:0}" "table"="ACL" "uuid"="5d10edf0-d433-4daa-93c7-9d9406283953"
...
I0607 20:29:35.583585      57 cache.go:1069] cache "msg"="updated row" "database"="OVN_Northbound" 
"new"="&{UUID:5d10edf0-d433-4daa-93c7-9d9406283953 Action:allow-related Direction:from-lport ExternalIDs:map[direction:Egress gress-index:0 ip-block-index:0 k8s.ovn.org/id:default-network-controller:NetworkPolicy:default:allow-egress-external:Egress:0:-1:0 k8s.ovn.org/name:default:allow-egress-external k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1] Label:0 Log:false Match:ip4.dst == 1.2.3.0/24 && inport == @a10173564782083736592 Meter:0xc000f02070 Name:0xc000f02080 Options:map[apply-after-lb:true] Priority:1001 Severity:<nil> Tier:2}" 

"old"="&{UUID:5d10edf0-d433-4daa-93c7-9d9406283953 Action:allow-related Direction:from-lport ExternalIDs:map[direction:Egress gress-index:0 ip-block-index:0 k8s.ovn.org/id:default-network-controller:NetworkPolicy:default:allow-egress-external:Egress:0:-1:0 k8s.ovn.org/name:default:allow-egress-external k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1] Label:0 Log:false Match:ip4.dst == 1.2.3.0/24 && inport == @a10173564782083736592 Meter:0xc000f02090 Name:0xc000f020a0 Options:map[apply-after-lb:true] Priority:1001 Severity:<nil> Tier:0}" "table"="ACL" "uuid"="5d10edf0-d433-4daa-93c7-9d9406283953"

go-controller/pkg/ovn/policy_test.go: 3 outdated review comments (resolved)
@tssurya
Member Author

tssurya commented Jun 8, 2023

> BUT then, I manually set the tier back to 0 on a specific ACL and restarted the db:
> [ovn-nbctl / kubectl output quoted above]
>
> Which means... we are running the upgrade path every time? Should the tier be left alone once we had it set to 2 for the very first time?

no, I think that is correct: upon startup, if we find any ACLs in tier 0, we should update them to tier 2; no ACL other than the ones I'll add for ANP should be in a non-2 tier. So if you move it back to 0 and restart, then yes, we should upgrade all ACLs again. However, if all ACLs are already at 2, the libovsdb cache will realize there is nothing to mutate, so it won't update anything that is already at tier 2.

> [cache "creating row" / "updated row" log quoted above, showing Tier:0 updated to Tier:2]

If you find that ACLs which were already in tier 2 are getting updated again - which I hope is not the case - then there is some bug on the libovsdb side.
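
To illustrate the idempotency argument, here is a rough sketch under assumptions (syncACLTiers is a hypothetical name, not the PR's actual helper): only ACLs not already at tier 2 produce an update, so a second restart transacts nothing:

package main

import (
	"fmt"

	"github.com/ovn-org/ovn-kubernetes/go-controller/pkg/nbdb"
)

// syncACLTiers returns only the ACLs whose tier actually needs to change;
// rows already at tier 2 are skipped, making a re-run of the sync a no-op.
func syncACLTiers(acls []*nbdb.ACL) []*nbdb.ACL {
	var toUpdate []*nbdb.ACL
	for _, acl := range acls {
		if acl.Tier == 2 {
			continue // already migrated: nothing to transact for this row
		}
		acl.Tier = 2
		toUpdate = append(toUpdate, acl)
	}
	return toUpdate
}

func main() {
	acls := []*nbdb.ACL{{Priority: 1000, Tier: 0}, {Priority: 1001, Tier: 2}}
	fmt.Printf("%d of %d ACLs need a tier update\n", len(syncACLTiers(acls)), len(acls))
}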

TYSM @flavio-fernandes for doing a thorough review!

@flavio-fernandes
Contributor

flavio-fernandes commented Jun 8, 2023

> no, I think that is correct: upon startup, if we find any ACLs in tier 0, we should update them to tier 2; no ACL other than the ones I'll add for ANP should be in a non-2 tier.

yes, you are right. K8s mandates what it should be, and I realized late last night that fixing it back to 2 is indeed the right implementation.

@flavio-fernandes
Contributor

/lgtm

@tssurya tssurya mentioned this pull request Jun 9, 2023
@npinaeva
Member

> hmm, you mean they don't exist anymore on our current versions? I don't want to keep anything at tier 0, since those will be the first to be evaluated. So if you are sure these are stale ones that won't exist in the current version of the cluster where I am adding this feature, then I can keep it at 0; otherwise 2 would be the safest to go with...

external_ids_syncer will be executed on upgrade before ACLs are updated to tier 2, and that package doesn't really care about the tier; that is why I think it should use the zero value.
Stale-ACL tests usually cover all the possible changes between versions, so if you use tier 0 in those, you don't need a separate test for the tier update, and you make sure that tier 0 doesn't break any other ACL syncs.

> yes, by default it will be set to 0 on upgrades... (mentioned in commit 3's message)

It may be reasonable to squash commits 2 and 3, because they are about the same change, and seeing the sync path and the updated tests and functions in the same commit may be less confusing, wdyt?

@tssurya
Member Author

tssurya commented Jun 14, 2023

thanks for reviewing @npinaeva !

> hmm, you mean they don't exist anymore on our current versions? I don't want to keep anything at tier 0, since those will be the first to be evaluated. So if you are sure these are stale ones that won't exist in the current version of the cluster where I am adding this feature, then I can keep it at 0; otherwise 2 would be the safest to go with...

> external_ids_syncer will be executed on upgrade before ACLs are updated to tier 2, and that package doesn't really care about the tier; that is why I think it should use the zero value.

that sounds to me like it really doesn't matter whether it's 0 or 2? Let's keep it at 2 so that it's less confusing - to be on the safe side if there are bugs in that code and something is still left over etc.. who knows?! Better to keep them at 2 in the worst case? The only reason I am hesitant to leave them at 0 is that if in some cluster there really are leftovers, because that code didn't run for whatever insane reason in the skew upgrades, they will be left at tier 0, and I don't want any of them outside of ANP at that tier. If all is well and we are in an ideal place, it's a no-op, right?

It is possible I am being paranoid - but just to get us both on the same page: with the way I have done it now, are there any functional concerns?

> yes, by default it will be set to 0 on upgrades... (mentioned in commit 3's message)

> It may be reasonable to squash commits 2 and 3, because they are about the same change, and seeing the sync path and the updated tests and functions in the same commit may be less confusing, wdyt?

I don't have any preference on process-related things like how commits are organized. I will squash them together if that makes it less confusing for you. Happy to oblige.

@tssurya
Member Author

tssurya commented Jun 14, 2023

@npinaeva : squashed the commits together, hope that works for you.

@tssurya
Member Author

tssurya commented Jun 14, 2023

Unrelated failure:

2023-06-14T16:22:09.9010909Z Summarizing 1 Failure:

2023-06-14T16:22:09.9014943Z [Fail] Hybrid Overlay Node Linux Operations [It] sets up tunnels for Windows nodes
2023-06-14T16:22:09.9017019Z /home/runner/work/ovn-kubernetes/ovn-kubernetes/go-controller/hybrid-overlay/pkg/controller/node_linux_test.go:680

2023-06-14T16:22:09.9019291Z Ran 8 of 8 Specs in 4.901 seconds
2023-06-14T16:22:09.9020938Z FAIL! -- 7 Passed | 1 Failed | 0 Pending | 0 Skipped
2023-06-14T16:22:09.9022165Z --- FAIL: TestHybridOverlayControllerSuite (4.91s)

https://github.com/ovn-org/ovn-kubernetes/actions/runs/5269310815/jobs/9527263051?pr=3645

@tssurya
Member Author

tssurya commented Jun 14, 2023

/retest-failed

@ovn-robot
Collaborator

Oops, something went wrong:

Must have admin rights to Repository.

@tssurya
Member Author

tssurya commented Jun 29, 2023

I was running the tests locally:

OVN External Gateway policy when updating a policy 
  validates that changing the from selector will retarget the new namespaces
  /go/src/github.com/ovn-org/ovn-kubernetes/go-controller/pkg/ovn/controller/apbroute/external_controller_policy_test.go:403
2023/06/29 11:48:42 cache: "caller"={"file":"cache.go","line":1028} "level"=5 "msg"="processing update" "uuid"="6ac7b046-6211-4cf0-a0e7-d49c4d1b8113" "table"="Database"
2023/06/29 11:48:42 cache: "caller"={"file":"cache.go","line":1048} "level"=5 "msg"="inserting row" "uuid"="6ac7b046-6211-4cf0-a0e7-d49c4d1b8113" "table"="Database" "model"="&{UUID:6ac7b046-6211-4cf0-a0e7-d49c4d1b8113 Cid:<nil> Connected:true Index:<nil> Leader:true Model:clustered Name:OVN_Northbound Schema:<nil> Sid:0xc00480afb0}"
2023/06/29 11:48:42 cache: "caller"={"file":"cache.go","line":1028} "level"=5 "msg"="processing update" "uuid"="1125ff36-cee6-4d4e-92f6-c765d4e4876b" "table"="Logical_Switch"
2023/06/29 11:48:42 cache: "caller"={"file":"cache.go","line":1048} "level"=5 "msg"="inserting row" "uuid"="1125ff36-cee6-4d4e-92f6-c765d4e4876b" "table"="Logical_Switch" "model"="&{UUID:1125ff36-cee6-4d4e-92f6-c765d4e4876b ACLs:[] Copp:<nil> DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[] LoadBalancerGroup:[] Name:node1 OtherConfig:map[] Ports:[] QOSRules:[]}"
2023/06/29 11:48:42 cache: "caller"={"file":"cache.go","line":1028} "level"=5 "msg"="processing update" "uuid"="2477f21a-6083-41a7-b545-24c3b3489c90" "table"="Database"
2023/06/29 11:48:42 cache: "caller"={"file":"cache.go","line":1048} "level"=5 "msg"="inserting row" "uuid"="2477f21a-6083-41a7-b545-24c3b3489c90" "table"="Database" "model"="&{UUID:2477f21a-6083-41a7-b545-24c3b3489c90 Cid:<nil> Connected:true Index:<nil> Leader:true Model:clustered Name:OVN_Southbound Schema:<nil> Sid:0xc005516000}"
2023/06/29 11:48:43 server/transaction/cache: "caller"={"file":"cache.go","line":1028} "level"=5 "msg"="processing update" "uuid"="93c7636d-bf7a-411c-924a-589aa21c3022" "table"="NB_Global"
2023/06/29 11:48:43 server/transaction/cache: "caller"={"file":"cache.go","line":1048} "level"=5 "msg"="inserting row" "uuid"="93c7636d-bf7a-411c-924a-589aa21c3022" "table"="NB_Global" "model"="&{UUID:93c7636d-bf7a-411c-924a-589aa21c3022 Connections:[] ExternalIDs:map[] HvCfg:0 HvCfgTimestamp:0 Ipsec:false Name:global NbCfg:0 NbCfgTimestamp:0 Options:map[] SbCfg:0 SbCfgTimestamp:0 SSL:<nil>}"
2023/06/29 11:48:43 cache: "caller"={"file":"cache.go","line":1028} "level"=5 "msg"="processing update" "uuid"="93c7636d-bf7a-411c-924a-589aa21c3022" "table"="NB_Global"
2023/06/29 11:48:43 cache: "caller"={"file":"cache.go","line":1048} "level"=5 "msg"="inserting row" "uuid"="93c7636d-bf7a-411c-924a-589aa21c3022" "table"="NB_Global" "model"="&{UUID:93c7636d-bf7a-411c-924a-589aa21c3022 Connections:[] ExternalIDs:map[] HvCfg:0 HvCfgTimestamp:0 Ipsec:false Name:global NbCfg:0 NbCfgTimestamp:0 Options:map[] SbCfg:0 SbCfgTimestamp:0 SSL:<nil>}"
2023/06/29 11:48:43 server/transaction/cache: "caller"={"file":"cache.go","line":1028} "level"=5 "msg"="processing update" "uuid"="93c7636d-bf7a-411c-924a-589aa21c3022" "table"="NB_Global"
2023/06/29 11:48:43 server/transaction/cache: "caller"={"file":"cache.go","line":1083} "level"=5 "msg"="deleting row" "uuid"="93c7636d-bf7a-411c-924a-589aa21c3022" "table"="NB_Global" "model"="&{UUID:93c7636d-bf7a-411c-924a-589aa21c3022 Connections:[] ExternalIDs:map[] HvCfg:0 HvCfgTimestamp:0 Ipsec:false Name:global NbCfg:0 NbCfgTimestamp:0 Options:map[] SbCfg:0 SbCfgTimestamp:0 SSL:<nil>}"
2023/06/29 11:48:43 cache: "caller"={"file":"cache.go","line":1028} "level"=5 "msg"="processing update" "uuid"="93c7636d-bf7a-411c-924a-589aa21c3022" "table"="NB_Global"
2023/06/29 11:48:43 cache: "caller"={"file":"cache.go","line":1083} "level"=5 "msg"="deleting row" "uuid"="93c7636d-bf7a-411c-924a-589aa21c3022" "table"="NB_Global" "model"="&{UUID:93c7636d-bf7a-411c-924a-589aa21c3022 Connections:[] ExternalIDs:map[] HvCfg:0 HvCfgTimestamp:0 Ipsec:false Name:global NbCfg:0 NbCfgTimestamp:0 Options:map[] SbCfg:0 SbCfgTimestamp:0 SSL:<nil>}"
I0629 11:48:43.108086   16421 shared_informer.go:311] Waiting for caches to sync for adminpolicybasedexternalroutes
I0629 11:48:43.108141   16421 shared_informer.go:311] Waiting for caches to sync for apbexternalroutenamespaces
I0629 11:48:43.108154   16421 shared_informer.go:311] Waiting for caches to sync for apbexternalroutepods
I0629 11:48:43.108218   16421 shared_informer.go:318] Caches are synced for apbexternalroutenamespaces
I0629 11:48:43.108233   16421 shared_informer.go:318] Caches are synced for apbexternalroutepods
I0629 11:48:43.208841   16421 shared_informer.go:318] Caches are synced for adminpolicybasedexternalroutes
I0629 11:48:43.211308   16421 external_controller_policy.go:84] Added Admin Policy Based External Route dynamic
I0629 11:48:43.214024   16421 external_controller_policy_test.go:423] TROZET BEFORE UPDATE

• Failure [5.772 seconds]
OVN External Gateway policy
/go/src/github.com/ovn-org/ovn-kubernetes/go-controller/pkg/ovn/controller/apbroute/external_controller_policy_test.go:126
  when updating a policy
  /go/src/github.com/ovn-org/ovn-kubernetes/go-controller/pkg/ovn/controller/apbroute/external_controller_policy_test.go:401
    validates that changing the from selector will retarget the new namespaces [It]
    /go/src/github.com/ovn-org/ovn-kubernetes/go-controller/pkg/ovn/controller/apbroute/external_controller_policy_test.go:403

    Timed out after 5.000s.
      (*apbroute.namespaceInfo)(
    - 	&{
    - 		Policies:        sets.Set[string]{"dynamic": {}},
    - 		StaticGateways:  s"",
    - 		DynamicGateways: map[types.NamespacedName]*apbroute.gatewayInfo{s"default/pod_1": s"BFDEnabled: false, Gateways: map[192.168.10.1:{}]"},
    - 	},
    + 	nil,
      )
    

    /go/src/github.com/ovn-org/ovn-kubernetes/go-controller/pkg/ovn/controller/apbroute/external_controller_policy_test.go:434
Summarizing 1 Failure:

[Fail] OVN External Gateway policy when updating a policy [It] validates that changing the from selector will retarget the new namespaces 
/go/src/github.com/ovn-org/ovn-kubernetes/go-controller/pkg/ovn/controller/apbroute/external_controller_policy_test.go:434

Ran 51 of 51 Specs in 34.466 seconds

That one is unrelated to this PR; otherwise we are looking good.

@tssurya
Member Author

tssurya commented Jul 10, 2023

@jcaamano raised an important point there -> network disruption during upgrades. Since we are not updating all ACLs to tier 2 at once from sync, and are doing it in stages, it's possible we get stuck in a situation where the priority-1000 default deny is in tier 0 and the priority-1001 allow-related is in tier 2, in which case the allow won't work during upgrades. This should be fixed.

Kudos to @jcaamano for thinking of this corner-case yet very important scenario!

two options:

  1. either we do a controlled update of ACLs, lower priority first and then higher (@jcaamano's idea; credit to him), OR
  2. we update all ACLs at once from 0 to 2

Option (2) might be a VERY large transaction in big clusters. Either way, I don't see how we can avoid having a separate sync on startup.. (a rough sketch of option (1) follows below)
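
For illustration, a rough sketch of option (1) under assumptions (updateTier0ACLs and the per-ACL update callback are hypothetical, and the batching strategy is elided); the point is the ordering: the priority-1000 deny reaches tier 2 before the priority-1001 allow that overrides it:

package main

import (
	"fmt"
	"sort"

	"github.com/ovn-org/ovn-kubernetes/go-controller/pkg/nbdb"
)

// updateTier0ACLs migrates tier-0 ACLs in ascending priority order. Since
// tier 0 is evaluated before tier 2, moving the lower-priority deny out of
// tier 0 first means the higher-priority allow stays effective throughout.
func updateTier0ACLs(aclsInTier0 []*nbdb.ACL, update func(*nbdb.ACL) error) error {
	sort.Slice(aclsInTier0, func(i, j int) bool {
		return aclsInTier0[i].Priority < aclsInTier0[j].Priority
	})
	for _, acl := range aclsInTier0 {
		acl.Tier = 2
		if err := update(acl); err != nil { // one transaction per ACL (assumed)
			return err
		}
	}
	return nil
}

func main() {
	acls := []*nbdb.ACL{{Priority: 1001, Tier: 0}, {Priority: 1000, Tier: 0}}
	_ = updateTier0ACLs(acls, func(a *nbdb.ACL) error {
		fmt.Printf("moved priority-%d ACL to tier %d\n", a.Priority, a.Tier)
		return nil
	})
}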

@tssurya
Member Author

tssurya commented Jul 10, 2023

2023-07-10T21:54:41.6454960Z Summarizing 1 Failure:

2023-07-10T21:54:41.6457611Z [Fail] OVN Egress Gateway Operations on setting pod gateway annotations reconciles deleting a host networked pod acting as a exgw for another namespace for existing pod [It] No BFD and with overlapping APB External Route CR should keep the annotation unchanged
2023-07-10T21:54:41.6459070Z /home/runner/work/ovn-kubernetes/ovn-kubernetes/go-controller/pkg/ovn/egressgw_test.go:1565
2023-07-10T21:54:41.6459353Z 

legacy egw test failing; unrelated

@tssurya tssurya closed this Jul 10, 2023
@tssurya tssurya reopened this Jul 10, 2023
sort.Slice(aclsInTier0, func(i, j int) bool {
	return aclsInTier0[i].Priority < aclsInTier0[j].Priority
}) // O(nlogn); unstable sort
klog.Infof("SURYA %v", aclsInTier0)
Member Author

TODO: Remove these once you see logs from a successful upgrade that show the sorted order being processed.

Member Author

all right! tested upgrades:

I0711 07:51:35.776294      58 acl_sync.go:150] Updating Tier of existing ACLs...
I0711 07:51:35.776356      58 acl_sync.go:159] SURYA [0xc0007214d0 0xc000721560 0xc0007215f0 0xc000721680 0xc000721710 0xc000721830 0xc000721290 0xc000721440 0xc0007218c0 0xc0007217a0 0xc000721950 0xc000721320 0xc0007213b0]
I0711 07:51:35.776374      58 acl_sync.go:161] SURYA: Before sort
I0711 07:51:35.776385      58 acl_sync.go:164] SURYA map[direction:Ingress gress-index:0 ip-block-index:-1 k8s.ovn.org/id:default-network-controller:NetworkPolicy:surya5:allow-ingress-to-foo4-from-surya5:Ingress:0:-1:-1 k8s.ovn.org/name:surya5:allow-ingress-to-foo4-from-surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1]/1001
I0711 07:51:35.776407      58 acl_sync.go:164] SURYA map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Egress:arpAllow k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:arpAllow]/1001
I0711 07:51:35.776434      58 acl_sync.go:164] SURYA map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Ingress:arpAllow k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:arpAllow]/1001
I0711 07:51:35.776458      58 acl_sync.go:164] SURYA map[direction:Ingress gress-index:0 ip-block-index:-1 k8s.ovn.org/id:default-network-controller:NetworkPolicy:surya5:allow-ingress-to-foo4-from-foo2:Ingress:0:-1:-1 k8s.ovn.org/name:surya5:allow-ingress-to-foo4-from-foo2 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1]/1001
I0711 07:51:35.776476      58 acl_sync.go:164] SURYA map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Ingress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]/1001
I0711 07:51:35.776490      58 acl_sync.go:164] SURYA map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Egress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]/1001
I0711 07:51:35.776506      58 acl_sync.go:164] SURYA map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Egress:defaultDeny k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:defaultDeny]/1000
I0711 07:51:35.776520      58 acl_sync.go:164] SURYA map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Ingress:defaultDeny k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:defaultDeny]/1000
I0711 07:51:35.776536      58 acl_sync.go:164] SURYA map[ip:10.244.2.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-control-plane:10.244.2.2 k8s.ovn.org/name:ovn-control-plane k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]/1001
I0711 07:51:35.776551      58 acl_sync.go:164] SURYA map[k8s.ovn.org/id:default-network-controller:EgressFirewall:default:0 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressFirewall rule-index:0]/10000
I0711 07:51:35.776566      58 acl_sync.go:164] SURYA map[ip:10.244.1.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-worker2:10.244.1.2 k8s.ovn.org/name:ovn-worker2 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]/1001
I0711 07:51:35.776581      58 acl_sync.go:164] SURYA map[k8s.ovn.org/id:default-network-controller:EgressFirewall:default:1 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressFirewall rule-index:1]/9999
I0711 07:51:35.776595      58 acl_sync.go:164] SURYA map[ip:10.244.0.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-worker:10.244.0.2 k8s.ovn.org/name:ovn-worker k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]/1001
I0711 07:51:35.776613      58 acl_sync.go:169] SURYA: After sort
I0711 07:51:35.776624      58 acl_sync.go:172] SURYA map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Egress:defaultDeny k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:defaultDeny]/1000
I0711 07:51:35.776649      58 acl_sync.go:172] SURYA map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Ingress:defaultDeny k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:defaultDeny]/1000
I0711 07:51:35.776665      58 acl_sync.go:172] SURYA map[direction:Ingress gress-index:0 ip-block-index:-1 k8s.ovn.org/id:default-network-controller:NetworkPolicy:surya5:allow-ingress-to-foo4-from-foo2:Ingress:0:-1:-1 k8s.ovn.org/name:surya5:allow-ingress-to-foo4-from-foo2 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1]/1001
I0711 07:51:35.776687      58 acl_sync.go:172] SURYA map[direction:Ingress gress-index:0 ip-block-index:-1 k8s.ovn.org/id:default-network-controller:NetworkPolicy:surya5:allow-ingress-to-foo4-from-surya5:Ingress:0:-1:-1 k8s.ovn.org/name:surya5:allow-ingress-to-foo4-from-surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1]/1001
I0711 07:51:35.776704      58 acl_sync.go:172] SURYA map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Ingress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]/1001
I0711 07:51:35.776720      58 acl_sync.go:172] SURYA map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Egress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]/1001
I0711 07:51:35.776736      58 acl_sync.go:172] SURYA map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Ingress:arpAllow k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:arpAllow]/1001
I0711 07:51:35.776751      58 acl_sync.go:172] SURYA map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Egress:arpAllow k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:arpAllow]/1001
I0711 07:51:35.776767      58 acl_sync.go:172] SURYA map[ip:10.244.2.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-control-plane:10.244.2.2 k8s.ovn.org/name:ovn-control-plane k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]/1001
I0711 07:51:35.776782      58 acl_sync.go:172] SURYA map[ip:10.244.1.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-worker2:10.244.1.2 k8s.ovn.org/name:ovn-worker2 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]/1001
I0711 07:51:35.776796      58 acl_sync.go:172] SURYA map[ip:10.244.0.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-worker:10.244.0.2 k8s.ovn.org/name:ovn-worker k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]/1001
I0711 07:51:35.776822      58 acl_sync.go:172] SURYA map[k8s.ovn.org/id:default-network-controller:EgressFirewall:default:1 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressFirewall rule-index:1]/9999
I0711 07:51:35.776838      58 acl_sync.go:172] SURYA map[k8s.ovn.org/id:default-network-controller:EgressFirewall:default:0 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressFirewall rule-index:0]/10000
I0711 07:51:35.776853      58 acl_sync.go:176] SURYA map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Egress:defaultDeny k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:defaultDeny]/1000
I0711 07:51:35.776869      58 acl_sync.go:176] SURYA map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Ingress:defaultDeny k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:defaultDeny]/1000
I0711 07:51:35.776886      58 acl_sync.go:176] SURYA map[direction:Ingress gress-index:0 ip-block-index:-1 k8s.ovn.org/id:default-network-controller:NetworkPolicy:surya5:allow-ingress-to-foo4-from-foo2:Ingress:0:-1:-1 k8s.ovn.org/name:surya5:allow-ingress-to-foo4-from-foo2 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1]/1001
I0711 07:51:35.776910      58 acl_sync.go:176] SURYA map[direction:Ingress gress-index:0 ip-block-index:-1 k8s.ovn.org/id:default-network-controller:NetworkPolicy:surya5:allow-ingress-to-foo4-from-surya5:Ingress:0:-1:-1 k8s.ovn.org/name:surya5:allow-ingress-to-foo4-from-surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1]/1001
I0711 07:51:35.776930      58 acl_sync.go:176] SURYA map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Ingress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]/1001
I0711 07:51:35.776945      58 acl_sync.go:176] SURYA map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Egress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]/1001
I0711 07:51:35.776961      58 acl_sync.go:176] SURYA map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Ingress:arpAllow k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:arpAllow]/1001
I0711 07:51:35.776977      58 acl_sync.go:176] SURYA map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Egress:arpAllow k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:arpAllow]/1001
I0711 07:51:35.776992      58 acl_sync.go:176] SURYA map[ip:10.244.2.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-control-plane:10.244.2.2 k8s.ovn.org/name:ovn-control-plane k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]/1001
I0711 07:51:35.777007      58 acl_sync.go:176] SURYA map[ip:10.244.1.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-worker2:10.244.1.2 k8s.ovn.org/name:ovn-worker2 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]/1001
I0711 07:51:35.777022      58 acl_sync.go:176] SURYA map[ip:10.244.0.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-worker:10.244.0.2 k8s.ovn.org/name:ovn-worker k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]/1001
I0711 07:51:35.777038      58 acl_sync.go:176] SURYA map[k8s.ovn.org/id:default-network-controller:EgressFirewall:default:1 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressFirewall rule-index:1]/9999
I0711 07:51:35.777054      58 acl_sync.go:176] SURYA map[k8s.ovn.org/id:default-network-controller:EgressFirewall:default:0 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressFirewall rule-index:0]/10000
I0711 07:51:35.777203      58 model_client.go:372] Update operations generated as: [{Op:update Table:ACL Row:map[action:drop direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Egress:defaultDeny k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:defaultDeny]} log:false match:inport == @a11718373952692888238_egressDefaultDeny meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:Egress]} options:{GoMap:map[apply-after-lb:true]} priority:1000 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {b32fe188-bb11-49cf-bf39-0cfc7361e3c9}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]
I0711 07:51:35.777371      58 model_client.go:372] Update operations generated as: [{Op:update Table:ACL Row:map[action:drop direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Ingress:defaultDeny k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:defaultDeny]} log:false match:outport == @a11718373952692888238_ingressDefaultDeny meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:Ingress]} options:{GoMap:map[]} priority:1000 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {9fa36da4-8cb4-4286-a00e-ba62fa5c80d7}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]
I0711 07:51:35.777541      58 model_client.go:372] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[direction:Ingress gress-index:0 ip-block-index:-1 k8s.ovn.org/id:default-network-controller:NetworkPolicy:surya5:allow-ingress-to-foo4-from-foo2:Ingress:0:-1:-1 k8s.ovn.org/name:surya5:allow-ingress-to-foo4-from-foo2 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1]} log:false match:ip4.src == {$a1822410377753831280} && outport == @a14627396333488653719 meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:allow-ingress-to-foo4-from-foo2:Ingress:0]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {e7835318-3717-45e2-9716-ef3f8c236f51}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]
I0711 07:51:35.777707      58 model_client.go:372] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[direction:Ingress gress-index:0 ip-block-index:-1 k8s.ovn.org/id:default-network-controller:NetworkPolicy:surya5:allow-ingress-to-foo4-from-surya5:Ingress:0:-1:-1 k8s.ovn.org/name:surya5:allow-ingress-to-foo4-from-surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1]} log:false match:ip4.src == {$a15450058810467113962} && outport == @a3548240021545986166 meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:allow-ingress-to-foo4-from-surya5:Ingress:0]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {dda99940-ee99-4cf9-909c-56f4a1b2dd63}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]
I0711 07:51:35.777847      58 model_client.go:372] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Ingress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {28c394e2-13d8-4e66-9c8f-f2e288cd0f36}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]
I0711 07:51:35.777997      58 model_client.go:372] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow-related direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Egress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {3bdd27b4-90fc-41c7-ae78-0405af3e0eb3}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]
I0711 07:51:35.778138      58 model_client.go:372] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Ingress:arpAllow k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:arpAllow]} log:false match:outport == @a11718373952692888238_ingressDefaultDeny && (arp || nd) meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:Ingress]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {f4ab3018-8beb-4ab6-9fd2-b655247f48ac}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]
I0711 07:51:35.778277      58 model_client.go:372] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Egress:arpAllow k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:arpAllow]} log:false match:inport == @a11718373952692888238_egressDefaultDeny && (arp || nd) meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:Egress]} options:{GoMap:map[apply-after-lb:true]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {884c23b8-5789-4d72-b184-28914f0bb0b2}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]
I0711 07:51:35.778418      58 model_client.go:372] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[ip:10.244.2.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-control-plane:10.244.2.2 k8s.ovn.org/name:ovn-control-plane k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]} log:false match:ip4.src==10.244.2.2 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {b1475e2d-aedb-4018-9a07-34179cb72ad9}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]
I0711 07:51:35.778596      58 model_client.go:372] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[ip:10.244.1.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-worker2:10.244.1.2 k8s.ovn.org/name:ovn-worker2 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]} log:false match:ip4.src==10.244.1.2 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {1a84986b-d2a3-4cad-9d3d-3121a24d9f1c}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]
I0711 07:51:35.778747      58 model_client.go:372] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[ip:10.244.0.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-worker:10.244.0.2 k8s.ovn.org/name:ovn-worker k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]} log:false match:ip4.src==10.244.0.2 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {9c36a2f5-8486-42d9-b79c-af79e78ff3d4}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]
I0711 07:51:35.778883      58 model_client.go:372] Update operations generated as: [{Op:update Table:ACL Row:map[action:drop direction:to-lport external_ids:{GoMap:map[k8s.ovn.org/id:default-network-controller:EgressFirewall:default:1 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressFirewall rule-index:1]} log:false match:(ip4.dst == 0.0.0.0/0 && ip4.dst != 10.244.0.0/16) && ip4.src == $a4322231855293774466 meter:{GoSet:[acl-logging]} name:{GoSet:[EF:default:1]} options:{GoMap:map[]} priority:9999 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {6eed1ca4-db38-41cf-a2a3-076897535a06}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]
I0711 07:51:35.779026      58 model_client.go:372] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow direction:to-lport external_ids:{GoMap:map[k8s.ovn.org/id:default-network-controller:EgressFirewall:default:0 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressFirewall rule-index:0]} log:false match:(ip4.dst == 172.25.75.11/32) && ip4.src == $a4322231855293774466 && ((tcp && ( tcp.dst == 8888 ))) meter:{GoSet:[acl-logging]} name:{GoSet:[EF:default:0]} options:{GoMap:map[]} priority:10000 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {48a8cb2c-2046-4347-87a3-356856e21536}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]
I0711 07:51:35.779061      58 transact.go:41] Configuring OVN: [{Op:update Table:ACL Row:map[action:drop direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Egress:defaultDeny k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:defaultDeny]} log:false match:inport == @a11718373952692888238_egressDefaultDeny meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:Egress]} options:{GoMap:map[apply-after-lb:true]} priority:1000 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {b32fe188-bb11-49cf-bf39-0cfc7361e3c9}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:drop direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Ingress:defaultDeny k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:defaultDeny]} log:false match:outport == @a11718373952692888238_ingressDefaultDeny meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:Ingress]} options:{GoMap:map[]} priority:1000 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {9fa36da4-8cb4-4286-a00e-ba62fa5c80d7}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[direction:Ingress gress-index:0 ip-block-index:-1 k8s.ovn.org/id:default-network-controller:NetworkPolicy:surya5:allow-ingress-to-foo4-from-foo2:Ingress:0:-1:-1 k8s.ovn.org/name:surya5:allow-ingress-to-foo4-from-foo2 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1]} log:false match:ip4.src == {$a1822410377753831280} && outport == @a14627396333488653719 meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:allow-ingress-to-foo4-from-foo2:Ingress:0]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {e7835318-3717-45e2-9716-ef3f8c236f51}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[direction:Ingress gress-index:0 ip-block-index:-1 k8s.ovn.org/id:default-network-controller:NetworkPolicy:surya5:allow-ingress-to-foo4-from-surya5:Ingress:0:-1:-1 k8s.ovn.org/name:surya5:allow-ingress-to-foo4-from-surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1]} log:false match:ip4.src == {$a15450058810467113962} && outport == @a3548240021545986166 meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:allow-ingress-to-foo4-from-surya5:Ingress:0]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {dda99940-ee99-4cf9-909c-56f4a1b2dd63}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Ingress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} 
name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {28c394e2-13d8-4e66-9c8f-f2e288cd0f36}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Egress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {3bdd27b4-90fc-41c7-ae78-0405af3e0eb3}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Ingress:arpAllow k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:arpAllow]} log:false match:outport == @a11718373952692888238_ingressDefaultDeny && (arp || nd) meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:Ingress]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {f4ab3018-8beb-4ab6-9fd2-b655247f48ac}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Egress:arpAllow k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:arpAllow]} log:false match:inport == @a11718373952692888238_egressDefaultDeny && (arp || nd) meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:Egress]} options:{GoMap:map[apply-after-lb:true]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {884c23b8-5789-4d72-b184-28914f0bb0b2}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[ip:10.244.2.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-control-plane:10.244.2.2 k8s.ovn.org/name:ovn-control-plane k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]} log:false match:ip4.src==10.244.2.2 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {b1475e2d-aedb-4018-9a07-34179cb72ad9}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[ip:10.244.1.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-worker2:10.244.1.2 k8s.ovn.org/name:ovn-worker2 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]} log:false match:ip4.src==10.244.1.2 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {1a84986b-d2a3-4cad-9d3d-3121a24d9f1c}] Until: Durable:<nil> 
Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[ip:10.244.0.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-worker:10.244.0.2 k8s.ovn.org/name:ovn-worker k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]} log:false match:ip4.src==10.244.0.2 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {9c36a2f5-8486-42d9-b79c-af79e78ff3d4}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:drop direction:to-lport external_ids:{GoMap:map[k8s.ovn.org/id:default-network-controller:EgressFirewall:default:1 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressFirewall rule-index:1]} log:false match:(ip4.dst == 0.0.0.0/0 && ip4.dst != 10.244.0.0/16) && ip4.src == $a4322231855293774466 meter:{GoSet:[acl-logging]} name:{GoSet:[EF:default:1]} options:{GoMap:map[]} priority:9999 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {6eed1ca4-db38-41cf-a2a3-076897535a06}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow direction:to-lport external_ids:{GoMap:map[k8s.ovn.org/id:default-network-controller:EgressFirewall:default:0 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressFirewall rule-index:0]} log:false match:(ip4.dst == 172.25.75.11/32) && ip4.src == $a4322231855293774466 && ((tcp && ( tcp.dst == 8888 ))) meter:{GoSet:[acl-logging]} name:{GoSet:[EF:default:0]} options:{GoMap:map[]} priority:10000 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {48a8cb2c-2046-4347-87a3-356856e21536}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]
I0711 07:51:35.779705      58 client.go:783]  "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:ACL Row:map[action:drop direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Egress:defaultDeny k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:defaultDeny]} log:false match:inport == @a11718373952692888238_egressDefaultDeny meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:Egress]} options:{GoMap:map[apply-after-lb:true]} priority:1000 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {b32fe188-bb11-49cf-bf39-0cfc7361e3c9}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:drop direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Ingress:defaultDeny k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:defaultDeny]} log:false match:outport == @a11718373952692888238_ingressDefaultDeny meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:Ingress]} options:{GoMap:map[]} priority:1000 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {9fa36da4-8cb4-4286-a00e-ba62fa5c80d7}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[direction:Ingress gress-index:0 ip-block-index:-1 k8s.ovn.org/id:default-network-controller:NetworkPolicy:surya5:allow-ingress-to-foo4-from-foo2:Ingress:0:-1:-1 k8s.ovn.org/name:surya5:allow-ingress-to-foo4-from-foo2 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1]} log:false match:ip4.src == {$a1822410377753831280} && outport == @a14627396333488653719 meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:allow-ingress-to-foo4-from-foo2:Ingress:0]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {e7835318-3717-45e2-9716-ef3f8c236f51}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[direction:Ingress gress-index:0 ip-block-index:-1 k8s.ovn.org/id:default-network-controller:NetworkPolicy:surya5:allow-ingress-to-foo4-from-surya5:Ingress:0:-1:-1 k8s.ovn.org/name:surya5:allow-ingress-to-foo4-from-surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1]} log:false match:ip4.src == {$a15450058810467113962} && outport == @a3548240021545986166 meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:allow-ingress-to-foo4-from-surya5:Ingress:0]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {dda99940-ee99-4cf9-909c-56f4a1b2dd63}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Ingress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false 
match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {28c394e2-13d8-4e66-9c8f-f2e288cd0f36}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Egress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {3bdd27b4-90fc-41c7-ae78-0405af3e0eb3}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Ingress:arpAllow k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:arpAllow]} log:false match:outport == @a11718373952692888238_ingressDefaultDeny && (arp || nd) meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:Ingress]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {f4ab3018-8beb-4ab6-9fd2-b655247f48ac}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Egress:arpAllow k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:arpAllow]} log:false match:inport == @a11718373952692888238_egressDefaultDeny && (arp || nd) meter:{GoSet:[acl-logging]} name:{GoSet:[NP:surya5:Egress]} options:{GoMap:map[apply-after-lb:true]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {884c23b8-5789-4d72-b184-28914f0bb0b2}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[ip:10.244.2.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-control-plane:10.244.2.2 k8s.ovn.org/name:ovn-control-plane k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]} log:false match:ip4.src==10.244.2.2 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {b1475e2d-aedb-4018-9a07-34179cb72ad9}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[ip:10.244.1.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-worker2:10.244.1.2 k8s.ovn.org/name:ovn-worker2 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]} log:false match:ip4.src==10.244.1.2 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == 
{1a84986b-d2a3-4cad-9d3d-3121a24d9f1c}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[ip:10.244.0.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-worker:10.244.0.2 k8s.ovn.org/name:ovn-worker k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]} log:false match:ip4.src==10.244.0.2 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {9c36a2f5-8486-42d9-b79c-af79e78ff3d4}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:drop direction:to-lport external_ids:{GoMap:map[k8s.ovn.org/id:default-network-controller:EgressFirewall:default:1 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressFirewall rule-index:1]} log:false match:(ip4.dst == 0.0.0.0/0 && ip4.dst != 10.244.0.0/16) && ip4.src == $a4322231855293774466 meter:{GoSet:[acl-logging]} name:{GoSet:[EF:default:1]} options:{GoMap:map[]} priority:9999 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {6eed1ca4-db38-41cf-a2a3-076897535a06}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow direction:to-lport external_ids:{GoMap:map[k8s.ovn.org/id:default-network-controller:EgressFirewall:default:0 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressFirewall rule-index:0]} log:false match:(ip4.dst == 172.25.75.11/32) && ip4.src == $a4322231855293774466 && ((tcp && ( tcp.dst == 8888 ))) meter:{GoSet:[acl-logging]} name:{GoSet:[EF:default:0]} options:{GoMap:map[]} priority:10000 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {48a8cb2c-2046-4347-87a3-356856e21536}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]"
I0711 07:51:35.780934      58 cache.go:1028] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="1a84986b-d2a3-4cad-9d3d-3121a24d9f1c"
I0711 07:51:35.780997      58 cache.go:1069] cache "msg"="updated row" "database"="OVN_Northbound" "new"="&{UUID:1a84986b-d2a3-4cad-9d3d-3121a24d9f1c Action:allow-related Direction:to-lport ExternalIDs:map[ip:10.244.1.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-worker2:10.244.1.2 k8s.ovn.org/name:ovn-worker2 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode] Label:0 Log:false Match:ip4.src==10.244.1.2 Meter:0xc00092e470 Name:<nil> Options:map[] Priority:1001 Severity:<nil> Tier:2}" "old"="&{UUID:1a84986b-d2a3-4cad-9d3d-3121a24d9f1c Action:allow-related Direction:to-lport ExternalIDs:map[ip:10.244.1.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-worker2:10.244.1.2 k8s.ovn.org/name:ovn-worker2 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode] Label:0 Log:false Match:ip4.src==10.244.1.2 Meter:0xc00092e480 Name:<nil> Options:map[] Priority:1001 Severity:<nil> Tier:0}" "table"="ACL" "uuid"="1a84986b-d2a3-4cad-9d3d-3121a24d9f1c"
I0711 07:51:35.781014      58 cache.go:1028] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="28c394e2-13d8-4e66-9c8f-f2e288cd0f36"
I0711 07:51:35.781063      58 cache.go:1069] cache "msg"="updated row" "database"="OVN_Northbound" "new"="&{UUID:28c394e2-13d8-4e66-9c8f-f2e288cd0f36 Action:allow-related Direction:to-lport ExternalIDs:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Ingress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault] Label:0 Log:false Match:ip4.src == 169.254.169.5 Meter:0xc00092e7e0 Name:<nil> Options:map[] Priority:1001 Severity:<nil> Tier:2}" "old"="&{UUID:28c394e2-13d8-4e66-9c8f-f2e288cd0f36 Action:allow-related Direction:to-lport ExternalIDs:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Ingress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault] Label:0 Log:false Match:ip4.src == 169.254.169.5 Meter:0xc00092e7f0 Name:<nil> Options:map[] Priority:1001 Severity:<nil> Tier:0}" "table"="ACL" "uuid"="28c394e2-13d8-4e66-9c8f-f2e288cd0f36"
I0711 07:51:35.781078      58 cache.go:1028] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="48a8cb2c-2046-4347-87a3-356856e21536"
I0711 07:51:35.781120      58 cache.go:1069] cache "msg"="updated row" "database"="OVN_Northbound" "new"="&{UUID:48a8cb2c-2046-4347-87a3-356856e21536 Action:allow Direction:to-lport ExternalIDs:map[k8s.ovn.org/id:default-network-controller:EgressFirewall:default:0 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressFirewall rule-index:0] Label:0 Log:false Match:(ip4.dst == 172.25.75.11/32) && ip4.src == $a4322231855293774466 && ((tcp && ( tcp.dst == 8888 ))) Meter:0xc00092eb50 Name:0xc00092eb60 Options:map[] Priority:10000 Severity:<nil> Tier:2}" "old"="&{UUID:48a8cb2c-2046-4347-87a3-356856e21536 Action:allow Direction:to-lport ExternalIDs:map[k8s.ovn.org/id:default-network-controller:EgressFirewall:default:0 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressFirewall rule-index:0] Label:0 Log:false Match:(ip4.dst == 172.25.75.11/32) && ip4.src == $a4322231855293774466 && ((tcp && ( tcp.dst == 8888 ))) Meter:0xc00092eb70 Name:0xc00092eb80 Options:map[] Priority:10000 Severity:<nil> Tier:0}" "table"="ACL" "uuid"="48a8cb2c-2046-4347-87a3-356856e21536"
I0711 07:51:35.781138      58 cache.go:1028] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="884c23b8-5789-4d72-b184-28914f0bb0b2"
I0711 07:51:35.781179      58 cache.go:1069] cache "msg"="updated row" "database"="OVN_Northbound" "new"="&{UUID:884c23b8-5789-4d72-b184-28914f0bb0b2 Action:allow Direction:from-lport ExternalIDs:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Egress:arpAllow k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:arpAllow] Label:0 Log:false Match:inport == @a11718373952692888238_egressDefaultDeny && (arp || nd) Meter:0xc00092eef0 Name:0xc00092ef00 Options:map[apply-after-lb:true] Priority:1001 Severity:<nil> Tier:2}" "old"="&{UUID:884c23b8-5789-4d72-b184-28914f0bb0b2 Action:allow Direction:from-lport ExternalIDs:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Egress:arpAllow k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:arpAllow] Label:0 Log:false Match:inport == @a11718373952692888238_egressDefaultDeny && (arp || nd) Meter:0xc00092ef10 Name:0xc00092ef20 Options:map[apply-after-lb:true] Priority:1001 Severity:<nil> Tier:0}" "table"="ACL" "uuid"="884c23b8-5789-4d72-b184-28914f0bb0b2"
I0711 07:51:35.781194      58 cache.go:1028] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="9c36a2f5-8486-42d9-b79c-af79e78ff3d4"
I0711 07:51:35.781233      58 cache.go:1069] cache "msg"="updated row" "database"="OVN_Northbound" "new"="&{UUID:9c36a2f5-8486-42d9-b79c-af79e78ff3d4 Action:allow-related Direction:to-lport ExternalIDs:map[ip:10.244.0.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-worker:10.244.0.2 k8s.ovn.org/name:ovn-worker k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode] Label:0 Log:false Match:ip4.src==10.244.0.2 Meter:0xc00092f310 Name:<nil> Options:map[] Priority:1001 Severity:<nil> Tier:2}" "old"="&{UUID:9c36a2f5-8486-42d9-b79c-af79e78ff3d4 Action:allow-related Direction:to-lport ExternalIDs:map[ip:10.244.0.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-worker:10.244.0.2 k8s.ovn.org/name:ovn-worker k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode] Label:0 Log:false Match:ip4.src==10.244.0.2 Meter:0xc00092f320 Name:<nil> Options:map[] Priority:1001 Severity:<nil> Tier:0}" "table"="ACL" "uuid"="9c36a2f5-8486-42d9-b79c-af79e78ff3d4"
I0711 07:51:35.781254      58 cache.go:1028] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="e7835318-3717-45e2-9716-ef3f8c236f51"
I0711 07:51:35.781301      58 cache.go:1069] cache "msg"="updated row" "database"="OVN_Northbound" "new"="&{UUID:e7835318-3717-45e2-9716-ef3f8c236f51 Action:allow-related Direction:to-lport ExternalIDs:map[direction:Ingress gress-index:0 ip-block-index:-1 k8s.ovn.org/id:default-network-controller:NetworkPolicy:surya5:allow-ingress-to-foo4-from-foo2:Ingress:0:-1:-1 k8s.ovn.org/name:surya5:allow-ingress-to-foo4-from-foo2 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1] Label:0 Log:false Match:ip4.src == {$a1822410377753831280} && outport == @a14627396333488653719 Meter:0xc00092f680 Name:0xc00092f690 Options:map[] Priority:1001 Severity:<nil> Tier:2}" "old"="&{UUID:e7835318-3717-45e2-9716-ef3f8c236f51 Action:allow-related Direction:to-lport ExternalIDs:map[direction:Ingress gress-index:0 ip-block-index:-1 k8s.ovn.org/id:default-network-controller:NetworkPolicy:surya5:allow-ingress-to-foo4-from-foo2:Ingress:0:-1:-1 k8s.ovn.org/name:surya5:allow-ingress-to-foo4-from-foo2 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1] Label:0 Log:false Match:ip4.src == {$a1822410377753831280} && outport == @a14627396333488653719 Meter:0xc00092f6a0 Name:0xc00092f6b0 Options:map[] Priority:1001 Severity:<nil> Tier:0}" "table"="ACL" "uuid"="e7835318-3717-45e2-9716-ef3f8c236f51"
I0711 07:51:35.781315      58 cache.go:1028] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="f4ab3018-8beb-4ab6-9fd2-b655247f48ac"
I0711 07:51:35.781351      58 cache.go:1069] cache "msg"="updated row" "database"="OVN_Northbound" "new"="&{UUID:f4ab3018-8beb-4ab6-9fd2-b655247f48ac Action:allow Direction:to-lport ExternalIDs:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Ingress:arpAllow k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:arpAllow] Label:0 Log:false Match:outport == @a11718373952692888238_ingressDefaultDeny && (arp || nd) Meter:0xc00092fae0 Name:0xc00092faf0 Options:map[] Priority:1001 Severity:<nil> Tier:2}" "old"="&{UUID:f4ab3018-8beb-4ab6-9fd2-b655247f48ac Action:allow Direction:to-lport ExternalIDs:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Ingress:arpAllow k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:arpAllow] Label:0 Log:false Match:outport == @a11718373952692888238_ingressDefaultDeny && (arp || nd) Meter:0xc00092fb00 Name:0xc00092fb10 Options:map[] Priority:1001 Severity:<nil> Tier:0}" "table"="ACL" "uuid"="f4ab3018-8beb-4ab6-9fd2-b655247f48ac"
I0711 07:51:35.781363      58 cache.go:1028] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="3bdd27b4-90fc-41c7-ae78-0405af3e0eb3"
I0711 07:51:35.781407      58 cache.go:1069] cache "msg"="updated row" "database"="OVN_Northbound" "new"="&{UUID:3bdd27b4-90fc-41c7-ae78-0405af3e0eb3 Action:allow-related Direction:from-lport ExternalIDs:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Egress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault] Label:0 Log:false Match:ip4.src == 169.254.169.5 Meter:0xc00092fec0 Name:<nil> Options:map[apply-after-lb:true] Priority:1001 Severity:<nil> Tier:2}" "old"="&{UUID:3bdd27b4-90fc-41c7-ae78-0405af3e0eb3 Action:allow-related Direction:from-lport ExternalIDs:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Egress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault] Label:0 Log:false Match:ip4.src == 169.254.169.5 Meter:0xc00092fed0 Name:<nil> Options:map[apply-after-lb:true] Priority:1001 Severity:<nil> Tier:0}" "table"="ACL" "uuid"="3bdd27b4-90fc-41c7-ae78-0405af3e0eb3"
I0711 07:51:35.781427      58 cache.go:1028] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="6eed1ca4-db38-41cf-a2a3-076897535a06"
I0711 07:51:35.781502      58 cache.go:1069] cache "msg"="updated row" "database"="OVN_Northbound" "new"="&{UUID:6eed1ca4-db38-41cf-a2a3-076897535a06 Action:drop Direction:to-lport ExternalIDs:map[k8s.ovn.org/id:default-network-controller:EgressFirewall:default:1 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressFirewall rule-index:1] Label:0 Log:false Match:(ip4.dst == 0.0.0.0/0 && ip4.dst != 10.244.0.0/16) && ip4.src == $a4322231855293774466 Meter:0xc000a12270 Name:0xc000a12280 Options:map[] Priority:9999 Severity:<nil> Tier:2}" "old"="&{UUID:6eed1ca4-db38-41cf-a2a3-076897535a06 Action:drop Direction:to-lport ExternalIDs:map[k8s.ovn.org/id:default-network-controller:EgressFirewall:default:1 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressFirewall rule-index:1] Label:0 Log:false Match:(ip4.dst == 0.0.0.0/0 && ip4.dst != 10.244.0.0/16) && ip4.src == $a4322231855293774466 Meter:0xc000a12290 Name:0xc000a122a0 Options:map[] Priority:9999 Severity:<nil> Tier:0}" "table"="ACL" "uuid"="6eed1ca4-db38-41cf-a2a3-076897535a06"
I0711 07:51:35.781519      58 cache.go:1028] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="9fa36da4-8cb4-4286-a00e-ba62fa5c80d7"
I0711 07:51:35.781562      58 cache.go:1069] cache "msg"="updated row" "database"="OVN_Northbound" "new"="&{UUID:9fa36da4-8cb4-4286-a00e-ba62fa5c80d7 Action:drop Direction:to-lport ExternalIDs:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Ingress:defaultDeny k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:defaultDeny] Label:0 Log:false Match:outport == @a11718373952692888238_ingressDefaultDeny Meter:0xc000a12610 Name:0xc000a12620 Options:map[] Priority:1000 Severity:<nil> Tier:2}" "old"="&{UUID:9fa36da4-8cb4-4286-a00e-ba62fa5c80d7 Action:drop Direction:to-lport ExternalIDs:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Ingress:defaultDeny k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:defaultDeny] Label:0 Log:false Match:outport == @a11718373952692888238_ingressDefaultDeny Meter:0xc000a12630 Name:0xc000a12640 Options:map[] Priority:1000 Severity:<nil> Tier:0}" "table"="ACL" "uuid"="9fa36da4-8cb4-4286-a00e-ba62fa5c80d7"
I0711 07:51:35.781582      58 cache.go:1028] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="b1475e2d-aedb-4018-9a07-34179cb72ad9"
I0711 07:51:35.781625      58 cache.go:1069] cache "msg"="updated row" "database"="OVN_Northbound" "new"="&{UUID:b1475e2d-aedb-4018-9a07-34179cb72ad9 Action:allow-related Direction:to-lport ExternalIDs:map[ip:10.244.2.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-control-plane:10.244.2.2 k8s.ovn.org/name:ovn-control-plane k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode] Label:0 Log:false Match:ip4.src==10.244.2.2 Meter:0xc000a129f0 Name:<nil> Options:map[] Priority:1001 Severity:<nil> Tier:2}" "old"="&{UUID:b1475e2d-aedb-4018-9a07-34179cb72ad9 Action:allow-related Direction:to-lport ExternalIDs:map[ip:10.244.2.2 k8s.ovn.org/id:default-network-controller:NetpolNode:ovn-control-plane:10.244.2.2 k8s.ovn.org/name:ovn-control-plane k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode] Label:0 Log:false Match:ip4.src==10.244.2.2 Meter:0xc000a12a00 Name:<nil> Options:map[] Priority:1001 Severity:<nil> Tier:0}" "table"="ACL" "uuid"="b1475e2d-aedb-4018-9a07-34179cb72ad9"
I0711 07:51:35.781647      58 cache.go:1028] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="b32fe188-bb11-49cf-bf39-0cfc7361e3c9"
I0711 07:51:35.781693      58 cache.go:1069] cache "msg"="updated row" "database"="OVN_Northbound" "new"="&{UUID:b32fe188-bb11-49cf-bf39-0cfc7361e3c9 Action:drop Direction:from-lport ExternalIDs:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Egress:defaultDeny k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:defaultDeny] Label:0 Log:false Match:inport == @a11718373952692888238_egressDefaultDeny Meter:0xc000a12d60 Name:0xc000a12d70 Options:map[apply-after-lb:true] Priority:1000 Severity:<nil> Tier:2}" "old"="&{UUID:b32fe188-bb11-49cf-bf39-0cfc7361e3c9 Action:drop Direction:from-lport ExternalIDs:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolNamespace:surya5:Egress:defaultDeny k8s.ovn.org/name:surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNamespace type:defaultDeny] Label:0 Log:false Match:inport == @a11718373952692888238_egressDefaultDeny Meter:0xc000a12d80 Name:0xc000a12d90 Options:map[apply-after-lb:true] Priority:1000 Severity:<nil> Tier:0}" "table"="ACL" "uuid"="b32fe188-bb11-49cf-bf39-0cfc7361e3c9"
I0711 07:51:35.781709      58 cache.go:1028] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="dda99940-ee99-4cf9-909c-56f4a1b2dd63"
I0711 07:51:35.781757      58 cache.go:1069] cache "msg"="updated row" "database"="OVN_Northbound" "new"="&{UUID:dda99940-ee99-4cf9-909c-56f4a1b2dd63 Action:allow-related Direction:to-lport ExternalIDs:map[direction:Ingress gress-index:0 ip-block-index:-1 k8s.ovn.org/id:default-network-controller:NetworkPolicy:surya5:allow-ingress-to-foo4-from-surya5:Ingress:0:-1:-1 k8s.ovn.org/name:surya5:allow-ingress-to-foo4-from-surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1] Label:0 Log:false Match:ip4.src == {$a15450058810467113962} && outport == @a3548240021545986166 Meter:0xc000a13180 Name:0xc000a13190 Options:map[] Priority:1001 Severity:<nil> Tier:2}" "old"="&{UUID:dda99940-ee99-4cf9-909c-56f4a1b2dd63 Action:allow-related Direction:to-lport ExternalIDs:map[direction:Ingress gress-index:0 ip-block-index:-1 k8s.ovn.org/id:default-network-controller:NetworkPolicy:surya5:allow-ingress-to-foo4-from-surya5:Ingress:0:-1:-1 k8s.ovn.org/name:surya5:allow-ingress-to-foo4-from-surya5 k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetworkPolicy port-policy-index:-1] Label:0 Log:false Match:ip4.src == {$a15450058810467113962} && outport == @a3548240021545986166 Meter:0xc000a131a0 Name:0xc000a131b0 Options:map[] Priority:1001 Severity:<nil> Tier:0}" "table"="ACL" "uuid"="dda99940-ee99-4cf9-909c-56f4a1b2dd63"
I0711 07:51:35.781806      58 acl_sync.go:187] Updating tier's of all ACLs in cluster took 5.491447ms

it's working as expected...

tssurya (Member Author)

Used the nice batching logic Nadia added for previous updates, so up to 20K ACLs go in a single transact; a simplified sketch of the whole sync step follows the snippet below.

// for default deny is in tier0 while 1001 ACL for allowing traffic is in tier2 for a given namespace network policy).
// NOTE: This is a one-time operation as no ACLs should ever be created in types.PlaceHolderACLTier moving forward.
// Fetch all ACLs in types.PlaceHolderACLTier (Tier0); update their Tier to 2 and batch the ACL update.
klog.Info("Updating Tier of existing ACLs...")
Contributor

Any chance to have a test for this?

@tssurya (Member Author) Jul 12, 2023

So we have tests in go-controller/pkg/ovn/external_ids_syncer/acl/acl_sync_test.go; specifically here: b53b9f7#diff-fbf87e9a26d9865f6ce3881764044151ed71700db9c0440b1c527acba89024f9R451 and b53b9f7#diff-fbf87e9a26d9865f6ce3881764044151ed71700db9c0440b1c527acba89024f9R593 -> those tests start with placeholder ACLs, move them to tier2, and verify the final state is tier2, which covers this specific code path.
Did you have something else in mind?

Also, I tried to write a test to check that lower priorities are moved before higher ones, but I didn't have any bright ideas there. Maybe I need to create something like 40K ACLs and check that the first batch all has lower priorities, but that would be hard to test when calling syncACLs. Let me know what you think; one possible shape is sketched below.
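For what it's worth, one hypothetical shape for such a test, building on the simplified sketch above (not the actual test code): stub the transact callback, record each committed batch, and assert that priorities never decrease across batches.

```go
package main

import (
	"fmt"
	"testing"
)

// Hypothetical test sketch reusing the simplified acl type and
// migrateACLTiers from the sketch above: record each committed batch and
// fail if a lower-priority ACL is ever migrated after a higher-priority one.
func TestMigrateACLTiersOrdersByPriority(t *testing.T) {
	acls := make([]*acl, 0, 40000)
	for i := 0; i < 40000; i++ {
		acls = append(acls, &acl{uuid: fmt.Sprintf("u%d", i), priority: 30000 - i%20000})
	}
	var batches [][]*acl
	if err := migrateACLTiers(acls, func(batch []*acl) error {
		batches = append(batches, batch)
		return nil
	}); err != nil {
		t.Fatal(err)
	}
	highestSeen := -1
	for _, batch := range batches {
		for _, a := range batch {
			if a.priority < highestSeen {
				t.Fatalf("ACL %s (priority %d) migrated after a higher-priority ACL", a.uuid, a.priority)
			}
			highestSeen = a.priority
		}
	}
}
```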


tssurya commented Jul 13, 2023

(latest push is just a rebase to master... no changes)


tssurya commented Jul 13, 2023

The multi-homing lane failure seems like a flake:

2023-07-13T07:55:35.8853597Z • [SLOW TEST:14.357 seconds]
2023-07-13T07:55:35.8853880Z Multi Homing
2023-07-13T07:55:35.8854320Z /home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/multihoming.go:27
2023-07-13T07:55:35.8854855Z   multiple pods connected to the same OVN-K secondary network
2023-07-13T07:55:35.8945882Z   /home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/multihoming.go:262
2023-07-13T07:55:35.8946326Z     can communicate over the secondary network
2023-07-13T07:55:35.8947197Z     /home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/multihoming.go:273
2023-07-13T07:55:35.8948061Z       can communicate over an localnet secondary network without IPAM when the pods are scheduled on different nodes, with static IPs configured via network selection elements
2023-07-13T07:55:35.8948833Z       /home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/multihoming.go:589
2023-07-13T07:55:35.8949228Z ------------------------------
2023-07-13T07:55:35.8949587Z SSSSSS
2023-07-13T07:55:35.8949906Z ------------------------------
2023-07-13T07:55:35.8950549Z Multi Homing multiple pods connected to the same OVN-K secondary network multi-network policies multi-network policies configure traffic allow lists
2023-07-13T07:55:35.8951273Z   for a routed topology when the multi-net policy describes the allow-list using pod selectors
2023-07-13T07:55:35.8951858Z   /home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/multihoming.go:819
2023-07-13T07:55:35.8952166Z [BeforeEach] Multi Homing
2023-07-13T07:55:35.8952580Z   /home/runner/go/pkg/mod/k8s.io/[email protected]/test/e2e/framework/framework.go:187
2023-07-13T07:55:35.8952972Z STEP: Creating a kubernetes client
2023-07-13T07:55:35.8953296Z Jul 13 07:55:35.884: INFO: >>> kubeConfig: /home/runner/ovn.conf
2023-07-13T07:55:35.8953698Z STEP: Building a namespace api object, basename multi-homing
2023-07-13T07:55:35.9129728Z STEP: Waiting for a default service account to be provisioned in namespace
2023-07-13T07:55:35.9207008Z STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
2023-07-13T07:55:35.9224066Z [BeforeEach] Multi Homing
2023-07-13T07:55:35.9225082Z   /home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/multihoming.go:49
2023-07-13T07:55:35.9225952Z [BeforeEach] multi-network policies
2023-07-13T07:55:35.9226544Z   /home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/multihoming.go:663
2023-07-13T07:55:35.9390470Z [It] for a routed topology when the multi-net policy describes the allow-list using pod selectors
2023-07-13T07:55:35.9391148Z   /home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/multihoming.go:819
2023-07-13T07:55:35.9391692Z STEP: creating the attachment configuration for namespace "multi-homing-4538"
2023-07-13T07:55:35.9451464Z STEP: creating the attachment configuration for namespace "pepensnp9"
2023-07-13T07:55:35.9580310Z STEP: instantiating the "multi-homing-4538/tinypod" pod
2023-07-13T07:55:36.0029361Z STEP: asserting the pod reaches the `Ready` state
2023-07-13T07:57:36.0045961Z Jul 13 07:57:36.003: FAIL: Timed out after 120.001s.
2023-07-13T07:57:36.0056907Z Expected
2023-07-13T07:57:36.0057202Z     <v1.PodPhase>: Pending
2023-07-13T07:57:36.0057437Z to equal
2023-07-13T07:57:36.0057684Z     <v1.PodPhase>: Running
2023-07-13T07:57:36.0057808Z 
2023-07-13T07:57:36.0057921Z Full Stack Trace
2023-07-13T07:57:36.0058667Z github.com/ovn-org/ovn-kubernetes/test/e2e.glob..func21.3.2.3({{0xc00078b620, 0x10}, {0x0, 0x0, 0x0}, {0x0, 0x0}, {0x1cf9369, 0xb}, {0x1cf0de6, ...}, ...}, ...)
2023-07-13T07:57:36.0059282Z 	/home/runner/work/ovn-kubernetes/ovn-kubernetes/test/e2e/multihoming.go:744 +0x63c
2023-07-13T07:57:36.0059692Z reflect.Value.call({0x1a4ff40?, 0xc000948c30?, 0x0?}, {0x1cee282, 0x4}, {0xc000476700, 0x5, 0x0?})
2023-07-13T07:57:36.0060100Z 	/opt/hostedtoolcache/go/1.19.6/x64/src/reflect/value.go:584 +0x8c5
2023-07-13T07:57:36.0060475Z reflect.Value.Call({0x1a4ff40?, 0xc000948c30?, 0x0?}, {0xc000476700?, 0x0?, 0x0?})
2023-07-13T07:57:36.0061940Z 	/opt/hostedtoolcache/go/1.19.6/x64/src/reflect/value.go:368 +0xbc
2023-07-13T07:57:36.0062458Z github.com/onsi/ginkgo/extensions/table.TableEntry.generateIt.func2()
2023-07-13T07:57:36.0063070Z 	/home/runner/go/pkg/mod/github.com/onsi/[email protected]/extensions/table/table_entry.go:50 +0x31
2023-07-13T07:57:36.0063995Z github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0007c72f0?)
2023-07-13T07:57:36.0064602Z 	/home/runner/go/pkg/mod/github.com/onsi/[email protected]/internal/leafnodes/runner.go:113 +0xb1
2023-07-13T07:57:36.0064998Z github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0007c75a0?)
2023-07-13T07:57:36.0076020Z 	/home/runner/go/pkg/mod/github.com/onsi/[email protected]/internal/leafnodes/runner.go:64 +0x125

Saving the link; will open an issue if I see it again: https://github.com/ovn-org/ovn-kubernetes/actions/runs/5540252630/jobs/10112572522?pr=3645

This commit bumps the OVN DB schema to
the new OVN release. In particular, we
want to bring in the tiered ACLs construct
to lay the groundwork for ANPs.

Signed-off-by: Surya Seetharaman <[email protected]>
OVN has introduced a new feature called
Hierarchical ACLs that enables support
for tiered ACLs. This commit ensures
that from this point on, all ACLs for all
features are created in tier2. By default,
all new ACLs must be added to tier2.

Ensure existing ACLs without tiers are migrated post upgrade

Since the tier column in the NBDB is an int,
the value for this column will default to 0
when the OVN schema upgrade happens.

We want all existing ACLs to move to tier2.
This commit ensures all existing ACLs for
all features are migrated to tier2; the OVNK
controller performs the migration upon
upgrade restart.

Signed-off-by: Surya Seetharaman <[email protected]>
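As a rough illustration of why this migration is needed, here is a simplified, assumed excerpt of the NBDB ACL model (the real one is generated by libovsdb's modelgen; the field set is trimmed here): because tier is a plain int column, every pre-upgrade row reads back as Tier 0, the placeholder value the controller rewrites to 2.

```go
package nbdbsketch

// ACL is a simplified, assumed excerpt of the generated NBDB ACL model
// (trimmed fields). Because tier is a plain int column, rows created
// before the schema upgrade read back with Tier == 0 — the placeholder
// value the controller rewrites to 2 on startup.
type ACL struct {
	UUID        string            `ovsdb:"_uuid"`
	Action      string            `ovsdb:"action"`
	Match       string            `ovsdb:"match"`
	Priority    int               `ovsdb:"priority"`
	Tier        int               `ovsdb:"tier"`
	ExternalIDs map[string]string `ovsdb:"external_ids"`
}
```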

tssurya commented Jul 13, 2023

@jcaamano: shall we merge this? CI is looking good

@jcaamano jcaamano merged commit d000a80 into ovn-org:master Jul 13, 2023
24 of 25 checks passed
@tssurya tssurya added this to the v1.0.0 milestone Mar 11, 2024
@tssurya tssurya added kind/feature All issues/PRs that are new features feature/admin-network-policy labels Mar 12, 2024
@tssurya tssurya linked an issue Mar 26, 2024 that may be closed by this pull request
8 tasks
Labels
feature/admin-network-policy kind/feature All issues/PRs that are new features
Development

Successfully merging this pull request may close these issues.

Implement Admin Network Policy API in OVNK
7 participants