feat: Support for static AzureCNI without overlay networking via generating additional ip configurations #365
base: main
Conversation
/test
pkg/providers/instance/instance.go
```go
type createNICOptions struct {
	NICName           string
	BackendPools      *loadbalancer.BackendAddressPools
	InstanceType      *corecloudprovider.InstanceType
	LaunchTemplate    *launchtemplate.Template
	NetworkPluginMode string
}
```
There probably exists a better pattern for this that splits the population of the creation options from the actual reading of those options. It's a bit messy here because we set some values, like BackendPools, after the fact rather than at initialization.
Creation options as a pattern could also be leveraged more widely, for VM creation as well. This feedback was originally placed here to split out some options rather than just using the context as a global retrieval object.
But restructuring the entire dataflow seems like a lot to tackle within the scope of this PR. It deserves its own refactoring PR.
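As a rough sketch of the "split population from reading" idea discussed above (all names here are hypothetical, not taken from this PR), the functional-options pattern keeps the options immutable once built, so nothing gets set "after the fact":

```go
package main

import "fmt"

// createNICOptions is a trimmed, illustrative version of the real struct.
type createNICOptions struct {
	NICName           string
	NetworkPluginMode string
}

// nicOption mutates options during construction only.
type nicOption func(*createNICOptions)

func withNICName(name string) nicOption {
	return func(o *createNICOptions) { o.NICName = name }
}

func withNetworkPluginMode(mode string) nicOption {
	return func(o *createNICOptions) { o.NetworkPluginMode = mode }
}

// newCreateNICOptions populates all options up front; callers then only read them.
func newCreateNICOptions(opts ...nicOption) *createNICOptions {
	o := &createNICOptions{}
	for _, opt := range opts {
		opt(o)
	}
	return o
}

func main() {
	o := newCreateNICOptions(withNICName("aks-nic-0"), withNetworkPluginMode("overlay"))
	fmt.Println(o.NICName, o.NetworkPluginMode)
}
```

The benefit is exactly the separation the comment asks for: every field is assigned at construction time, and the consumer never needs to know (or care) in what order options were applied.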
Force-pushed from 1413fe8 to 36cc5a2.
/test
/test
…etworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Force-pushed from d2dbd58 to 5d88685.
From what I can tell, in terms of the contract there are two places where NETWORK_PLUGIN is used:

```bash
installNetworkPlugin() {
    if [[ "${NETWORK_PLUGIN}" = "azure" ]]; then
        installAzureCNI
    fi
    installCNI # reference plugins. Mostly for kubenet, but loopback is used by containerd until containerd 2
    rm -rf $CNI_DOWNLOADS_DIR &
}

configureCNIIPTables() {
    if [[ "${NETWORK_PLUGIN}" = "azure" ]]; then
        mv $CNI_BIN_DIR/10-azure.conflist $CNI_CONFIG_DIR/
        chmod 600 $CNI_CONFIG_DIR/10-azure.conflist
        if [[ "${NETWORK_POLICY}" == "calico" ]]; then
            sed -i 's#"mode":"bridge"#"mode":"transparent"#g' $CNI_CONFIG_DIR/10-azure.conflist
        elif [[ "${NETWORK_POLICY}" == "" || "${NETWORK_POLICY}" == "none" ]] && [[ "${NETWORK_MODE}" == "transparent" ]]; then
            sed -i 's#"mode":"bridge"#"mode":"transparent"#g' $CNI_CONFIG_DIR/10-azure.conflist
        fi
        /sbin/ebtables -t nat --list
    fi
}
```

but we need it to support BYO CNI, it seems, based on these two references in AgentBaker.
After adding the NETWORK_POLICY variable, Karpenter should be usable in an underlay network with Azure CNI on AKS.
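To make the AgentBaker contract above concrete, here is a hedged restatement of the decision it encodes: which conflist mode ends up on disk for a given combination of the three environment variables. The function name and the `byocni` label are mine, purely for illustration:

```shell
# Hypothetical sketch of the branching in installNetworkPlugin/configureCNIIPTables:
# returns the effective CNI mode for a given NETWORK_PLUGIN / NETWORK_POLICY / NETWORK_MODE.
resolve_cni_mode() {
    local plugin="$1" policy="$2" mode="$3"
    if [[ "$plugin" != "azure" ]]; then
        echo "byocni"        # AgentBaker installs/configures nothing Azure-specific
    elif [[ "$policy" == "calico" ]]; then
        echo "transparent"   # calico forces the transparent rewrite
    elif [[ ( -z "$policy" || "$policy" == "none" ) && "$mode" == "transparent" ]]; then
        echo "transparent"   # explicit transparent mode, no policy engine
    else
        echo "bridge"        # the default shipped in 10-azure.conflist
    fi
}

resolve_cni_mode azure calico ""       # -> transparent
resolve_cni_mode azure "" transparent  # -> transparent
resolve_cni_mode azure "" bridge       # -> bridge
resolve_cni_mode kubenet "" ""         # -> byocni
```

This is only a model of the two snippets quoted above, not code from AgentBaker itself, but it shows why NETWORK_POLICY (and not just NETWORK_PLUGIN) has to be part of the contract.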
Fixes #367
Description
This PR adds support for Azure CNI without overlay networking, and also introduces some Makefile goodness for creating clusters with this and other CNI configurations.
Why Do We Need Secondary IP Configs For AZ CNI Without Overlay?
When a pod is created, the Azure CNI plugin allocates an IP address from the pool of secondary IP addresses configured on the NIC of the node where the pod is scheduled. The Azure CNI plugin manages the allocation and de-allocation of these IP addresses through the IP Address Manager (IPAM), ensuring each pod receives a unique IP address and tracking the usage of these addresses.
In this setup, pods are assigned IP addresses from the node's subnet, allowing for direct IP connectivity. This enables pods within the same virtual network to communicate without the need for Network Address Translation (NAT). The node's NIC routes traffic to the appropriate pod based on the assigned IP.
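The mechanics above can be sketched as follows. This is a minimal, self-contained illustration (the `ipConfig` struct and function names are hypothetical stand-ins, not this PR's types or the Azure SDK's): for a node that should host up to `maxPods` pods, the NIC needs one primary IP configuration for the node itself plus one secondary IP configuration per potential pod, all drawn from the node subnet:

```go
package main

import "fmt"

// ipConfig is a simplified stand-in for a NIC IP configuration.
type ipConfig struct {
	Name    string
	Primary bool
}

// generateIPConfigs returns the primary IP configuration for the node plus
// maxPods secondary configurations that Azure CNI's IPAM can hand out to pods.
func generateIPConfigs(maxPods int) []ipConfig {
	configs := make([]ipConfig, 0, maxPods+1)
	configs = append(configs, ipConfig{Name: "primary", Primary: true})
	for i := 1; i <= maxPods; i++ {
		configs = append(configs, ipConfig{Name: fmt.Sprintf("ipconfig%d", i), Primary: false})
	}
	return configs
}

func main() {
	// e.g. a node sized for 30 pods needs 31 IP configurations on its NIC
	configs := generateIPConfigs(30)
	fmt.Println(len(configs))
	fmt.Println(configs[0].Primary)
}
```

The practical consequence is that max pods per node is bounded by how many IP configurations the NIC can carry and how much free address space the node subnet has, which is precisely what overlay mode avoids.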
Flow
Each pod is wired up through `veth` pair interfaces that are added to the host network. Learn more about the specifics here.
How was this change tested?
What this PR does not include
Does this change impact docs?
Release Note