feat: automatically enable Cilium kube-proxy replacement feature #1286
Conversation
This is looking great!
This value is only needed when kubeProxyReplacement is set. It's also there to force a restart of the Cilium Pods.
Since this PR is stacked on #1288: in the (merged) PR #1288 the "skip-kube-proxy" annotation was not added to examples/capi-quick-start/nutanix-cluster-failuredomain-cilium-helm-addon.yaml, but was added to examples/capi-quick-start/nutanix-cluster-cilium-helm-addon.yaml. Is this intentional? Are these example files only used for e2e tests?
Yes, they are mainly used in e2e tests and local dev testing; they are not published.
Not really, I will have a follow-up to fix it in a separate PR.
Do we have an e2e test for upgrades? Can we add an e2e test for this PR's changes when upgrading the cluster?
No, we don't have upgrade tests here yet, and adding them is a bigger effort. It won't be possible to do as part of this PR, but I'm hoping the integration tests are enough. (I've also tested manually.)
{{- with .ControlPlane }}
{{- range $key, $val := .metadata.annotations }}
{{- if eq $key "controlplane.cluster.x-k8s.io/skip-kube-proxy" }}
k8sServiceHost: auto |
Is this safe only when kubeProxyReplacement is true? I thought it was always safe.
I've added it back because, unfortunately, just setting kubeProxyReplacement does not trigger a Pod rollout.
In a follow-up I will remove it if the rest of the config for IP preservation results in new Pods.
Although kubeProxyReplacement is not really necessary with the current config without kube-proxy replacement.
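For reference, a minimal sketch (not the actual handler code; the helper name and exact values shape are assumptions) of the override that the template above effectively renders when the skip-kube-proxy annotation is present:

```go
// Sketch only: assemble the Cilium Helm values layered on top of the defaults
// when the control plane carries the skip-kube-proxy annotation.
package main

import "fmt"

// skipKubeProxyAnnotation matches the key checked in the template above.
const skipKubeProxyAnnotation = "controlplane.cluster.x-k8s.io/skip-kube-proxy"

// ciliumValuesOverride (hypothetical helper) returns the extra values to set
// when kube-proxy is skipped, or nil when the chart defaults should be used.
func ciliumValuesOverride(cpAnnotations map[string]string) map[string]any {
	if _, ok := cpAnnotations[skipKubeProxyAnnotation]; !ok {
		return nil
	}
	return map[string]any{
		// Enable Cilium's kube-proxy replacement.
		"kubeProxyReplacement": true,
		// Also changes the rendered values, which is what currently forces a
		// rollout of the Cilium Pods (see the discussion above).
		"k8sServiceHost": "auto",
	}
}

func main() {
	annotations := map[string]string{skipKubeProxyAnnotation: ""}
	fmt.Println(ciliumValuesOverride(annotations))
}
```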
I do not like this - it's too opaque as to why the necessary Pods are being rolled out. We should have a better-defined trigger for this.
Our plan is to represent kube-proxy replacement in the API. But this makes sense in the meantime.
What problem does this PR solve?:
This PR enables Cilium's kube-proxy replacement feature automatically when clusters disable kube-proxy installation during upgrades.
The Cilium handler will apply the new configuration, wait for the DaemonSet to be rolled out and then delete the kube-proxy DaemonSet and its ConfigMap.
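As a rough illustration of that sequence, here is a sketch using client-go directly (this is not the handler's actual code; the DaemonSet name cilium and the helper names are assumptions):

```go
// Sketch of the described flow: wait for the Cilium DaemonSet rollout, then
// delete the kube-proxy DaemonSet and its ConfigMap in kube-system.
package ciliumskipproxy

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

const kubeSystem = "kube-system"

// waitForCiliumRollout polls the cilium DaemonSet until the updated Pods are
// scheduled and available on every node where the DaemonSet is desired.
func waitForCiliumRollout(ctx context.Context, client kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			ds, err := client.AppsV1().DaemonSets(kubeSystem).Get(ctx, "cilium", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return ds.Status.ObservedGeneration >= ds.Generation &&
				ds.Status.UpdatedNumberScheduled == ds.Status.DesiredNumberScheduled &&
				ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
		})
}

// deleteKubeProxy removes the kube-proxy DaemonSet and ConfigMap, tolerating
// the case where they have already been deleted.
func deleteKubeProxy(ctx context.Context, client kubernetes.Interface) error {
	if err := client.AppsV1().DaemonSets(kubeSystem).Delete(ctx, "kube-proxy", metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	if err := client.CoreV1().ConfigMaps(kubeSystem).Delete(ctx, "kube-proxy", metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	return nil
}
```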
Which issue(s) this PR fixes:
Fixes #
How Has This Been Tested?:
Special notes for your reviewer:
Stacked on #1288