CORENET-6232: add PreconfiguredUDNAddresses FG tests #30010
Conversation
This PR is really nice, thanks for that.
I just have some questions, nits, nothing big.
Let's see how this fares on CI. Fingers crossed.
expectedIPs := strings.Split(netConfig.preconfiguredIP, ",")
for _, expectedIP := range expectedIPs {
nit: this is opinionated; couldn't you inline these? Like:
- expectedIPs := strings.Split(netConfig.preconfiguredIP, ",")
- for _, expectedIP := range expectedIPs {
+ for _, expectedIP := range strings.Split(netConfig.preconfiguredIP, ",") {
fixed.
ShouldNot(BeEmpty())
Expect(actualMAC).To(Equal(netConfig.preconfiguredMAC))
wouldn't it make sense to actually mix these two up?
Something like:
- ShouldNot(BeEmpty())
- Expect(actualMAC).To(Equal(netConfig.preconfiguredMAC))
+ Should(Equal(netConfig.preconfiguredMAC))
totally.
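A minimal sketch of the merged assertion, assuming the polling closure and variable names from the surrounding test (they are not the PR's exact code):

// Sketch only: assert the preconfigured MAC directly inside the polling loop,
// instead of polling for non-emptiness and comparing in a separate Expect.
Eventually(func(g Gomega) string {
	mac, err := obtainMAC(virtClient, vmName)
	g.Expect(err).NotTo(HaveOccurred())
	return mac
}).Should(Equal(netConfig.preconfiguredMAC))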
GinkgoHelper()

var err error
postMigrationMAC, err = obtainMAC(virtClient, vmName)
g.Expect(err).NotTo(HaveOccurred(), "Failed to obtain MAC address for VM after migration")
return postMigrationMAC
maybe you can extract this into a helper func? It could be re-used in the other test - AFAIU, the only difference at play is the name of the variable where you put the MAC addr.
DONE
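A minimal sketch of what such a shared helper could look like (the helper name is hypothetical; it wraps the Eventually block quoted above and hands the MAC back to whichever variable the caller uses):

// Sketch only: poll the VMI status until a MAC address shows up, then return it.
func currentMAC(virtClient *kubevirt.Client, vmName string) string {
	GinkgoHelper()
	var mac string
	Eventually(func(g Gomega) string {
		var err error
		mac, err = obtainMAC(virtClient, vmName)
		g.Expect(err).NotTo(HaveOccurred(), "Failed to obtain MAC address for VM")
		return mac
	}).ShouldNot(BeEmpty())
	return mac
}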
func macFromStatus(cli *kubevirt.Client, vmName string) (string, error) {
	macStr, err := cli.GetJSONPath("vmi", vmName, "{@.status.interfaces[0].mac}")
	if err != nil {
		return "", fmt.Errorf("failed to extract the MAC address from VM %q: %w", vmName, err)
	}
	return macStr, nil
}

func obtainMAC(virtClient *kubevirt.Client, vmName string) (string, error) {
	return macFromStatus(virtClient, vmName)
}
why do we have these 2 fns? One just calls the other.
I was following the pattern of the IP-obtaining function:
func obtainAddresses(virtClient *kubevirt.Client, vmName string) ([]string, error) {
return addressFromStatus(virtClient, vmName)
}
but might as well unify the calls into one. DONE.
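After unifying, the two functions collapse into one, along these lines (a sketch based on the snippets in this thread):

func obtainMAC(virtClient *kubevirt.Client, vmName string) (string, error) {
	macStr, err := virtClient.GetJSONPath("vmi", vmName, "{@.status.interfaces[0].mac}")
	if err != nil {
		return "", fmt.Errorf("failed to extract the MAC address from VM %q: %w", vmName, err)
	}
	return macStr, nil
}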
	return macStr, nil
}

func obtainMAC(virtClient *kubevirt.Client, vmName string) (string, error) {
don't we already have a function to extract the VM's MAC address from the status, @qinqon?
@@ -93,6 +93,29 @@ func (c *Client) GetJSONPath(resource, name, jsonPath string) (string, error) {
	}
	return strings.TrimSuffix(strings.TrimPrefix(output, `"`), `"`), nil
}

func (c *Client) GetPodsByLabel(labelKey, labelValue string) ([]string, error) {
	output, err := c.oc.AsAdmin().Run("get").Args("pods", "-n", c.oc.Namespace(), "-l", fmt.Sprintf("%s=%s", labelKey, labelValue), "-o", "jsonpath={.items[*].metadata.name}").Output()
question: do we really need to be "admin" for this? I assume yes, since we're entering arbitrary namespaces.
I assume so, will check once I have a cluster running.
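For context, a sketch of how the truncated GetPodsByLabel helper could continue (the error handling and name splitting are assumptions, not the PR's exact code):

func (c *Client) GetPodsByLabel(labelKey, labelValue string) ([]string, error) {
	output, err := c.oc.AsAdmin().Run("get").Args(
		"pods", "-n", c.oc.Namespace(),
		"-l", fmt.Sprintf("%s=%s", labelKey, labelValue),
		"-o", "jsonpath={.items[*].metadata.name}",
	).Output()
	if err != nil {
		return nil, fmt.Errorf("failed to get pods with label %s=%s: %w", labelKey, labelValue, err)
	}
	if output == "" {
		// No matching pods: return an empty slice rather than nil.
		return []string{}, nil
	}
	// The jsonpath output is a space-separated list of pod names.
	return strings.Fields(output), nil
}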
duplicateVMName := vmName + "-duplicate"
By(fmt.Sprintf("Duplicating VM %s/%s to %s/%s", vmNamespace, vmName, vmNamespace, duplicateVMName))

vmiSpecJSON, err := cli.GetJSONPath("vmi", vmName, "{.spec}")
couldn't you do this without reading from the API? I mean, just do some oldVMI.DeepCopy() kind of thing?
I think you'd need to turn duplicateVM into a higher-order function though - you'd pass the VM from the test into the function returned by this higher-order fn.
Opinionated nit - rough sketch below.
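A rough sketch of that idea (names are hypothetical, and kubevirtv1 stands for the kubevirt.io/api/core/v1 types; this is not the PR's code):

// Sketch only: capture the in-memory VMI from the test and return a closure,
// so no API read is needed to obtain the spec of the VM being duplicated.
func duplicateVM(original *kubevirtv1.VirtualMachineInstance) func(cli *kubevirt.Client, vmNamespace, vmName string) {
	return func(cli *kubevirt.Client, vmNamespace, vmName string) {
		dup := original.DeepCopy()
		dup.Name = vmName + "-duplicate"
		dup.Namespace = vmNamespace
		dup.ResourceVersion = ""
		dup.UID = ""
		// ...create dup through the client...
	}
}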
Now, the thing is - if I want to reuse this DescribeTable, I need the function to fit opCmd:
opCmd func(cli *kubevirt.Client, vmNamespace, vmName string)
So in order to do so, I had to duplicate the VM with only these params.
I kinda think it keeps things tidy, but if you prefer, I can copy this big DescribeTable to a standalone test where I can create the duplicate VM in a more standard way.
When I tried that, I had to create a bunch of helpers to avoid duplicating code from the current test, and it ended up less nice and tidy than the current way, IMO.
let's keep it as is then.
I think I understand why this fails. You only copy the spec of the src VM. The IP requests are configured in the VMI's metadata, which you haven't carried over.
I think we need to ensure the proper annotation is copied as well.
EDIT: confirmed.
Check out these logs from the ipam-ext component:
2025-08-11T14:14:11.909647706Z 2025-08-11T14:14:11Z INFO admission new pod annotations {"object": {"name":"","namespace":"e2e-network-segmentation-e2e-1367"}, "namespace": "e2e-network-segmentation-e2e-1367", "name": "", "resource": {"group":"","version":"v1","resource":"pods"}, "user": "system:serviceaccount:openshift-cnv:kubevirt-controller", "requestID": "bc07f063-4d02-4e9d-87e5-64318895e435", "pod": {"descheduler.alpha.kubernetes.io/request-evict-only":"","k8s.ovn.org/primary-udn-ipamclaim":"myvm.overlay","kubectl.kubernetes.io/default-container":"compute","kubevirt.io/domain":"myvm","kubevirt.io/migrationTransportUnix":"true","kubevirt.io/vm-generation":"1","network.kubevirt.io/addresses":"{\"overlay\":[\"203.203.0.100\",\"2014:100:200::100\"]}","openshift.io/scc":"kubevirt-controller","post.hook.backup.velero.io/command":"[\"/usr/bin/virt-freezer\", \"--unfreeze\", \"--name\", \"myvm\", \"--namespace\", \"e2e-network-segmentation-e2e-1367\"]","post.hook.backup.velero.io/container":"compute","pre.hook.backup.velero.io/command":"[\"/usr/bin/virt-freezer\", \"--freeze\", \"--name\", \"myvm\", \"--namespace\", \"e2e-network-segmentation-e2e-1367\"]","pre.hook.backup.velero.io/container":"compute","seccomp.security.alpha.kubernetes.io/pod":"localhost/kubevirt/kubevirt.json","security.openshift.io/validated-scc-subject-type":"user","v1.multus-cni.io/default-network":"[{\"name\":\"default\",\"namespace\":\"openshift-ovn-kubernetes\",\"ips\":[\"203.203.0.100/16\",\"2014:100:200::100/60\"],\"ipam-claim-reference\":\"myvm.overlay\"}]"}}
2025-08-11T14:16:00.883938764Z 2025-08-11T14:16:00Z INFO admission new pod annotations {"object": {"name":"","namespace":"e2e-network-segmentation-e2e-1367"}, "namespace": "e2e-network-segmentation-e2e-1367", "name": "", "resource": {"group":"","version":"v1","resource":"pods"}, "user": "system:serviceaccount:openshift-cnv:kubevirt-controller", "requestID": "ad8a55f3-f8ef-42a2-b134-cac7a6fd4b3f", "pod": {"descheduler.alpha.kubernetes.io/request-evict-only":"","k8s.ovn.org/primary-udn-ipamclaim":"myvm-duplicate.overlay","kubectl.kubernetes.io/default-container":"compute","kubevirt.io/domain":"myvm-duplicate","kubevirt.io/migrationTransportUnix":"true","openshift.io/scc":"kubevirt-controller","post.hook.backup.velero.io/command":"[\"/usr/bin/virt-freezer\", \"--unfreeze\", \"--name\", \"myvm-duplicate\", \"--namespace\", \"e2e-network-segmentation-e2e-1367\"]","post.hook.backup.velero.io/container":"compute","pre.hook.backup.velero.io/command":"[\"/usr/bin/virt-freezer\", \"--freeze\", \"--name\", \"myvm-duplicate\", \"--namespace\", \"e2e-network-segmentation-e2e-1367\"]","pre.hook.backup.velero.io/container":"compute","seccomp.security.alpha.kubernetes.io/pod":"localhost/kubevirt/kubevirt.json","security.openshift.io/validated-scc-subject-type":"user"}}
Only the "original" VM has the network.kubevirt.io/addresses
annotation. We need to copy that from the src VM as well.
I'll be pushing an update.
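A minimal sketch of the fix being described, assuming a hypothetical helper name and the jsonpath-based client used elsewhere in this PR (dots in annotation keys must be escaped in jsonpath):

const addressesAnnotation = "network.kubevirt.io/addresses"

// Sketch only: besides {.spec}, also read the address annotation from the
// source VMI so the duplicate requests the same preconfigured IPs.
func addressAnnotationFor(cli *kubevirt.Client, vmName string) (map[string]string, error) {
	addrs, err := cli.GetJSONPath("vmi", vmName,
		`{.metadata.annotations.network\.kubevirt\.io/addresses}`)
	if err != nil {
		return nil, fmt.Errorf("failed to read %s from VMI %q: %w", addressesAnnotation, vmName, err)
	}
	return map[string]string{addressesAnnotation: addrs}, nil
}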
	return []string{}
}

virtLauncherPodName := podNames[0]
ok, we're "safe" in the sense we won't see > 1 pod here since your get pods by label function checks pods in the test's namespace, right ?
that, and the fact that I don't expect this VM to be migrating.
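If extra safety is ever wanted, a sketch of a defensive variant (the vm.kubevirt.io/name label key is an assumption about how the launcher pods are labelled):

// Sketch only: fail loudly if the label query ever matches more than one
// virt-launcher pod (a live migration would briefly yield two).
podNames, err := cli.GetPodsByLabel("vm.kubevirt.io/name", vmName)
Expect(err).NotTo(HaveOccurred())
Expect(podNames).To(HaveLen(1), "expected exactly one virt-launcher pod for VM %q", vmName)
virtLauncherPodName := podNames[0]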
Force-pushed from 3ad3231 to 8d6569e
/test e2e-metal-ipi-ovn-bgp-virt-dualstack-techpreview

Job Failure Risk Analysis for sha: 8d6569e
Risk analysis has seen new tests most likely introduced by this PR. New Test Risks for sha: 8d6569e
New tests seen in this PR at sha: 8d6569e
/testwith e2e-metal-ipi-ovn-bgp-virt-dualstack-techpreview openshift/cluster-network-operator#2743 openshift/ovn-kubernetes#2666

@maiqueb,

/testwith openshift/origin/main/e2e-metal-ipi-ovn-bgp-virt-dualstack-techpreview openshift/cluster-network-operator#2743 openshift/ovn-kubernetes#2666

Image failed to build:

/testwith openshift/origin/main/e2e-metal-ipi-ovn-bgp-virt-dualstack-techpreview openshift/cluster-network-operator#2743 openshift/ovn-kubernetes#2666

/testwith openshift/origin/main/e2e-metal-ipi-ovn-bgp-virt-dualstack-techpreview openshift/cluster-network-operator#2743 openshift/ovn-kubernetes#2666
Force-pushed from 8d6569e to be08098
Change: fix kubevirt.io/network/addresses annotation expected value format.

/testwith openshift/origin/main/e2e-metal-ipi-ovn-bgp-virt-dualstack-techpreview openshift/cluster-network-operator#2743 openshift/ovn-kubernetes#2666

Job Failure Risk Analysis for sha: be08098
@qinqon I dug into the must-gather files a bit - could this be relevant to your PR?

This is not related to my PR; this is ipam extensions creating the default-network annotation with the wrong namespace, which means we are missing the OpenShift Virtualization integration that configures ipam extensions with the "openshift-ovn-kubernetes" namespace instead of "ovn-kubernetes".

ah, so we need to wait for that in order to make the tests pass. ACK.
/testwith openshift/origin/main/e2e-metal-ipi-ovn-bgp-virt-dualstack-techpreview openshift/cluster-network-operator#2743 openshift/ovn-kubernetes#2666
Force-pushed from be08098 to 44f8b30

Change: rebase
/testwith openshift/origin/main/e2e-metal-ipi-ovn-bgp-virt-dualstack-techpreview openshift/cluster-network-operator#2743 openshift/ovn-kubernetes#2666

/testwith openshift/origin/main/e2e-metal-ipi-ovn-bgp-virt-dualstack-techpreview openshift/cluster-network-operator#2743 openshift/ovn-kubernetes#2666

Job Failure Risk Analysis for sha: 44f8b30

Last run was affected by openshift/machine-config-operator#5213.
/testwith openshift/origin/main/e2e-metal-ipi-ovn-bgp-virt-dualstack-techpreview openshift/cluster-network-operator#2743 openshift/ovn-kubernetes#2666

Last run failed early on cluster creation; it seems it was still affected by openshift/machine-config-operator#5213.
/testwith openshift/origin/main/e2e-metal-ipi-ovn-bgp-virt-dualstack-techpreview openshift/cluster-network-operator#2743 openshift/ovn-kubernetes#2666

/testwith openshift/origin/main/e2e-metal-ipi-ovn-bgp-virt-dualstack-techpreview openshift/cluster-network-operator#2743 openshift/ovn-kubernetes#2666

Cluster failed to create.
Force-pushed from 5bd6c8c to f13e322

Change: Rebase (no conflicts)
This updates the tests to be reported in the matrix. Signed-off-by: Miguel Duarte Barroso <[email protected]>
Force-pushed from f13e322 to b106650

/test e2e-metal-ipi-ovn-bgp-virt-dualstack-techpreview
Risk analysis has seen new tests most likely introduced by this PR. New Test Risks for sha: b106650
New tests seen in this PR at sha: b106650
This PR adds tests in

/assign @bertinatto
The openshift bot told us to assign you. Please review :) Thanks in advance.

/approve

[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: dgoodwin, kyrtapz, qinqon, RamLavi
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/retest-required
@RamLavi: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/retest-required
Merged 67932bb into openshift:main
This PR adds tests to check the PreconfiguredUDNAddresses FG: