
Extended.[k8s.io] Services should be able to create a functioning NodePort service #13108

@smarterclayton

https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_gce/965/testReport/junit/(root)/Extended/_k8s_io__Services_should_be_able_to_create_a_functioning_NodePort_service/

I've never seen this flake before (in roughly 600 runs); it looks like the port never became available within 5 minutes. This is the OpenShift default SDN setup.

/tmp/openshift/tito/rpmbuild-originz5K5EQ/BUILD/origin-1.5.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:485
Feb 24 06:46:56.965: expected node port 30633 to be in use, stdout: . err: error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl [kubectl --server=https://internal-api.prtest965.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/tmp/cluster-admin.kubeconfig exec --namespace=e2e-tests-services-wd35q hostexec -- /bin/sh -c for i in $(seq 1 300); do if ss -ant46 'sport = :30633' | grep ^LISTEN; then exit 0; fi; sleep 1; done; exit 1] []  <nil>   [] <nil> 0xc421bed7d0 exit status 1 <nil> <nil> true [0xc420a1a628 0xc420a1a640 0xc420a1a658] [0xc420a1a628 0xc420a1a640 0xc420a1a658] [0xc420a1a638 0xc420a1a650] [0x986eb0 0x986eb0] 0xc421bdaea0 <nil>}:
Command stdout:

stderr:

error:
exit status 1

/tmp/openshift/tito/rpmbuild-originz5K5EQ/BUILD/origin-1.5.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:483

Very weird
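
For readability, here is the check that timed out, pulled out of the kubectl exec invocation above. It runs via the hostexec pod on the node and polls for up to 300 seconds (the 5 minutes mentioned above) for anything to be listening on the node port:

# Reformatted from the failure output above; exits 0 as soon as the
# node port shows up in LISTEN state, otherwise 1 after ~300 seconds.
for i in $(seq 1 300); do
  if ss -ant46 'sport = :30633' | grep ^LISTEN; then
    exit 0
  fi
  sleep 1
done
exit 1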

STEP: Building a namespace api object
Feb 24 06:41:33.468: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Services
  /tmp/openshift/tito/rpmbuild-originz5K5EQ/BUILD/origin-1.5.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:90
[It] should be able to create a functioning NodePort service
  /tmp/openshift/tito/rpmbuild-originz5K5EQ/BUILD/origin-1.5.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:485
STEP: creating service nodeport-test with type=NodePort in namespace e2e-tests-services-wd35q
STEP: creating pod to be part of service nodeport-test
Feb 24 06:41:34.470: INFO: Waiting up to 2m0s for 1 pods to be created
Feb 24 06:41:34.522: INFO: Found 0/1 pods - will retry
Feb 24 06:41:36.562: INFO: Found all 1 pods
Feb 24 06:41:36.562: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [nodeport-test-dd39k]
Feb 24 06:41:36.562: INFO: Waiting up to 2m0s for pod nodeport-test-dd39k status to be running and ready
Feb 24 06:41:36.595: INFO: Waiting for pod nodeport-test-dd39k in namespace 'e2e-tests-services-wd35q' status to be 'running and ready'(found phase: "Pending", readiness: false) (32.700389ms elapsed)
Feb 24 06:41:38.682: INFO: Waiting for pod nodeport-test-dd39k in namespace 'e2e-tests-services-wd35q' status to be 'running and ready'(found phase: "Pending", readiness: false) (2.11980505s elapsed)
Feb 24 06:41:40.716: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nodeport-test-dd39k]
STEP: hitting the pod through the service's NodePort
Feb 24 06:41:40.716: INFO: Testing HTTP reachability of http://104.154.147.57:30633/echo?msg=hello
Feb 24 06:41:45.716: INFO: Got error testing for reachability of http://104.154.147.57:30633/echo?msg=hello: Get http://104.154.147.57:30633/echo?msg=hello: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 24 06:41:47.717: INFO: Testing HTTP reachability of http://104.154.147.57:30633/echo?msg=hello
STEP: verifying the node port is locked
Feb 24 06:41:51.599: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://internal-api.prtest965.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/tmp/cluster-admin.kubeconfig exec --namespace=e2e-tests-services-wd35q hostexec -- /bin/sh -c for i in $(seq 1 300); do if ss -ant46 'sport = :30633' | grep ^LISTEN; then exit 0; fi; sleep 1; done; exit 1'
Feb 24 06:46:56.965: INFO: rc: 127
Feb 24 06:46:56.965: INFO: expected node port 30633 to be in use, stdout: . err: error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl [kubectl --server=https://internal-api.prtest965.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/tmp/cluster-admin.kubeconfig exec --namespace=e2e-tests-services-wd35q hostexec -- /bin/sh -c for i in $(seq 1 300); do if ss -ant46 'sport = :30633' | grep ^LISTEN; then exit 0; fi; sleep 1; done; exit 1] []  <nil>   [] <nil> 0xc421bed7d0 exit status 1 <nil> <nil> true [0xc420a1a628 0xc420a1a640 0xc420a1a658] [0xc420a1a628 0xc420a1a640 0xc420a1a658] [0xc420a1a638 0xc420a1a650] [0x986eb0 0x986eb0] 0xc421bdaea0 <nil>}:
Command stdout:

stderr:

error:
exit status 1

[AfterEach] [k8s.io] Services
  /tmp/openshift/tito/rpmbuild-originz5K5EQ/BUILD/origin-1.5.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-services-wd35q".
STEP: Found 11 events.
Feb 24 06:46:56.997: INFO: At 2017-02-24 06:41:34 -0500 EST - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-dd39k
Feb 24 06:46:56.997: INFO: At 2017-02-24 06:41:34 -0500 EST - event for nodeport-test-dd39k: {default-scheduler } Scheduled: Successfully assigned nodeport-test-dd39k to ci-prtest965-ig-n-fsv9
Feb 24 06:46:56.997: INFO: At 2017-02-24 06:41:36 -0500 EST - event for nodeport-test-dd39k: {kubelet ci-prtest965-ig-n-fsv9} Created: Created container with docker id 25b24e578d03; Security:[seccomp=unconfined]
Feb 24 06:46:56.997: INFO: At 2017-02-24 06:41:36 -0500 EST - event for nodeport-test-dd39k: {kubelet ci-prtest965-ig-n-fsv9} Started: Started container with docker id 25b24e578d03
Feb 24 06:46:56.997: INFO: At 2017-02-24 06:41:36 -0500 EST - event for nodeport-test-dd39k: {kubelet ci-prtest965-ig-n-fsv9} Pulled: Container image "gcr.io/google_containers/netexec:1.7" already present on machine
Feb 24 06:46:56.997: INFO: At 2017-02-24 06:41:47 -0500 EST - event for hostexec: {default-scheduler } Scheduled: Successfully assigned hostexec to ci-prtest965-ig-n-fsv9
Feb 24 06:46:56.997: INFO: At 2017-02-24 06:41:49 -0500 EST - event for hostexec: {kubelet ci-prtest965-ig-n-fsv9} Pulled: Container image "gcr.io/google_containers/hostexec:1.2" already present on machine
Feb 24 06:46:56.997: INFO: At 2017-02-24 06:41:50 -0500 EST - event for hostexec: {kubelet ci-prtest965-ig-n-fsv9} Created: Created container with docker id 86e8055fe394; Security:[seccomp=unconfined]
Feb 24 06:46:56.997: INFO: At 2017-02-24 06:41:50 -0500 EST - event for hostexec: {kubelet ci-prtest965-ig-n-fsv9} Started: Started container with docker id 86e8055fe394
Feb 24 06:46:56.997: INFO: At 2017-02-24 06:41:59 -0500 EST - event for nodeport-test-dd39k: {kubelet ci-prtest965-ig-n-fsv9} Unhealthy: Readiness probe failed: Get http://172.16.4.40:80/hostName: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 24 06:46:56.997: INFO: At 2017-02-24 06:45:10 -0500 EST - event for nodeport-test-dd39k: {kubelet ci-prtest965-ig-n-fsv9} Unhealthy: Readiness probe failed: Get http://172.16.4.40:80/hostName: dial tcp 172.16.4.40:80: getsockopt: no route to host
Feb 24 06:46:57.090: INFO: POD      
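
One aside (my observation, not something stated in the report): the framework logs rc: 127 above while the wrapped error reports exit status 1. In POSIX shells, 127 is the conventional exit status for "command not found", so if the 127 is real it would point at ss (or the shell invocation itself) being unavailable in the hostexec pod rather than a plain timeout; worth double-checking which of the two statuses is accurate. Illustration of the shell convention only, not from the test run:

$ /bin/sh -c 'definitely-not-a-command'; echo $?   # exact error text varies by shell
127
$ /bin/sh -c 'exit 1'; echo $?
1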
