diff --git a/v1.14/rancher/PRODUCT.yaml b/v1.14/rancher/PRODUCT.yaml
new file mode 100644
index 0000000000..ca058be0b3
--- /dev/null
+++ b/v1.14/rancher/PRODUCT.yaml
@@ -0,0 +1,8 @@
+vendor: Rancher Inc.
+name: Rancher Kubernetes
+version: v2.2.5-rc5
+website_url: https://rancher.com/kubernetes/
+documentation_url: https://rancher.com/docs/rancher/v2.x/en/
+product_logo_url: https://rancher.com/img/brand-guidelines/assets/logos/png/color/rancher-logo-horiz-color.png
+type: distribution
+description: 'Deploy Rancher’s Kubernetes distro anywhere or launch cloud Kubernetes services from Google, Amazon or Microsoft.'
diff --git a/v1.14/rancher/README.md b/v1.14/rancher/README.md
new file mode 100644
index 0000000000..331d98eb9e
--- /dev/null
+++ b/v1.14/rancher/README.md
@@ -0,0 +1,51 @@
+# Conformance tests for Rancher 2.x Kubernetes
+
+## Install Rancher Server
+
+As per the [documentation](https://rancher.com/docs/rancher/v2.x/en/installation/), install Rancher server on either a single node or in HA mode.
+
+## Run Kubernetes Cluster
+
+After the Rancher server is running, access the Rancher UI at `https://<server-url>` and create a new cluster. Please refer to the [documentation](https://rancher.com/docs/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/#4-create-the-cluster) for more information about how to create a cluster.
+
+## Run Conformance Test
+
+1. Once your Rancher Kubernetes cluster is active, fetch its kubeconfig.yml file and save it locally.
+
+2. Download a sonobuoy [binary release](https://github.com/heptio/sonobuoy/releases) of the CLI, or build it yourself by running:
+```sh
+$ go get -u -v github.com/heptio/sonobuoy
+```
+
+3. Configure your kubeconfig file by running:
+```sh
+$ export KUBECONFIG="/path/to/your/cluster/kubeconfig.yml"
+```
+
+4. Run sonobuoy:
+```sh
+$ sonobuoy run
+```
+
+5. Watch the logs:
+```sh
+$ sonobuoy logs
+```
+
+6. Check the status:
+```sh
+$ sonobuoy status
+```
+
+7. Once the status command shows the run has completed, you can download the results tar.gz file:
+```sh
+$ sonobuoy retrieve
+```
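+
+As a quick, optional check (a minimal sketch, not part of the official steps), you can unpack the retrieved tarball and inspect the results. The archive name printed by `sonobuoy retrieve` is timestamped, so the glob below is only a placeholder, and the `plugins/e2e/results/` layout assumes the sonobuoy release current at the time of writing:
+```sh
+# `sonobuoy retrieve` prints the archive name, e.g. 201906181027_sonobuoy_<uuid>.tar.gz
+$ mkdir results && tar xzf *_sonobuoy_*.tar.gz -C results
+# the conformance artifacts (e2e.log, junit_01.xml) should be under plugins/e2e/results/
+$ tail results/plugins/e2e/results/e2e.log
+```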
diff --git a/v1.14/rancher/e2e.log b/v1.14/rancher/e2e.log
new file mode 100644
index 0000000000..88aa68b108
--- /dev/null
+++ b/v1.14/rancher/e2e.log
@@ -0,0 +1,10471 @@
+I0618 10:27:20.918644 14 test_context.go:405] Using a temporary kubeconfig file from in-cluster config : /tmp/kubeconfig-675335780
+I0618 10:27:20.918740 14 e2e.go:240] Starting e2e run "a71dca5f-91b3-11e9-8aef-6ab77b36fff7" on Ginkgo node 1
+Running Suite: Kubernetes e2e suite
+===================================
+Random Seed: 1560853639 - Will randomize all specs
+Will run 204 of 3585 specs
+
+Jun 18 10:27:21.053: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780
+Jun 18 10:27:21.055: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
+Jun 18 10:27:21.070: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
+Jun 18 10:27:21.093: INFO: The status of Pod rke-coredns-addon-deploy-job-4b9ct is Succeeded, skipping waiting
+Jun 18 10:27:21.093: INFO: The status of Pod rke-ingress-controller-deploy-job-697mh is Succeeded, skipping waiting
+Jun 18 10:27:21.093: INFO: The status of Pod rke-metrics-addon-deploy-job-f4q28 is Succeeded, skipping waiting
+Jun 18 10:27:21.093: INFO: The status of Pod rke-network-plugin-deploy-job-c76n7 is Succeeded, skipping waiting
+Jun 18 10:27:21.093: INFO: 6 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
+Jun 18 10:27:21.093: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
+Jun 18 10:27:21.093: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
+Jun 18 10:27:21.100: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'canal' (0 seconds elapsed)
+Jun 18 10:27:21.100: INFO: e2e test version: v1.14.3
+Jun 18 10:27:21.101: INFO: kube-apiserver version: v1.14.3
+SSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes
+ should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+ /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] EmptyDir volumes
+ /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 18 10:27:21.101: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780
+STEP: Building a namespace api object, basename emptydir
+Jun 18 10:27:21.141: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+ /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test emptydir 0666 on tmpfs
+Jun 18 10:27:21.153: INFO: Waiting up to 5m0s for pod "pod-a7c5819b-91b3-11e9-8aef-6ab77b36fff7" in namespace "emptydir-8642" to be "success or failure"
+Jun 18 10:27:21.156: INFO: Pod "pod-a7c5819b-91b3-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false.
Elapsed: 3.189924ms +Jun 18 10:27:23.160: INFO: Pod "pod-a7c5819b-91b3-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007051478s +STEP: Saw pod success +Jun 18 10:27:23.160: INFO: Pod "pod-a7c5819b-91b3-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:27:23.163: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-a7c5819b-91b3-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 10:27:23.191: INFO: Waiting for pod pod-a7c5819b-91b3-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:27:23.194: INFO: Pod pod-a7c5819b-91b3-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:27:23.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8642" for this suite. +Jun 18 10:27:29.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:27:29.373: INFO: namespace emptydir-8642 deletion completed in 6.1729787s + +• [SLOW TEST:8.273 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:27:29.373: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:86 +[It] should provide secure master service [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:27:29.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6337" for this suite. 
+Jun 18 10:27:35.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:27:35.539: INFO: namespace services-6337 deletion completed in 6.124000394s +[AfterEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 + +• [SLOW TEST:6.166 seconds] +[sig-network] Services +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should provide secure master service [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:27:35.540: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a service in the namespace +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Verifying there is no service in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:27:41.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-6110" for this suite. +Jun 18 10:27:47.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:27:47.795: INFO: namespace namespaces-6110 deletion completed in 6.123955078s +STEP: Destroying namespace "nsdeletetest-1115" for this suite. +Jun 18 10:27:47.797: INFO: Namespace nsdeletetest-1115 was already deleted +STEP: Destroying namespace "nsdeletetest-3129" for this suite. 
+Jun 18 10:27:53.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:27:53.917: INFO: namespace nsdeletetest-3129 deletion completed in 6.119825995s + +• [SLOW TEST:18.378 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected combined + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:27:53.917: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-projected-all-test-volume-bb54a562-91b3-11e9-8aef-6ab77b36fff7 +STEP: Creating secret with name secret-projected-all-test-volume-bb54a51e-91b3-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test Check all projections for projected volume plugin +Jun 18 10:27:53.975: INFO: Waiting up to 5m0s for pod "projected-volume-bb54a4d5-91b3-11e9-8aef-6ab77b36fff7" in namespace "projected-2595" to be "success or failure" +Jun 18 10:27:53.980: INFO: Pod "projected-volume-bb54a4d5-91b3-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.443971ms +Jun 18 10:27:55.984: INFO: Pod "projected-volume-bb54a4d5-91b3-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009387486s +STEP: Saw pod success +Jun 18 10:27:55.984: INFO: Pod "projected-volume-bb54a4d5-91b3-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:27:55.987: INFO: Trying to get logs from node ip-172-26-30-38 pod projected-volume-bb54a4d5-91b3-11e9-8aef-6ab77b36fff7 container projected-all-volume-test: +STEP: delete the pod +Jun 18 10:27:56.012: INFO: Waiting for pod projected-volume-bb54a4d5-91b3-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:27:56.019: INFO: Pod projected-volume-bb54a4d5-91b3-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected combined + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:27:56.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2595" for this suite. 
+Jun 18 10:28:02.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:28:02.149: INFO: namespace projected-2595 deletion completed in 6.121920621s + +• [SLOW TEST:8.232 seconds] +[sig-storage] Projected combined +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run default + should create an rc or deployment from an image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:28:02.149: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl run default + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 +[It] should create an rc or deployment from an image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 18 10:28:02.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7152' +Jun 18 10:28:02.477: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Jun 18 10:28:02.477: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" +STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created +[AfterEach] [k8s.io] Kubectl run default + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1324 +Jun 18 10:28:04.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete deployment e2e-test-nginx-deployment --namespace=kubectl-7152' +Jun 18 10:28:04.559: INFO: stderr: "" +Jun 18 10:28:04.559: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:28:04.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7152" for this suite. 
+Jun 18 10:28:10.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:28:10.688: INFO: namespace kubectl-7152 deletion completed in 6.124483334s + +• [SLOW TEST:8.539 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl run default + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create an rc or deployment from an image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:28:10.688: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 +[It] should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +Jun 18 10:28:10.726: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:28:14.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-4443" for this suite. 
+Jun 18 10:28:20.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:28:20.181: INFO: namespace init-container-4443 deletion completed in 6.125591897s + +• [SLOW TEST:9.492 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Networking + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:28:20.181: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Performing setup for networking test in namespace pod-network-test-3624 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Jun 18 10:28:20.222: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +STEP: Creating test pods +Jun 18 10:28:42.309: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.42.2.126:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3624 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 10:28:42.309: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 10:28:42.446: INFO: Found all expected endpoints: [netserver-0] +Jun 18 10:28:42.450: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.42.1.112:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3624 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 10:28:42.450: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 10:28:42.614: INFO: Found all expected endpoints: [netserver-1] +Jun 18 10:28:42.617: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.42.0.129:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3624 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 10:28:42.617: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 10:28:42.777: INFO: Found all expected endpoints: [netserver-2] +[AfterEach] [sig-network] Networking + 
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:28:42.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-3624" for this suite. +Jun 18 10:29:04.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:29:04.905: INFO: namespace pod-network-test-3624 deletion completed in 22.123791062s + +• [SLOW TEST:44.725 seconds] +[sig-network] Networking +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 + Granular Checks: Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:29:04.905: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 10:29:04.967: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5a56464-91b3-11e9-8aef-6ab77b36fff7" in namespace "downward-api-4445" to be "success or failure" +Jun 18 10:29:04.971: INFO: Pod "downwardapi-volume-e5a56464-91b3-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.483471ms +Jun 18 10:29:06.975: INFO: Pod "downwardapi-volume-e5a56464-91b3-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00769362s +STEP: Saw pod success +Jun 18 10:29:06.975: INFO: Pod "downwardapi-volume-e5a56464-91b3-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:29:06.979: INFO: Trying to get logs from node ip-172-26-30-38 pod downwardapi-volume-e5a56464-91b3-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 10:29:07.000: INFO: Waiting for pod downwardapi-volume-e5a56464-91b3-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:29:07.006: INFO: Pod downwardapi-volume-e5a56464-91b3-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:29:07.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4445" for this suite. +Jun 18 10:29:13.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:29:13.136: INFO: namespace downward-api-4445 deletion completed in 6.125204505s + +• [SLOW TEST:8.231 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:29:13.137: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test substitution in container's args +Jun 18 10:29:13.179: INFO: Waiting up to 5m0s for pod "var-expansion-ea8b86f9-91b3-11e9-8aef-6ab77b36fff7" in namespace "var-expansion-2471" to be "success or failure" +Jun 18 10:29:13.182: INFO: Pod "var-expansion-ea8b86f9-91b3-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.04946ms +Jun 18 10:29:15.186: INFO: Pod "var-expansion-ea8b86f9-91b3-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006993399s +STEP: Saw pod success +Jun 18 10:29:15.186: INFO: Pod "var-expansion-ea8b86f9-91b3-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:29:15.189: INFO: Trying to get logs from node ip-172-26-17-1 pod var-expansion-ea8b86f9-91b3-11e9-8aef-6ab77b36fff7 container dapi-container: +STEP: delete the pod +Jun 18 10:29:15.210: INFO: Waiting for pod var-expansion-ea8b86f9-91b3-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:29:15.212: INFO: Pod var-expansion-ea8b86f9-91b3-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:29:15.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-2471" for this suite. +Jun 18 10:29:21.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:29:21.351: INFO: namespace var-expansion-2471 deletion completed in 6.13491255s + +• [SLOW TEST:8.214 seconds] +[k8s.io] Variable Expansion +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:29:21.351: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod pod-subpath-test-configmap-wg8b +STEP: Creating a pod to test atomic-volume-subpath +Jun 18 10:29:21.406: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wg8b" in namespace "subpath-7777" to be "success or failure" +Jun 18 10:29:21.411: INFO: Pod "pod-subpath-test-configmap-wg8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.757055ms +Jun 18 10:29:23.415: INFO: Pod "pod-subpath-test-configmap-wg8b": Phase="Running", Reason="", readiness=true. Elapsed: 2.008133052s +Jun 18 10:29:25.419: INFO: Pod "pod-subpath-test-configmap-wg8b": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.012328036s +Jun 18 10:29:27.423: INFO: Pod "pod-subpath-test-configmap-wg8b": Phase="Running", Reason="", readiness=true. Elapsed: 6.016119816s +Jun 18 10:29:29.427: INFO: Pod "pod-subpath-test-configmap-wg8b": Phase="Running", Reason="", readiness=true. Elapsed: 8.020240768s +Jun 18 10:29:31.430: INFO: Pod "pod-subpath-test-configmap-wg8b": Phase="Running", Reason="", readiness=true. Elapsed: 10.023846163s +Jun 18 10:29:33.434: INFO: Pod "pod-subpath-test-configmap-wg8b": Phase="Running", Reason="", readiness=true. Elapsed: 12.027589581s +Jun 18 10:29:35.438: INFO: Pod "pod-subpath-test-configmap-wg8b": Phase="Running", Reason="", readiness=true. Elapsed: 14.031845783s +Jun 18 10:29:37.442: INFO: Pod "pod-subpath-test-configmap-wg8b": Phase="Running", Reason="", readiness=true. Elapsed: 16.03557855s +Jun 18 10:29:39.446: INFO: Pod "pod-subpath-test-configmap-wg8b": Phase="Running", Reason="", readiness=true. Elapsed: 18.039557696s +Jun 18 10:29:41.451: INFO: Pod "pod-subpath-test-configmap-wg8b": Phase="Running", Reason="", readiness=true. Elapsed: 20.044260321s +Jun 18 10:29:43.454: INFO: Pod "pod-subpath-test-configmap-wg8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.047681281s +STEP: Saw pod success +Jun 18 10:29:43.454: INFO: Pod "pod-subpath-test-configmap-wg8b" satisfied condition "success or failure" +Jun 18 10:29:43.457: INFO: Trying to get logs from node ip-172-26-16-178 pod pod-subpath-test-configmap-wg8b container test-container-subpath-configmap-wg8b: +STEP: delete the pod +Jun 18 10:29:43.478: INFO: Waiting for pod pod-subpath-test-configmap-wg8b to disappear +Jun 18 10:29:43.487: INFO: Pod pod-subpath-test-configmap-wg8b no longer exists +STEP: Deleting pod pod-subpath-test-configmap-wg8b +Jun 18 10:29:43.487: INFO: Deleting pod "pod-subpath-test-configmap-wg8b" in namespace "subpath-7777" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:29:43.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-7777" for this suite. 
+Jun 18 10:29:49.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:29:49.618: INFO: namespace subpath-7777 deletion completed in 6.122669777s + +• [SLOW TEST:28.267 seconds] +[sig-storage] Subpath +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Container Runtime + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:29:49.618: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run with the expected status [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] +[AfterEach] [k8s.io] Container Runtime + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:30:14.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-9953" for this suite. 
+Jun 18 10:30:20.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:30:21.029: INFO: namespace container-runtime-9953 deletion completed in 6.129637132s + +• [SLOW TEST:31.412 seconds] +[k8s.io] Container Runtime +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + blackbox test + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 + when starting a container that exits + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 + should run with the expected status [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:30:21.030: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating the pod +Jun 18 10:30:23.603: INFO: Successfully updated pod "labelsupdate1303c2ea-91b4-11e9-8aef-6ab77b36fff7" +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:30:27.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9262" for this suite. 
+Jun 18 10:30:49.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:30:49.754: INFO: namespace downward-api-9262 deletion completed in 22.123967283s + +• [SLOW TEST:28.725 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] version v1 + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:30:49.754: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy through a service and a pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: starting an echo server on multiple ports +STEP: creating replication controller proxy-service-x4js8 in namespace proxy-3698 +I0618 10:30:49.808214 14 runners.go:184] Created replication controller with name: proxy-service-x4js8, namespace: proxy-3698, replica count: 1 +I0618 10:30:50.858645 14 runners.go:184] proxy-service-x4js8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0618 10:30:51.858940 14 runners.go:184] proxy-service-x4js8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0618 10:30:52.859161 14 runners.go:184] proxy-service-x4js8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0618 10:30:53.859387 14 runners.go:184] proxy-service-x4js8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0618 10:30:54.859669 14 runners.go:184] proxy-service-x4js8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0618 10:30:55.859911 14 runners.go:184] proxy-service-x4js8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0618 10:30:56.860245 14 runners.go:184] proxy-service-x4js8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0618 10:30:57.860486 14 runners.go:184] proxy-service-x4js8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0618 10:30:58.860750 14 runners.go:184] proxy-service-x4js8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0618 10:30:59.861008 14 
runners.go:184] proxy-service-x4js8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jun 18 10:30:59.864: INFO: setup took 10.074399132s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts +Jun 18 10:30:59.871: INFO: (0) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:1080/proxy/: ... (200; 6.391316ms) +Jun 18 10:30:59.871: INFO: (0) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 6.554244ms) +Jun 18 10:30:59.871: INFO: (0) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 6.588576ms) +Jun 18 10:30:59.871: INFO: (0) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 6.85806ms) +Jun 18 10:30:59.871: INFO: (0) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx/proxy/: test (200; 6.734284ms) +Jun 18 10:30:59.871: INFO: (0) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 7.15704ms) +Jun 18 10:30:59.875: INFO: (0) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:1080/proxy/: test<... (200; 10.476588ms) +Jun 18 10:30:59.875: INFO: (0) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname2/proxy/: bar (200; 10.52406ms) +Jun 18 10:30:59.875: INFO: (0) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname1/proxy/: foo (200; 10.604513ms) +Jun 18 10:30:59.875: INFO: (0) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname2/proxy/: bar (200; 10.772835ms) +Jun 18 10:30:59.875: INFO: (0) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname1/proxy/: foo (200; 10.688129ms) +Jun 18 10:30:59.877: INFO: (0) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:460/proxy/: tls baz (200; 12.607224ms) +Jun 18 10:30:59.877: INFO: (0) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname2/proxy/: tls qux (200; 13.122789ms) +Jun 18 10:30:59.877: INFO: (0) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:462/proxy/: tls qux (200; 13.163964ms) +Jun 18 10:30:59.878: INFO: (0) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: test (200; 4.243536ms) +Jun 18 10:30:59.885: INFO: (1) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 4.663612ms) +Jun 18 10:30:59.885: INFO: (1) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 4.701287ms) +Jun 18 10:30:59.886: INFO: (1) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: ... (200; 5.857093ms) +Jun 18 10:30:59.887: INFO: (1) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 5.964975ms) +Jun 18 10:30:59.887: INFO: (1) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:1080/proxy/: test<... 
(200; 6.230075ms) +Jun 18 10:30:59.887: INFO: (1) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname2/proxy/: bar (200; 6.854385ms) +Jun 18 10:30:59.887: INFO: (1) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname2/proxy/: bar (200; 6.965812ms) +Jun 18 10:30:59.887: INFO: (1) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname2/proxy/: tls qux (200; 6.963316ms) +Jun 18 10:30:59.889: INFO: (1) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname1/proxy/: tls baz (200; 7.876886ms) +Jun 18 10:30:59.889: INFO: (1) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname1/proxy/: foo (200; 8.135928ms) +Jun 18 10:30:59.890: INFO: (1) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname1/proxy/: foo (200; 8.95729ms) +Jun 18 10:30:59.895: INFO: (2) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:1080/proxy/: ... (200; 4.858776ms) +Jun 18 10:30:59.895: INFO: (2) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 5.189298ms) +Jun 18 10:30:59.895: INFO: (2) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx/proxy/: test (200; 4.938528ms) +Jun 18 10:30:59.895: INFO: (2) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 5.552126ms) +Jun 18 10:30:59.895: INFO: (2) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: test<... (200; 6.553043ms) +Jun 18 10:30:59.897: INFO: (2) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname1/proxy/: foo (200; 6.931998ms) +Jun 18 10:30:59.898: INFO: (2) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname2/proxy/: tls qux (200; 8.093833ms) +Jun 18 10:30:59.898: INFO: (2) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname2/proxy/: bar (200; 8.145585ms) +Jun 18 10:30:59.898: INFO: (2) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname1/proxy/: foo (200; 8.38723ms) +Jun 18 10:30:59.898: INFO: (2) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname1/proxy/: tls baz (200; 8.336345ms) +Jun 18 10:30:59.901: INFO: (3) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 3.150174ms) +Jun 18 10:30:59.903: INFO: (3) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 4.353637ms) +Jun 18 10:30:59.903: INFO: (3) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:1080/proxy/: test<... (200; 4.558213ms) +Jun 18 10:30:59.904: INFO: (3) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:460/proxy/: tls baz (200; 5.537064ms) +Jun 18 10:30:59.904: INFO: (3) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx/proxy/: test (200; 5.719221ms) +Jun 18 10:30:59.904: INFO: (3) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 5.7351ms) +Jun 18 10:30:59.904: INFO: (3) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:1080/proxy/: ... (200; 5.877641ms) +Jun 18 10:30:59.904: INFO: (3) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:462/proxy/: tls qux (200; 5.890527ms) +Jun 18 10:30:59.904: INFO: (3) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 5.907051ms) +Jun 18 10:30:59.904: INFO: (3) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: ... 
(200; 4.67875ms) +Jun 18 10:30:59.913: INFO: (4) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 4.818217ms) +Jun 18 10:30:59.913: INFO: (4) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:460/proxy/: tls baz (200; 5.208406ms) +Jun 18 10:30:59.914: INFO: (4) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 5.91278ms) +Jun 18 10:30:59.914: INFO: (4) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx/proxy/: test (200; 5.869065ms) +Jun 18 10:30:59.914: INFO: (4) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: test<... (200; 6.195536ms) +Jun 18 10:30:59.916: INFO: (4) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname2/proxy/: tls qux (200; 7.203126ms) +Jun 18 10:30:59.916: INFO: (4) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname1/proxy/: tls baz (200; 7.363909ms) +Jun 18 10:30:59.916: INFO: (4) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname2/proxy/: bar (200; 7.550678ms) +Jun 18 10:30:59.917: INFO: (4) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname2/proxy/: bar (200; 8.452701ms) +Jun 18 10:30:59.917: INFO: (4) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname1/proxy/: foo (200; 8.65198ms) +Jun 18 10:30:59.917: INFO: (4) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname1/proxy/: foo (200; 8.748621ms) +Jun 18 10:30:59.920: INFO: (5) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx/proxy/: test (200; 3.357568ms) +Jun 18 10:30:59.922: INFO: (5) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 5.147346ms) +Jun 18 10:30:59.923: INFO: (5) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: test<... (200; 5.271483ms) +Jun 18 10:30:59.923: INFO: (5) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:460/proxy/: tls baz (200; 5.379134ms) +Jun 18 10:30:59.923: INFO: (5) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:1080/proxy/: ... (200; 5.57272ms) +Jun 18 10:30:59.923: INFO: (5) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 5.669919ms) +Jun 18 10:30:59.923: INFO: (5) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:462/proxy/: tls qux (200; 5.701203ms) +Jun 18 10:30:59.923: INFO: (5) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 5.816212ms) +Jun 18 10:30:59.925: INFO: (5) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname2/proxy/: bar (200; 7.207475ms) +Jun 18 10:30:59.925: INFO: (5) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname1/proxy/: foo (200; 7.347954ms) +Jun 18 10:30:59.925: INFO: (5) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname1/proxy/: foo (200; 7.362992ms) +Jun 18 10:30:59.925: INFO: (5) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname2/proxy/: tls qux (200; 7.39694ms) +Jun 18 10:30:59.925: INFO: (5) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname2/proxy/: bar (200; 7.406169ms) +Jun 18 10:30:59.925: INFO: (5) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname1/proxy/: tls baz (200; 7.910551ms) +Jun 18 10:30:59.931: INFO: (6) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:1080/proxy/: test<... 
(200; 5.607281ms) +Jun 18 10:30:59.931: INFO: (6) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 5.805478ms) +Jun 18 10:30:59.931: INFO: (6) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: test (200; 5.914157ms) +Jun 18 10:30:59.931: INFO: (6) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:460/proxy/: tls baz (200; 6.061975ms) +Jun 18 10:30:59.933: INFO: (6) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname2/proxy/: bar (200; 8.038182ms) +Jun 18 10:30:59.933: INFO: (6) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 8.047392ms) +Jun 18 10:30:59.933: INFO: (6) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:462/proxy/: tls qux (200; 7.937697ms) +Jun 18 10:30:59.933: INFO: (6) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname1/proxy/: foo (200; 8.16415ms) +Jun 18 10:30:59.934: INFO: (6) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:1080/proxy/: ... (200; 8.197391ms) +Jun 18 10:30:59.934: INFO: (6) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname1/proxy/: tls baz (200; 8.441277ms) +Jun 18 10:30:59.934: INFO: (6) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname2/proxy/: tls qux (200; 8.462302ms) +Jun 18 10:30:59.934: INFO: (6) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname1/proxy/: foo (200; 8.771991ms) +Jun 18 10:30:59.934: INFO: (6) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname2/proxy/: bar (200; 8.82122ms) +Jun 18 10:30:59.939: INFO: (7) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:1080/proxy/: test<... (200; 4.866963ms) +Jun 18 10:30:59.940: INFO: (7) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 5.57467ms) +Jun 18 10:30:59.940: INFO: (7) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 5.515889ms) +Jun 18 10:30:59.940: INFO: (7) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 5.758934ms) +Jun 18 10:30:59.940: INFO: (7) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx/proxy/: test (200; 6.009524ms) +Jun 18 10:30:59.941: INFO: (7) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:1080/proxy/: ... (200; 6.222494ms) +Jun 18 10:30:59.941: INFO: (7) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname2/proxy/: tls qux (200; 6.816653ms) +Jun 18 10:30:59.941: INFO: (7) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:460/proxy/: tls baz (200; 6.852671ms) +Jun 18 10:30:59.942: INFO: (7) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:462/proxy/: tls qux (200; 7.041542ms) +Jun 18 10:30:59.942: INFO: (7) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 7.05159ms) +Jun 18 10:30:59.942: INFO: (7) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: test (200; 5.106418ms) +Jun 18 10:30:59.950: INFO: (8) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: test<... (200; 6.033566ms) +Jun 18 10:30:59.951: INFO: (8) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 6.185326ms) +Jun 18 10:30:59.951: INFO: (8) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:1080/proxy/: ... 
(200; 6.285857ms) +Jun 18 10:30:59.951: INFO: (8) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 6.381019ms) +Jun 18 10:30:59.952: INFO: (8) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname2/proxy/: tls qux (200; 7.160685ms) +Jun 18 10:30:59.953: INFO: (8) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname1/proxy/: tls baz (200; 8.137044ms) +Jun 18 10:30:59.953: INFO: (8) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname2/proxy/: bar (200; 8.170502ms) +Jun 18 10:30:59.955: INFO: (8) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname2/proxy/: bar (200; 9.966449ms) +Jun 18 10:30:59.955: INFO: (8) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname1/proxy/: foo (200; 9.921285ms) +Jun 18 10:30:59.959: INFO: (9) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 3.912015ms) +Jun 18 10:30:59.960: INFO: (9) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:1080/proxy/: test<... (200; 5.535454ms) +Jun 18 10:30:59.961: INFO: (9) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx/proxy/: test (200; 5.773203ms) +Jun 18 10:30:59.961: INFO: (9) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:1080/proxy/: ... (200; 5.681064ms) +Jun 18 10:30:59.961: INFO: (9) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 5.888654ms) +Jun 18 10:30:59.961: INFO: (9) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 5.954328ms) +Jun 18 10:30:59.961: INFO: (9) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:460/proxy/: tls baz (200; 5.963149ms) +Jun 18 10:30:59.961: INFO: (9) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 6.088048ms) +Jun 18 10:30:59.961: INFO: (9) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:462/proxy/: tls qux (200; 6.314717ms) +Jun 18 10:30:59.961: INFO: (9) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: ... (200; 4.314783ms) +Jun 18 10:30:59.971: INFO: (10) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:460/proxy/: tls baz (200; 4.391271ms) +Jun 18 10:30:59.972: INFO: (10) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 5.932089ms) +Jun 18 10:30:59.972: INFO: (10) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: test (200; 5.556015ms) +Jun 18 10:30:59.972: INFO: (10) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:462/proxy/: tls qux (200; 5.825407ms) +Jun 18 10:30:59.973: INFO: (10) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname1/proxy/: foo (200; 7.229976ms) +Jun 18 10:30:59.973: INFO: (10) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:1080/proxy/: test<... 
(200; 6.560359ms) +Jun 18 10:30:59.973: INFO: (10) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname2/proxy/: bar (200; 7.318276ms) +Jun 18 10:30:59.974: INFO: (10) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname1/proxy/: foo (200; 7.688107ms) +Jun 18 10:30:59.974: INFO: (10) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname2/proxy/: bar (200; 7.495207ms) +Jun 18 10:30:59.974: INFO: (10) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname2/proxy/: tls qux (200; 7.805575ms) +Jun 18 10:30:59.975: INFO: (10) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname1/proxy/: tls baz (200; 8.581042ms) +Jun 18 10:30:59.979: INFO: (11) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:462/proxy/: tls qux (200; 3.539733ms) +Jun 18 10:30:59.981: INFO: (11) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 5.075203ms) +Jun 18 10:30:59.981: INFO: (11) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: test (200; 5.499673ms) +Jun 18 10:30:59.981: INFO: (11) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 5.500652ms) +Jun 18 10:30:59.981: INFO: (11) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 5.85962ms) +Jun 18 10:30:59.981: INFO: (11) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:460/proxy/: tls baz (200; 6.066387ms) +Jun 18 10:30:59.981: INFO: (11) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:1080/proxy/: test<... (200; 6.210837ms) +Jun 18 10:30:59.982: INFO: (11) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:1080/proxy/: ... (200; 6.335832ms) +Jun 18 10:30:59.983: INFO: (11) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname1/proxy/: foo (200; 7.892814ms) +Jun 18 10:30:59.983: INFO: (11) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname1/proxy/: foo (200; 7.948781ms) +Jun 18 10:30:59.985: INFO: (11) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname1/proxy/: tls baz (200; 9.289063ms) +Jun 18 10:30:59.985: INFO: (11) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname2/proxy/: tls qux (200; 9.24227ms) +Jun 18 10:30:59.985: INFO: (11) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname2/proxy/: bar (200; 9.465644ms) +Jun 18 10:30:59.985: INFO: (11) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname2/proxy/: bar (200; 9.330671ms) +Jun 18 10:30:59.990: INFO: (12) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx/proxy/: test (200; 4.757449ms) +Jun 18 10:30:59.990: INFO: (12) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:1080/proxy/: test<... (200; 4.617748ms) +Jun 18 10:30:59.990: INFO: (12) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 4.806032ms) +Jun 18 10:30:59.990: INFO: (12) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: ... 
(200; 7.265319ms) +Jun 18 10:30:59.992: INFO: (12) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname1/proxy/: foo (200; 7.569837ms) +Jun 18 10:30:59.993: INFO: (12) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname1/proxy/: foo (200; 7.753658ms) +Jun 18 10:30:59.993: INFO: (12) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname2/proxy/: tls qux (200; 7.8176ms) +Jun 18 10:30:59.993: INFO: (12) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname2/proxy/: bar (200; 7.962829ms) +Jun 18 10:30:59.994: INFO: (12) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname2/proxy/: bar (200; 8.909416ms) +Jun 18 10:30:59.994: INFO: (12) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname1/proxy/: tls baz (200; 9.052206ms) +Jun 18 10:31:00.000: INFO: (13) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx/proxy/: test (200; 5.516607ms) +Jun 18 10:31:00.000: INFO: (13) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:460/proxy/: tls baz (200; 5.447879ms) +Jun 18 10:31:00.002: INFO: (13) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname2/proxy/: tls qux (200; 7.996207ms) +Jun 18 10:31:00.003: INFO: (13) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:1080/proxy/: test<... (200; 8.38498ms) +Jun 18 10:31:00.003: INFO: (13) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:462/proxy/: tls qux (200; 8.479495ms) +Jun 18 10:31:00.003: INFO: (13) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 8.514242ms) +Jun 18 10:31:00.003: INFO: (13) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:1080/proxy/: ... (200; 8.619046ms) +Jun 18 10:31:00.003: INFO: (13) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 8.560574ms) +Jun 18 10:31:00.003: INFO: (13) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 8.602893ms) +Jun 18 10:31:00.003: INFO: (13) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 8.771467ms) +Jun 18 10:31:00.003: INFO: (13) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: ... (200; 5.194499ms) +Jun 18 10:31:00.010: INFO: (14) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 5.351958ms) +Jun 18 10:31:00.010: INFO: (14) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 5.220061ms) +Jun 18 10:31:00.010: INFO: (14) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 6.027883ms) +Jun 18 10:31:00.010: INFO: (14) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: test (200; 5.220354ms) +Jun 18 10:31:00.011: INFO: (14) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 5.896677ms) +Jun 18 10:31:00.011: INFO: (14) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:1080/proxy/: test<... 
(200; 5.419646ms) +Jun 18 10:31:00.012: INFO: (14) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname2/proxy/: tls qux (200; 7.839795ms) +Jun 18 10:31:00.014: INFO: (14) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname1/proxy/: foo (200; 9.228165ms) +Jun 18 10:31:00.014: INFO: (14) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname1/proxy/: tls baz (200; 8.622473ms) +Jun 18 10:31:00.014: INFO: (14) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname1/proxy/: foo (200; 9.184998ms) +Jun 18 10:31:00.014: INFO: (14) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname2/proxy/: bar (200; 9.069207ms) +Jun 18 10:31:00.014: INFO: (14) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname2/proxy/: bar (200; 9.147766ms) +Jun 18 10:31:00.034: INFO: (15) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 20.578351ms) +Jun 18 10:31:00.036: INFO: (15) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:1080/proxy/: ... (200; 21.502103ms) +Jun 18 10:31:00.036: INFO: (15) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 21.992125ms) +Jun 18 10:31:00.036: INFO: (15) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 21.698002ms) +Jun 18 10:31:00.036: INFO: (15) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx/proxy/: test (200; 21.491736ms) +Jun 18 10:31:00.036: INFO: (15) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:1080/proxy/: test<... (200; 21.471058ms) +Jun 18 10:31:00.036: INFO: (15) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:462/proxy/: tls qux (200; 21.847743ms) +Jun 18 10:31:00.036: INFO: (15) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:460/proxy/: tls baz (200; 22.324172ms) +Jun 18 10:31:00.037: INFO: (15) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: ... (200; 5.548052ms) +Jun 18 10:31:00.046: INFO: (16) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx/proxy/: test (200; 6.395322ms) +Jun 18 10:31:00.046: INFO: (16) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:462/proxy/: tls qux (200; 6.591369ms) +Jun 18 10:31:00.046: INFO: (16) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 5.96271ms) +Jun 18 10:31:00.046: INFO: (16) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:460/proxy/: tls baz (200; 5.853599ms) +Jun 18 10:31:00.047: INFO: (16) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 5.905ms) +Jun 18 10:31:00.047: INFO: (16) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:1080/proxy/: test<... (200; 6.83168ms) +Jun 18 10:31:00.047: INFO: (16) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: ... (200; 6.13995ms) +Jun 18 10:31:00.056: INFO: (17) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:462/proxy/: tls qux (200; 6.097654ms) +Jun 18 10:31:00.056: INFO: (17) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 6.176561ms) +Jun 18 10:31:00.056: INFO: (17) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:460/proxy/: tls baz (200; 6.330484ms) +Jun 18 10:31:00.056: INFO: (17) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:1080/proxy/: test<... 
(200; 6.189721ms) +Jun 18 10:31:00.056: INFO: (17) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx/proxy/: test (200; 6.270638ms) +Jun 18 10:31:00.056: INFO: (17) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: test (200; 5.729291ms) +Jun 18 10:31:00.064: INFO: (18) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 5.554964ms) +Jun 18 10:31:00.064: INFO: (18) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:462/proxy/: tls qux (200; 5.725782ms) +Jun 18 10:31:00.065: INFO: (18) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: ... (200; 7.04773ms) +Jun 18 10:31:00.065: INFO: (18) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:1080/proxy/: test<... (200; 7.079049ms) +Jun 18 10:31:00.065: INFO: (18) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 6.983857ms) +Jun 18 10:31:00.065: INFO: (18) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 7.207192ms) +Jun 18 10:31:00.065: INFO: (18) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 7.114267ms) +Jun 18 10:31:00.067: INFO: (18) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname1/proxy/: tls baz (200; 8.430127ms) +Jun 18 10:31:00.067: INFO: (18) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname1/proxy/: foo (200; 8.677217ms) +Jun 18 10:31:00.067: INFO: (18) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname2/proxy/: bar (200; 8.398927ms) +Jun 18 10:31:00.067: INFO: (18) /api/v1/namespaces/proxy-3698/services/http:proxy-service-x4js8:portname2/proxy/: bar (200; 8.396217ms) +Jun 18 10:31:00.067: INFO: (18) /api/v1/namespaces/proxy-3698/services/proxy-service-x4js8:portname1/proxy/: foo (200; 8.340613ms) +Jun 18 10:31:00.067: INFO: (18) /api/v1/namespaces/proxy-3698/services/https:proxy-service-x4js8:tlsportname2/proxy/: tls qux (200; 8.475861ms) +Jun 18 10:31:00.070: INFO: (19) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx/proxy/: test (200; 3.535238ms) +Jun 18 10:31:00.071: INFO: (19) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:1080/proxy/: ... (200; 4.419162ms) +Jun 18 10:31:00.073: INFO: (19) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 5.866837ms) +Jun 18 10:31:00.073: INFO: (19) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:460/proxy/: tls baz (200; 6.0503ms) +Jun 18 10:31:00.073: INFO: (19) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:160/proxy/: foo (200; 6.066143ms) +Jun 18 10:31:00.073: INFO: (19) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:462/proxy/: tls qux (200; 6.076488ms) +Jun 18 10:31:00.073: INFO: (19) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 6.255475ms) +Jun 18 10:31:00.073: INFO: (19) /api/v1/namespaces/proxy-3698/pods/http:proxy-service-x4js8-dpbtx:162/proxy/: bar (200; 6.242436ms) +Jun 18 10:31:00.073: INFO: (19) /api/v1/namespaces/proxy-3698/pods/proxy-service-x4js8-dpbtx:1080/proxy/: test<... 
(200; 6.375959ms) +Jun 18 10:31:00.073: INFO: (19) /api/v1/namespaces/proxy-3698/pods/https:proxy-service-x4js8-dpbtx:443/proxy/: >> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl label + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1108 +STEP: creating the pod +Jun 18 10:31:13.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 create -f - --namespace=kubectl-1936' +Jun 18 10:31:13.985: INFO: stderr: "" +Jun 18 10:31:13.985: INFO: stdout: "pod/pause created\n" +Jun 18 10:31:13.985: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Jun 18 10:31:13.986: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1936" to be "running and ready" +Jun 18 10:31:13.995: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.888656ms +Jun 18 10:31:16.000: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.014121183s +Jun 18 10:31:16.000: INFO: Pod "pause" satisfied condition "running and ready" +Jun 18 10:31:16.000: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] +[It] should update the label on a resource [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: adding the label testing-label with value testing-label-value to a pod +Jun 18 10:31:16.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 label pods pause testing-label=testing-label-value --namespace=kubectl-1936' +Jun 18 10:31:16.071: INFO: stderr: "" +Jun 18 10:31:16.071: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value +Jun 18 10:31:16.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pod pause -L testing-label --namespace=kubectl-1936' +Jun 18 10:31:16.138: INFO: stderr: "" +Jun 18 10:31:16.138: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s testing-label-value\n" +STEP: removing the label testing-label of a pod +Jun 18 10:31:16.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 label pods pause testing-label- --namespace=kubectl-1936' +Jun 18 10:31:16.210: INFO: stderr: "" +Jun 18 10:31:16.210: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod doesn't have the label testing-label +Jun 18 10:31:16.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pod pause -L testing-label --namespace=kubectl-1936' +Jun 18 10:31:16.273: INFO: stderr: "" +Jun 18 10:31:16.273: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n" +[AfterEach] [k8s.io] Kubectl label + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1115 +STEP: using delete to clean up resources +Jun 18 10:31:16.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete --grace-period=0 --force -f - 
--namespace=kubectl-1936' +Jun 18 10:31:16.349: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 18 10:31:16.349: INFO: stdout: "pod \"pause\" force deleted\n" +Jun 18 10:31:16.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get rc,svc -l name=pause --no-headers --namespace=kubectl-1936' +Jun 18 10:31:16.429: INFO: stderr: "No resources found.\n" +Jun 18 10:31:16.429: INFO: stdout: "" +Jun 18 10:31:16.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -l name=pause --namespace=kubectl-1936 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jun 18 10:31:16.495: INFO: stderr: "" +Jun 18 10:31:16.495: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:31:16.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1936" for this suite. +Jun 18 10:31:22.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:31:22.634: INFO: namespace kubectl-1936 deletion completed in 6.13468847s + +• [SLOW TEST:8.870 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl label + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should update the label on a resource [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] ReplicationController + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:31:22.634: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating replication controller my-hostname-basic-37bb7871-91b4-11e9-8aef-6ab77b36fff7 +Jun 18 10:31:22.680: INFO: Pod name my-hostname-basic-37bb7871-91b4-11e9-8aef-6ab77b36fff7: Found 0 pods out of 1 +Jun 18 10:31:27.684: INFO: Pod name my-hostname-basic-37bb7871-91b4-11e9-8aef-6ab77b36fff7: Found 1 pods out of 1 +Jun 18 10:31:27.684: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-37bb7871-91b4-11e9-8aef-6ab77b36fff7" are running +Jun 18 
10:31:27.687: INFO: Pod "my-hostname-basic-37bb7871-91b4-11e9-8aef-6ab77b36fff7-m8mkr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-18 10:31:22 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-18 10:31:24 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-18 10:31:24 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-18 10:31:22 +0000 UTC Reason: Message:}]) +Jun 18 10:31:27.687: INFO: Trying to dial the pod +Jun 18 10:31:32.698: INFO: Controller my-hostname-basic-37bb7871-91b4-11e9-8aef-6ab77b36fff7: Got expected result from replica 1 [my-hostname-basic-37bb7871-91b4-11e9-8aef-6ab77b36fff7-m8mkr]: "my-hostname-basic-37bb7871-91b4-11e9-8aef-6ab77b36fff7-m8mkr", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:31:32.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-9959" for this suite. +Jun 18 10:31:38.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:31:38.831: INFO: namespace replication-controller-9959 deletion completed in 6.129180501s + +• [SLOW TEST:16.197 seconds] +[sig-apps] ReplicationController +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:31:38.831: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete pods created by rc when not orphaning [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the rc +STEP: delete the rc +STEP: wait for all pods to be garbage collected +STEP: Gathering metrics +W0618 10:31:48.898419 14 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
+Jun 18 10:31:48.898: INFO: For apiserver_request_total: +For apiserver_request_latencies_summary: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:31:48.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-7301" for this suite. +Jun 18 10:31:54.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:31:55.020: INFO: namespace gc-7301 deletion completed in 6.118378967s + +• [SLOW TEST:16.189 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should delete pods created by rc when not orphaning [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run rc + should create an rc from an image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:31:55.020: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl run rc + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354 +[It] should create an rc from an image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 18 10:31:55.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 
--namespace=kubectl-5615' +Jun 18 10:31:55.139: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Jun 18 10:31:55.139: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" +STEP: verifying the rc e2e-test-nginx-rc was created +STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created +STEP: confirm that you can get logs from an rc +Jun 18 10:31:55.156: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-nrjsv] +Jun 18 10:31:55.156: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-nrjsv" in namespace "kubectl-5615" to be "running and ready" +Jun 18 10:31:55.170: INFO: Pod "e2e-test-nginx-rc-nrjsv": Phase="Pending", Reason="", readiness=false. Elapsed: 13.745233ms +Jun 18 10:31:57.174: INFO: Pod "e2e-test-nginx-rc-nrjsv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017460778s +Jun 18 10:31:59.178: INFO: Pod "e2e-test-nginx-rc-nrjsv": Phase="Running", Reason="", readiness=true. Elapsed: 4.021325739s +Jun 18 10:31:59.178: INFO: Pod "e2e-test-nginx-rc-nrjsv" satisfied condition "running and ready" +Jun 18 10:31:59.178: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-nrjsv] +Jun 18 10:31:59.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 logs rc/e2e-test-nginx-rc --namespace=kubectl-5615' +Jun 18 10:31:59.258: INFO: stderr: "" +Jun 18 10:31:59.258: INFO: stdout: "" +[AfterEach] [k8s.io] Kubectl run rc + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359 +Jun 18 10:31:59.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete rc e2e-test-nginx-rc --namespace=kubectl-5615' +Jun 18 10:31:59.328: INFO: stderr: "" +Jun 18 10:31:59.328: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:31:59.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5615" for this suite. 
+Jun 18 10:32:05.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:32:05.452: INFO: namespace kubectl-5615 deletion completed in 6.119818434s + +• [SLOW TEST:10.432 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl run rc + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create an rc from an image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:32:05.452: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0644 on node default medium +Jun 18 10:32:05.495: INFO: Waiting up to 5m0s for pod "pod-5140c7cd-91b4-11e9-8aef-6ab77b36fff7" in namespace "emptydir-890" to be "success or failure" +Jun 18 10:32:05.498: INFO: Pod "pod-5140c7cd-91b4-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.868349ms +Jun 18 10:32:07.502: INFO: Pod "pod-5140c7cd-91b4-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007775564s +STEP: Saw pod success +Jun 18 10:32:07.502: INFO: Pod "pod-5140c7cd-91b4-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:32:07.505: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-5140c7cd-91b4-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 10:32:07.527: INFO: Waiting for pod pod-5140c7cd-91b4-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:32:07.530: INFO: Pod pod-5140c7cd-91b4-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:32:07.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-890" for this suite. 
+Jun 18 10:32:13.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:32:13.655: INFO: namespace emptydir-890 deletion completed in 6.12171211s + +• [SLOW TEST:8.203 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:32:13.655: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace statefulset-578 +[It] Burst scaling should run to completion even with unhealthy pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating stateful set ss in namespace statefulset-578 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-578 +Jun 18 10:32:13.707: INFO: Found 0 stateful pods, waiting for 1 +Jun 18 10:32:23.712: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod +Jun 18 10:32:23.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-578 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 18 10:32:23.918: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 18 10:32:23.918: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 18 10:32:23.918: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 18 10:32:23.922: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Jun 18 10:32:33.926: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jun 18 10:32:33.926: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 18 10:32:33.941: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 18 10:32:33.941: INFO: ss-0 
ip-172-26-16-178 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC }] +Jun 18 10:32:33.941: INFO: +Jun 18 10:32:33.941: INFO: StatefulSet ss has not reached scale 3, at 1 +Jun 18 10:32:34.945: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996855814s +Jun 18 10:32:35.949: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992940194s +Jun 18 10:32:36.953: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988612382s +Jun 18 10:32:37.958: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.98409478s +Jun 18 10:32:38.962: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.980015316s +Jun 18 10:32:39.966: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.975831912s +Jun 18 10:32:40.970: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.971808755s +Jun 18 10:32:41.974: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.967469331s +Jun 18 10:32:42.979: INFO: Verifying statefulset ss doesn't scale past 3 for another 963.427778ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-578 +Jun 18 10:32:43.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-578 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 18 10:32:44.191: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" +Jun 18 10:32:44.191: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 18 10:32:44.191: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 18 10:32:44.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-578 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 18 10:32:44.426: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Jun 18 10:32:44.426: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 18 10:32:44.426: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 18 10:32:44.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-578 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 18 10:32:44.629: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Jun 18 10:32:44.630: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 18 10:32:44.630: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 18 10:32:44.633: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Jun 18 10:32:44.633: INFO: Waiting for pod ss-1 to enter Running - 
Ready=true, currently Running - Ready=true +Jun 18 10:32:44.633: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod +Jun 18 10:32:44.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-578 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 18 10:32:44.839: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 18 10:32:44.839: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 18 10:32:44.839: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 18 10:32:44.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-578 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 18 10:32:45.071: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 18 10:32:45.071: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 18 10:32:45.071: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 18 10:32:45.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-578 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 18 10:32:45.297: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 18 10:32:45.297: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 18 10:32:45.297: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 18 10:32:45.297: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 18 10:32:45.300: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 +Jun 18 10:32:55.308: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jun 18 10:32:55.308: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Jun 18 10:32:55.308: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Jun 18 10:32:55.318: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 18 10:32:55.318: INFO: ss-0 ip-172-26-16-178 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC }] +Jun 18 10:32:55.318: INFO: ss-1 ip-172-26-17-1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:32:55.318: INFO: ss-2 ip-172-26-30-38 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 
10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:32:55.318: INFO: +Jun 18 10:32:55.318: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 18 10:32:56.322: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 18 10:32:56.322: INFO: ss-0 ip-172-26-16-178 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC }] +Jun 18 10:32:56.322: INFO: ss-1 ip-172-26-17-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:32:56.322: INFO: ss-2 ip-172-26-30-38 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:32:56.322: INFO: +Jun 18 10:32:56.322: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 18 10:32:57.327: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 18 10:32:57.327: INFO: ss-0 ip-172-26-16-178 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC }] +Jun 18 10:32:57.327: INFO: ss-1 ip-172-26-17-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:32:57.327: INFO: ss-2 ip-172-26-30-38 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 
UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:32:57.327: INFO: +Jun 18 10:32:57.327: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 18 10:32:58.331: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 18 10:32:58.331: INFO: ss-0 ip-172-26-16-178 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC }] +Jun 18 10:32:58.331: INFO: ss-1 ip-172-26-17-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:32:58.331: INFO: ss-2 ip-172-26-30-38 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:32:58.331: INFO: +Jun 18 10:32:58.331: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 18 10:32:59.335: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 18 10:32:59.335: INFO: ss-0 ip-172-26-16-178 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC }] +Jun 18 10:32:59.335: INFO: ss-1 ip-172-26-17-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:32:59.335: INFO: ss-2 ip-172-26-30-38 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:32:59.336: INFO: +Jun 18 10:32:59.336: INFO: StatefulSet ss has not reached 
scale 0, at 3 +Jun 18 10:33:00.339: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 18 10:33:00.339: INFO: ss-0 ip-172-26-16-178 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC }] +Jun 18 10:33:00.339: INFO: ss-1 ip-172-26-17-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:33:00.339: INFO: ss-2 ip-172-26-30-38 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:33:00.339: INFO: +Jun 18 10:33:00.339: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 18 10:33:01.347: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 18 10:33:01.347: INFO: ss-0 ip-172-26-16-178 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC }] +Jun 18 10:33:01.347: INFO: ss-1 ip-172-26-17-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:33:01.347: INFO: ss-2 ip-172-26-30-38 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:33:01.347: INFO: +Jun 18 10:33:01.347: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 18 10:33:02.351: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 18 10:33:02.351: INFO: ss-0 ip-172-26-16-178 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC }] +Jun 18 10:33:02.351: INFO: ss-1 ip-172-26-17-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:33:02.351: INFO: ss-2 ip-172-26-30-38 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:33:02.351: INFO: +Jun 18 10:33:02.351: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 18 10:33:03.355: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 18 10:33:03.355: INFO: ss-0 ip-172-26-16-178 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC }] +Jun 18 10:33:03.355: INFO: ss-1 ip-172-26-17-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:33:03.355: INFO: ss-2 ip-172-26-30-38 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }] +Jun 18 10:33:03.355: INFO: +Jun 18 10:33:03.355: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 18 10:33:04.359: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 18 10:33:04.359: INFO: ss-0 ip-172-26-16-178 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers 
with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:13 +0000 UTC }]
+Jun 18 10:33:04.359: INFO: ss-1 ip-172-26-17-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }]
+Jun 18 10:33:04.359: INFO: ss-2 ip-172-26-30-38 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:32:33 +0000 UTC }]
+Jun 18 10:33:04.359: INFO:
+Jun 18 10:33:04.359: INFO: StatefulSet ss has not reached scale 0, at 3
+STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-578
+Jun 18 10:33:05.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-578 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 18 10:33:05.453: INFO: rc: 1
+Jun 18 10:33:05.453: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-578 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx")
+ [] 0xc00290c930 exit status 1 true [0xc002e72178 0xc002e72190 0xc002e721a8] [0xc002e72178 0xc002e72190 0xc002e721a8] [0xc002e72188 0xc002e721a0] [0x9c00a0 0x9c00a0] 0xc003040c00 }:
+Command stdout:
+
+stderr:
+error: unable to upgrade connection: container not found ("nginx")
+
+error:
+exit status 1
+
+Jun 18 10:33:15.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-578 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 18 10:33:15.513: INFO: rc: 1
+Jun 18 10:33:15.513: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-578 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found
+ [] 0xc002582750 exit status 1 true [0xc001bb8ed8 0xc001bb8ef0 0xc001bb8f08] [0xc001bb8ed8 0xc001bb8ef0 0xc001bb8f08] [0xc001bb8ee8 0xc001bb8f00] [0x9c00a0 0x9c00a0] 0xc002c786c0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
[28 further identical RunHostCmd retries, every 10s from 10:33:25 through 10:37:57, omitted; each attempt returned rc: 1 with stderr "Error from server (NotFound): pods "ss-0" not found"]
+Jun 18 10:38:07.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-578 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 18 10:38:07.249: INFO: rc: 1
+Jun 18 10:38:07.249: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
+Jun 18 10:38:07.249: INFO: Scaling statefulset ss to 0
+Jun 18 10:38:07.259: INFO: Waiting for statefulset status.replicas updated to 0
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
+Jun 18 10:38:07.262: INFO: Deleting all statefulset in ns statefulset-578
+Jun 18 10:38:07.265: INFO: Scaling statefulset ss to 0
+Jun 18 10:38:07.273: INFO: Waiting for statefulset status.replicas updated to 0
+Jun 18 10:38:07.277: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 18 10:38:07.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-578" for this suite.
+Jun 18 10:38:13.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:38:13.415: INFO: namespace statefulset-578 deletion completed in 6.122171673s + +• [SLOW TEST:359.760 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + Burst scaling should run to completion even with unhealthy pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] [sig-node] PreStop + should call prestop when killing a pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] [sig-node] PreStop + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:38:13.415: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename prestop +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] [sig-node] PreStop + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 +[It] should call prestop when killing a pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating server pod server in namespace prestop-7260 +STEP: Waiting for pods to come up. +STEP: Creating tester pod tester in namespace prestop-7260 +STEP: Deleting pre-stop pod +Jun 18 10:38:24.499: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true +} +STEP: Deleting the server pod +[AfterEach] [k8s.io] [sig-node] PreStop + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:38:24.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "prestop-7260" for this suite. 
+Jun 18 10:39:02.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:39:02.646: INFO: namespace prestop-7260 deletion completed in 38.135025939s + +• [SLOW TEST:49.231 seconds] +[k8s.io] [sig-node] PreStop +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should call prestop when killing a pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should be submitted and removed [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:39:02.646: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should be submitted and removed [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +STEP: setting up watch +STEP: submitting the pod to kubernetes +Jun 18 10:39:02.690: INFO: observed the pod list +STEP: verifying the pod is in kubernetes +STEP: verifying pod creation was observed +STEP: deleting the pod gracefully +STEP: verifying the kubelet observed the termination notice +STEP: verifying pod deletion was observed +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:39:17.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9162" for this suite. 
+Jun 18 10:39:23.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:39:23.761: INFO: namespace pods-9162 deletion completed in 6.125147632s + +• [SLOW TEST:21.115 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be submitted and removed [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:39:23.762: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name secret-test-map-5681ad90-91b5-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume secrets +Jun 18 10:39:23.809: INFO: Waiting up to 5m0s for pod "pod-secrets-5682b9a3-91b5-11e9-8aef-6ab77b36fff7" in namespace "secrets-4484" to be "success or failure" +Jun 18 10:39:23.814: INFO: Pod "pod-secrets-5682b9a3-91b5-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.622901ms +Jun 18 10:39:25.819: INFO: Pod "pod-secrets-5682b9a3-91b5-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009718642s +Jun 18 10:39:27.822: INFO: Pod "pod-secrets-5682b9a3-91b5-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013403733s +STEP: Saw pod success +Jun 18 10:39:27.822: INFO: Pod "pod-secrets-5682b9a3-91b5-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:39:27.825: INFO: Trying to get logs from node ip-172-26-16-178 pod pod-secrets-5682b9a3-91b5-11e9-8aef-6ab77b36fff7 container secret-volume-test: +STEP: delete the pod +Jun 18 10:39:27.849: INFO: Waiting for pod pod-secrets-5682b9a3-91b5-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:39:27.853: INFO: Pod pod-secrets-5682b9a3-91b5-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:39:27.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-4484" for this suite. 
+Jun 18 10:39:33.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:39:33.984: INFO: namespace secrets-4484 deletion completed in 6.127152076s + +• [SLOW TEST:10.222 seconds] +[sig-storage] Secrets +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:39:33.984: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 +Jun 18 10:39:34.020: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jun 18 10:39:34.026: INFO: Waiting for terminating namespaces to be deleted... 
+Jun 18 10:39:34.029: INFO: +Logging pods the kubelet thinks is on node ip-172-26-16-178 before test +Jun 18 10:39:34.036: INFO: rke-network-plugin-deploy-job-c76n7 from kube-system started at 2019-06-18 08:30:17 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.036: INFO: Container rke-network-plugin-pod ready: false, restart count 0 +Jun 18 10:39:34.036: INFO: canal-kwvpm from kube-system started at 2019-06-18 08:30:20 +0000 UTC (2 container statuses recorded) +Jun 18 10:39:34.036: INFO: Container calico-node ready: true, restart count 0 +Jun 18 10:39:34.036: INFO: Container kube-flannel ready: true, restart count 0 +Jun 18 10:39:34.036: INFO: coredns-86bc4b7c96-vms9l from kube-system started at 2019-06-18 08:30:28 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.036: INFO: Container coredns ready: true, restart count 0 +Jun 18 10:39:34.036: INFO: nginx-ingress-controller-x7drh from ingress-nginx started at 2019-06-18 08:30:34 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.036: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Jun 18 10:39:34.036: INFO: kube-api-auth-9nzcl from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.036: INFO: Container kube-api-auth ready: true, restart count 0 +Jun 18 10:39:34.036: INFO: sonobuoy-systemd-logs-daemon-set-29df6a374df24ffa-xvczp from heptio-sonobuoy started at 2019-06-18 10:27:18 +0000 UTC (2 container statuses recorded) +Jun 18 10:39:34.036: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 18 10:39:34.036: INFO: Container systemd-logs ready: true, restart count 0 +Jun 18 10:39:34.036: INFO: rke-ingress-controller-deploy-job-697mh from kube-system started at 2019-06-18 08:30:32 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.036: INFO: Container rke-ingress-controller-pod ready: false, restart count 0 +Jun 18 10:39:34.036: INFO: cattle-node-agent-pk4wc from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.036: INFO: Container agent ready: true, restart count 0 +Jun 18 10:39:34.036: INFO: sonobuoy-e2e-job-10fdfd8dfec5439f from heptio-sonobuoy started at 2019-06-18 10:27:18 +0000 UTC (2 container statuses recorded) +Jun 18 10:39:34.036: INFO: Container e2e ready: true, restart count 0 +Jun 18 10:39:34.036: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 18 10:39:34.036: INFO: rke-coredns-addon-deploy-job-4b9ct from kube-system started at 2019-06-18 08:30:22 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.036: INFO: Container rke-coredns-addon-pod ready: false, restart count 0 +Jun 18 10:39:34.036: INFO: rke-metrics-addon-deploy-job-f4q28 from kube-system started at 2019-06-18 08:30:27 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.036: INFO: Container rke-metrics-addon-pod ready: false, restart count 0 +Jun 18 10:39:34.036: INFO: coredns-autoscaler-5d5d49b8ff-7v6zn from kube-system started at 2019-06-18 08:30:29 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.036: INFO: Container autoscaler ready: true, restart count 0 +Jun 18 10:39:34.036: INFO: +Logging pods the kubelet thinks is on node ip-172-26-17-1 before test +Jun 18 10:39:34.043: INFO: kube-api-auth-6mld7 from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.043: INFO: Container kube-api-auth ready: true, restart count 0 +Jun 18 10:39:34.043: INFO: cattle-cluster-agent-6b589fd864-hhp9v from 
cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.043: INFO: Container cluster-register ready: true, restart count 0 +Jun 18 10:39:34.043: INFO: cattle-node-agent-8k6f2 from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.043: INFO: Container agent ready: true, restart count 0 +Jun 18 10:39:34.043: INFO: sonobuoy-systemd-logs-daemon-set-29df6a374df24ffa-j5st8 from heptio-sonobuoy started at 2019-06-18 10:27:18 +0000 UTC (2 container statuses recorded) +Jun 18 10:39:34.043: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 18 10:39:34.043: INFO: Container systemd-logs ready: true, restart count 0 +Jun 18 10:39:34.043: INFO: nginx-ingress-controller-98bp5 from ingress-nginx started at 2019-06-18 08:30:37 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.043: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Jun 18 10:39:34.043: INFO: sonobuoy from heptio-sonobuoy started at 2019-06-18 10:27:17 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.043: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jun 18 10:39:34.043: INFO: canal-9q452 from kube-system started at 2019-06-18 08:30:20 +0000 UTC (2 container statuses recorded) +Jun 18 10:39:34.043: INFO: Container calico-node ready: true, restart count 0 +Jun 18 10:39:34.043: INFO: Container kube-flannel ready: true, restart count 0 +Jun 18 10:39:34.043: INFO: +Logging pods the kubelet thinks is on node ip-172-26-30-38 before test +Jun 18 10:39:34.048: INFO: default-http-backend-5954bd5d8c-t6btz from ingress-nginx started at 2019-06-18 08:30:34 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.048: INFO: Container default-http-backend ready: true, restart count 0 +Jun 18 10:39:34.048: INFO: cattle-node-agent-pk6fl from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.048: INFO: Container agent ready: true, restart count 0 +Jun 18 10:39:34.048: INFO: canal-wnpt7 from kube-system started at 2019-06-18 08:30:20 +0000 UTC (2 container statuses recorded) +Jun 18 10:39:34.048: INFO: Container calico-node ready: true, restart count 0 +Jun 18 10:39:34.048: INFO: Container kube-flannel ready: true, restart count 0 +Jun 18 10:39:34.048: INFO: metrics-server-7f6bd4c888-n4w2m from kube-system started at 2019-06-18 08:30:29 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.048: INFO: Container metrics-server ready: true, restart count 0 +Jun 18 10:39:34.048: INFO: nginx-ingress-controller-nqmmq from ingress-nginx started at 2019-06-18 08:30:34 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.048: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Jun 18 10:39:34.048: INFO: kube-api-auth-j5sk5 from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 10:39:34.048: INFO: Container kube-api-auth ready: true, restart count 0 +Jun 18 10:39:34.048: INFO: sonobuoy-systemd-logs-daemon-set-29df6a374df24ffa-mmnqj from heptio-sonobuoy started at 2019-06-18 10:27:18 +0000 UTC (2 container statuses recorded) +Jun 18 10:39:34.048: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 18 10:39:34.048: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that NodeSelector is respected if not matching [Conformance] + 
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Trying to schedule Pod with nonempty NodeSelector. +STEP: Considering event: +Type = [Warning], Name = [restricted-pod.15a944c8906e9c6b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:39:35.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-6868" for this suite. +Jun 18 10:39:41.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:39:41.201: INFO: namespace sched-pred-6868 deletion completed in 6.126644168s +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 + +• [SLOW TEST:7.217 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 + validates that NodeSelector is respected if not matching [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:39:41.201: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 10:39:41.252: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60e7ca1d-91b5-11e9-8aef-6ab77b36fff7" in namespace "downward-api-8745" to be "success or failure" +Jun 18 10:39:41.255: INFO: Pod "downwardapi-volume-60e7ca1d-91b5-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.964542ms +Jun 18 10:39:43.259: INFO: Pod "downwardapi-volume-60e7ca1d-91b5-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00710218s +STEP: Saw pod success +Jun 18 10:39:43.259: INFO: Pod "downwardapi-volume-60e7ca1d-91b5-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:39:43.262: INFO: Trying to get logs from node ip-172-26-30-38 pod downwardapi-volume-60e7ca1d-91b5-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 10:39:43.285: INFO: Waiting for pod downwardapi-volume-60e7ca1d-91b5-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:39:43.288: INFO: Pod downwardapi-volume-60e7ca1d-91b5-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:39:43.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8745" for this suite. +Jun 18 10:39:49.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:39:49.417: INFO: namespace downward-api-8745 deletion completed in 6.123489424s + +• [SLOW TEST:8.216 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support rollover [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:39:49.418: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] deployment should support rollover [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 10:39:49.462: INFO: Pod name rollover-pod: Found 0 pods out of 1 +Jun 18 10:39:54.466: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Jun 18 10:39:54.466: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Jun 18 10:39:56.470: INFO: Creating deployment "test-rollover-deployment" +Jun 18 10:39:56.477: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Jun 18 10:39:58.484: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Jun 18 10:39:58.489: INFO: Ensure that both replica sets have 1 created replica +Jun 18 10:39:58.495: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new 
image update
+Jun 18 10:39:58.504: INFO: Updating deployment test-rollover-deployment
+Jun 18 10:39:58.504: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
+Jun 18 10:40:00.511: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
+Jun 18 10:40:00.517: INFO: Make sure deployment "test-rollover-deployment" is complete
+Jun 18 10:40:00.524: INFO: all replica sets need to contain the pod-template-hash label
+Jun 18 10:40:00.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696451196, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696451196, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696451200, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696451196, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-766b4d6c9d\" is progressing."}}, CollisionCount:(*int32)(nil)}
[4 identical poll iterations at 10:40:02.531, 10:40:04.531, 10:40:06.531, and 10:40:08.531 omitted; each repeated "all replica sets need to contain the pod-template-hash label" with an unchanged deployment status]
+Jun 18 10:40:10.532: INFO:
+Jun 18 10:40:10.532: INFO: Ensure that both old replica sets have no replicas
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
+Jun 18 10:40:10.540: INFO: Deployment "test-rollover-deployment":
+&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-1284,SelfLink:/apis/apps/v1/namespaces/deployment-1284/deployments/test-rollover-deployment,UID:69fbb7b3-91b5-11e9-8d87-0a902858a792,ResourceVersion:29154,Generation:2,CreationTimestamp:2019-06-18 10:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name:
rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-06-18 10:39:56 +0000 UTC 2019-06-18 10:39:56 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-06-18 10:40:10 +0000 UTC 2019-06-18 10:39:56 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-766b4d6c9d" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} + +Jun 18 10:40:10.544: INFO: New ReplicaSet "test-rollover-deployment-766b4d6c9d" of Deployment "test-rollover-deployment": +&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-766b4d6c9d,GenerateName:,Namespace:deployment-1284,SelfLink:/apis/apps/v1/namespaces/deployment-1284/replicasets/test-rollover-deployment-766b4d6c9d,UID:6b327179-91b5-11e9-8999-0a07e7e61ed8,ResourceVersion:29143,Generation:2,CreationTimestamp:2019-06-18 10:39:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 766b4d6c9d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 69fbb7b3-91b5-11e9-8d87-0a902858a792 0xc0025ad0e7 0xc0025ad0e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 766b4d6c9d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 766b4d6c9d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} +Jun 18 10:40:10.544: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Jun 18 10:40:10.544: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-1284,SelfLink:/apis/apps/v1/namespaces/deployment-1284/replicasets/test-rollover-controller,UID:65ccdc11-91b5-11e9-8d87-0a902858a792,ResourceVersion:29152,Generation:2,CreationTimestamp:2019-06-18 10:39:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 69fbb7b3-91b5-11e9-8d87-0a902858a792 0xc0025acf37 0xc0025acf38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 18 10:40:10.544: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-6455657675,GenerateName:,Namespace:deployment-1284,SelfLink:/apis/apps/v1/namespaces/deployment-1284/replicasets/test-rollover-deployment-6455657675,UID:69fe5035-91b5-11e9-8999-0a07e7e61ed8,ResourceVersion:29111,Generation:2,CreationTimestamp:2019-06-18 10:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6455657675,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 69fbb7b3-91b5-11e9-8d87-0a902858a792 0xc0025ad007 0xc0025ad008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6455657675,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6455657675,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 18 10:40:10.547: INFO: Pod "test-rollover-deployment-766b4d6c9d-jzxp6" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-766b4d6c9d-jzxp6,GenerateName:test-rollover-deployment-766b4d6c9d-,Namespace:deployment-1284,SelfLink:/api/v1/namespaces/deployment-1284/pods/test-rollover-deployment-766b4d6c9d-jzxp6,UID:6b36b563-91b5-11e9-8999-0a07e7e61ed8,ResourceVersion:29123,Generation:0,CreationTimestamp:2019-06-18 10:39:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 766b4d6c9d,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.1.123/32,},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-766b4d6c9d 6b327179-91b5-11e9-8999-0a07e7e61ed8 0xc0025adc17 0xc0025adc18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bql74 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-bql74,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-bql74 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-30-38,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025adc90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025adcb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:39:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:40:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:40:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:39:58 +0000 UTC }],Message:,Reason:,HostIP:172.26.30.38,PodIP:10.42.1.123,StartTime:2019-06-18 10:39:58 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-06-18 10:39:59 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://62862ad7d78e56cbe150241253518c787ee848bf10ce794321c6eed105e8b89c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:40:10.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-1284" for this suite. 
+Jun 18 10:40:16.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:40:16.674: INFO: namespace deployment-1284 deletion completed in 6.123299882s + +• [SLOW TEST:27.256 seconds] +[sig-apps] Deployment +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + deployment should support rollover [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:40:16.675: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 10:40:16.719: INFO: Waiting up to 5m0s for pod "downwardapi-volume-760ba5db-91b5-11e9-8aef-6ab77b36fff7" in namespace "downward-api-7076" to be "success or failure" +Jun 18 10:40:16.725: INFO: Pod "downwardapi-volume-760ba5db-91b5-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.662573ms +Jun 18 10:40:18.729: INFO: Pod "downwardapi-volume-760ba5db-91b5-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00997997s +STEP: Saw pod success +Jun 18 10:40:18.730: INFO: Pod "downwardapi-volume-760ba5db-91b5-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:40:18.732: INFO: Trying to get logs from node ip-172-26-30-38 pod downwardapi-volume-760ba5db-91b5-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 10:40:18.754: INFO: Waiting for pod downwardapi-volume-760ba5db-91b5-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:40:18.761: INFO: Pod downwardapi-volume-760ba5db-91b5-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:40:18.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7076" for this suite. 
+Jun 18 10:40:24.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:40:24.891: INFO: namespace downward-api-7076 deletion completed in 6.125499418s + +• [SLOW TEST:8.216 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:40:24.891: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] deployment should delete old replica sets [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 10:40:24.934: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Jun 18 10:40:29.938: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Jun 18 10:40:29.938: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Jun 18 10:40:29.956: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2980,SelfLink:/apis/apps/v1/namespaces/deployment-2980/deployments/test-cleanup-deployment,UID:7def2824-91b5-11e9-8d87-0a902858a792,ResourceVersion:29280,Generation:1,CreationTimestamp:2019-06-18 10:40:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} + +Jun 18 10:40:29.960: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. +Jun 18 10:40:29.960: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": +Jun 18 10:40:29.960: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-2980,SelfLink:/apis/apps/v1/namespaces/deployment-2980/replicasets/test-cleanup-controller,UID:7af17815-91b5-11e9-8d87-0a902858a792,ResourceVersion:29281,Generation:1,CreationTimestamp:2019-06-18 10:40:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 7def2824-91b5-11e9-8d87-0a902858a792 0xc002ec2807 0xc002ec2808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} +Jun 18 10:40:29.964: INFO: Pod "test-cleanup-controller-tngxk" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-tngxk,GenerateName:test-cleanup-controller-,Namespace:deployment-2980,SelfLink:/api/v1/namespaces/deployment-2980/pods/test-cleanup-controller-tngxk,UID:7af3697e-91b5-11e9-8999-0a07e7e61ed8,ResourceVersion:29271,Generation:0,CreationTimestamp:2019-06-18 10:40:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.0.137/32,},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 7af17815-91b5-11e9-8d87-0a902858a792 0xc002ec2d67 0xc002ec2d68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pmsgd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pmsgd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pmsgd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-16-178,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ec2de0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ec2e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:40:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:40:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:40:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 10:40:24 +0000 UTC }],Message:,Reason:,HostIP:172.26.16.178,PodIP:10.42.0.137,StartTime:2019-06-18 10:40:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-18 10:40:26 +0000 UTC,} nil} 
{nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a49232a0384f497641aabd5c0f830a67f9f23fd3f974f321ae153258b377eb56}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:40:29.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-2980" for this suite. +Jun 18 10:40:35.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:40:36.101: INFO: namespace deployment-2980 deletion completed in 6.133495525s + +• [SLOW TEST:11.210 seconds] +[sig-apps] Deployment +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + deployment should delete old replica sets [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Proxy server + should support proxy with --port 0 [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:40:36.101: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should support proxy with --port 0 [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: starting the proxy server +Jun 18 10:40:36.140: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-675335780 proxy -p 0 --disable-filter' +STEP: curling proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:40:36.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1133" for this suite. 
+Jun 18 10:40:42.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:40:42.331: INFO: namespace kubectl-1133 deletion completed in 6.130824777s + +• [SLOW TEST:6.230 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Proxy server + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should support proxy with --port 0 [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:40:42.331: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0777 on node default medium +Jun 18 10:40:42.376: INFO: Waiting up to 5m0s for pod "pod-85568634-91b5-11e9-8aef-6ab77b36fff7" in namespace "emptydir-8939" to be "success or failure" +Jun 18 10:40:42.386: INFO: Pod "pod-85568634-91b5-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.103418ms +Jun 18 10:40:44.390: INFO: Pod "pod-85568634-91b5-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014209584s +STEP: Saw pod success +Jun 18 10:40:44.390: INFO: Pod "pod-85568634-91b5-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:40:44.393: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-85568634-91b5-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 10:40:44.416: INFO: Waiting for pod pod-85568634-91b5-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:40:44.418: INFO: Pod pod-85568634-91b5-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:40:44.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8939" for this suite. 
+Jun 18 10:40:50.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:40:50.547: INFO: namespace emptydir-8939 deletion completed in 6.125450667s + +• [SLOW TEST:8.216 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:40:50.548: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should contain environment variables for services [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 10:40:52.620: INFO: Waiting up to 5m0s for pod "client-envvars-8b728a94-91b5-11e9-8aef-6ab77b36fff7" in namespace "pods-4575" to be "success or failure" +Jun 18 10:40:52.626: INFO: Pod "client-envvars-8b728a94-91b5-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.278383ms +Jun 18 10:40:54.631: INFO: Pod "client-envvars-8b728a94-91b5-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010373605s +STEP: Saw pod success +Jun 18 10:40:54.631: INFO: Pod "client-envvars-8b728a94-91b5-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:40:54.633: INFO: Trying to get logs from node ip-172-26-30-38 pod client-envvars-8b728a94-91b5-11e9-8aef-6ab77b36fff7 container env3cont: +STEP: delete the pod +Jun 18 10:40:54.652: INFO: Waiting for pod client-envvars-8b728a94-91b5-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:40:54.658: INFO: Pod client-envvars-8b728a94-91b5-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:40:54.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4575" for this suite. 
+Jun 18 10:41:40.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:41:40.786: INFO: namespace pods-4575 deletion completed in 46.124082803s + +• [SLOW TEST:50.239 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:41:40.786: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating projection with configMap that has name projected-configmap-test-upd-a83030c2-91b5-11e9-8aef-6ab77b36fff7 +STEP: Creating the pod +STEP: Updating configmap projected-configmap-test-upd-a83030c2-91b5-11e9-8aef-6ab77b36fff7 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:41:44.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5758" for this suite. 
+Jun 18 10:42:06.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:42:07.015: INFO: namespace projected-5758 deletion completed in 22.123368655s + +• [SLOW TEST:26.229 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:42:07.016: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 10:42:07.067: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7d124e1-91b5-11e9-8aef-6ab77b36fff7" in namespace "projected-4428" to be "success or failure" +Jun 18 10:42:07.071: INFO: Pod "downwardapi-volume-b7d124e1-91b5-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.644826ms +Jun 18 10:42:09.075: INFO: Pod "downwardapi-volume-b7d124e1-91b5-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00872391s +STEP: Saw pod success +Jun 18 10:42:09.075: INFO: Pod "downwardapi-volume-b7d124e1-91b5-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:42:09.078: INFO: Trying to get logs from node ip-172-26-16-178 pod downwardapi-volume-b7d124e1-91b5-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 10:42:09.100: INFO: Waiting for pod downwardapi-volume-b7d124e1-91b5-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:42:09.106: INFO: Pod downwardapi-volume-b7d124e1-91b5-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:42:09.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4428" for this suite. 
+Jun 18 10:42:15.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:42:15.233: INFO: namespace projected-4428 deletion completed in 6.12319224s + +• [SLOW TEST:8.218 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:42:15.234: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 +STEP: create the container to handle the HTTPGet hook request. 
+[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the pod with lifecycle hook +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Jun 18 10:42:19.324: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 18 10:42:19.327: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 18 10:42:21.327: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 18 10:42:21.331: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 18 10:42:23.327: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 18 10:42:23.331: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 18 10:42:25.327: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 18 10:42:25.331: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 18 10:42:27.327: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 18 10:42:27.336: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 18 10:42:29.327: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 18 10:42:29.331: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 18 10:42:31.327: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 18 10:42:31.332: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 18 10:42:33.327: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 18 10:42:33.331: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 18 10:42:35.327: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 18 10:42:35.331: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 18 10:42:37.327: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 18 10:42:37.331: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 18 10:42:39.327: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 18 10:42:39.331: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:42:39.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-2369" for this suite. 
+Jun 18 10:43:01.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:43:01.513: INFO: namespace container-lifecycle-hook-2369 deletion completed in 22.174707574s + +• [SLOW TEST:46.279 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when create a pod with lifecycle hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:43:01.513: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should run and stop simple daemon [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Jun 18 10:43:01.583: INFO: Number of nodes with available pods: 0 +Jun 18 10:43:01.583: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 10:43:02.591: INFO: Number of nodes with available pods: 0 +Jun 18 10:43:02.591: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 10:43:03.591: INFO: Number of nodes with available pods: 3 +Jun 18 10:43:03.591: INFO: Number of running nodes: 3, number of available pods: 3 +STEP: Stop a daemon pod, check that the daemon pod is revived. 
+Jun 18 10:43:03.611: INFO: Number of nodes with available pods: 2 +Jun 18 10:43:03.611: INFO: Node ip-172-26-30-38 is running more than one daemon pod +Jun 18 10:43:04.619: INFO: Number of nodes with available pods: 2 +Jun 18 10:43:04.619: INFO: Node ip-172-26-30-38 is running more than one daemon pod +Jun 18 10:43:05.619: INFO: Number of nodes with available pods: 2 +Jun 18 10:43:05.619: INFO: Node ip-172-26-30-38 is running more than one daemon pod +Jun 18 10:43:06.619: INFO: Number of nodes with available pods: 2 +Jun 18 10:43:06.619: INFO: Node ip-172-26-30-38 is running more than one daemon pod +Jun 18 10:43:07.619: INFO: Number of nodes with available pods: 2 +Jun 18 10:43:07.619: INFO: Node ip-172-26-30-38 is running more than one daemon pod +Jun 18 10:43:08.620: INFO: Number of nodes with available pods: 2 +Jun 18 10:43:08.620: INFO: Node ip-172-26-30-38 is running more than one daemon pod +Jun 18 10:43:09.619: INFO: Number of nodes with available pods: 3 +Jun 18 10:43:09.619: INFO: Number of running nodes: 3, number of available pods: 3 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1648, will wait for the garbage collector to delete the pods +Jun 18 10:43:09.684: INFO: Deleting DaemonSet.extensions daemon-set took: 9.061673ms +Jun 18 10:43:10.185: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.253427ms +Jun 18 10:43:17.688: INFO: Number of nodes with available pods: 0 +Jun 18 10:43:17.688: INFO: Number of running nodes: 0, number of available pods: 0 +Jun 18 10:43:17.692: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1648/daemonsets","resourceVersion":"29913"},"items":null} + +Jun 18 10:43:17.695: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1648/pods","resourceVersion":"29913"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:43:17.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-1648" for this suite. 
+Jun 18 10:43:23.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:43:23.832: INFO: namespace daemonsets-1648 deletion completed in 6.120423807s + +• [SLOW TEST:22.319 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should run and stop simple daemon [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:43:23.832: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 +STEP: create the container to handle the HTTPGet hook request. +[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the pod with lifecycle hook +STEP: delete the pod with lifecycle hook +Jun 18 10:43:27.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 18 10:43:27.911: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 18 10:43:29.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 18 10:43:29.915: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 18 10:43:31.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 18 10:43:31.914: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 18 10:43:33.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 18 10:43:33.915: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 18 10:43:35.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 18 10:43:35.915: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 18 10:43:37.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 18 10:43:37.915: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 18 10:43:39.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 18 10:43:39.915: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 18 10:43:41.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 18 10:43:41.915: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 18 10:43:43.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 18 10:43:43.915: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 18 10:43:45.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 18 10:43:45.915: 
INFO: Pod pod-with-prestop-exec-hook still exists +Jun 18 10:43:47.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 18 10:43:47.915: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 18 10:43:49.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 18 10:43:49.915: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 18 10:43:51.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 18 10:43:51.915: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 18 10:43:53.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 18 10:43:53.915: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 18 10:43:55.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 18 10:43:55.915: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 18 10:43:57.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 18 10:43:57.915: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:43:57.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-6657" for this suite. +Jun 18 10:44:19.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:44:20.049: INFO: namespace container-lifecycle-hook-6657 deletion completed in 22.122659598s + +• [SLOW TEST:56.217 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when create a pod with lifecycle hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[k8s.io] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:44:20.049: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Jun 18 10:44:24.620: 
INFO: Successfully updated pod "pod-update-activedeadlineseconds-071c2690-91b6-11e9-8aef-6ab77b36fff7" +Jun 18 10:44:24.620: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-071c2690-91b6-11e9-8aef-6ab77b36fff7" in namespace "pods-5030" to be "terminated due to deadline exceeded" +Jun 18 10:44:24.623: INFO: Pod "pod-update-activedeadlineseconds-071c2690-91b6-11e9-8aef-6ab77b36fff7": Phase="Running", Reason="", readiness=true. Elapsed: 2.991685ms +Jun 18 10:44:26.627: INFO: Pod "pod-update-activedeadlineseconds-071c2690-91b6-11e9-8aef-6ab77b36fff7": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.00704729s +Jun 18 10:44:26.627: INFO: Pod "pod-update-activedeadlineseconds-071c2690-91b6-11e9-8aef-6ab77b36fff7" satisfied condition "terminated due to deadline exceeded" +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:44:26.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5030" for this suite. +Jun 18 10:44:32.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:44:32.754: INFO: namespace pods-5030 deletion completed in 6.122848678s + +• [SLOW TEST:12.705 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:44:32.754: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace statefulset-7245 +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a new StatefulSet +Jun 18 10:44:32.806: INFO: Found 0 stateful pods, waiting for 3 +Jun 18 10:44:42.811: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, 
currently Running - Ready=true +Jun 18 10:44:42.811: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Jun 18 10:44:42.811: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine +Jun 18 10:44:42.838: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Not applying an update when the partition is greater than the number of replicas +STEP: Performing a canary update +Jun 18 10:44:52.878: INFO: Updating stateful set ss2 +Jun 18 10:44:52.891: INFO: Waiting for Pod statefulset-7245/ss2-2 to have revision ss2-c79899b9 update revision ss2-787997d666 +STEP: Restoring Pods to the correct revision when they are deleted +Jun 18 10:45:02.944: INFO: Found 2 stateful pods, waiting for 3 +Jun 18 10:45:12.948: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Jun 18 10:45:12.948: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Jun 18 10:45:12.948: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update +Jun 18 10:45:12.973: INFO: Updating stateful set ss2 +Jun 18 10:45:12.982: INFO: Waiting for Pod statefulset-7245/ss2-1 to have revision ss2-c79899b9 update revision ss2-787997d666 +Jun 18 10:45:23.010: INFO: Updating stateful set ss2 +Jun 18 10:45:23.019: INFO: Waiting for StatefulSet statefulset-7245/ss2 to complete update +Jun 18 10:45:23.019: INFO: Waiting for Pod statefulset-7245/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666 +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Jun 18 10:45:33.027: INFO: Deleting all statefulset in ns statefulset-7245 +Jun 18 10:45:33.030: INFO: Scaling statefulset ss2 to 0 +Jun 18 10:46:03.045: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 18 10:46:03.048: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:46:03.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7245" for this suite. 
+Jun 18 10:46:09.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:46:09.200: INFO: namespace statefulset-7245 deletion completed in 6.135758922s + +• [SLOW TEST:96.446 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:46:09.201: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 10:46:09.249: INFO: Waiting up to 5m0s for pod "downwardapi-volume-482b6e2c-91b6-11e9-8aef-6ab77b36fff7" in namespace "downward-api-7941" to be "success or failure" +Jun 18 10:46:09.253: INFO: Pod "downwardapi-volume-482b6e2c-91b6-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208023ms +Jun 18 10:46:11.256: INFO: Pod "downwardapi-volume-482b6e2c-91b6-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007893666s +STEP: Saw pod success +Jun 18 10:46:11.256: INFO: Pod "downwardapi-volume-482b6e2c-91b6-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:46:11.260: INFO: Trying to get logs from node ip-172-26-17-1 pod downwardapi-volume-482b6e2c-91b6-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 10:46:11.280: INFO: Waiting for pod downwardapi-volume-482b6e2c-91b6-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:46:11.283: INFO: Pod downwardapi-volume-482b6e2c-91b6-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:46:11.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7941" for this suite. 
+Jun 18 10:46:17.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:46:17.413: INFO: namespace downward-api-7941 deletion completed in 6.126975711s + +• [SLOW TEST:8.213 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:46:17.413: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name projected-configmap-test-volume-4d10548f-91b6-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume configMaps +Jun 18 10:46:17.465: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4d116151-91b6-11e9-8aef-6ab77b36fff7" in namespace "projected-4869" to be "success or failure" +Jun 18 10:46:17.469: INFO: Pod "pod-projected-configmaps-4d116151-91b6-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157153ms +Jun 18 10:46:19.473: INFO: Pod "pod-projected-configmaps-4d116151-91b6-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008346575s +Jun 18 10:46:21.477: INFO: Pod "pod-projected-configmaps-4d116151-91b6-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012413082s +STEP: Saw pod success +Jun 18 10:46:21.477: INFO: Pod "pod-projected-configmaps-4d116151-91b6-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:46:21.480: INFO: Trying to get logs from node ip-172-26-16-178 pod pod-projected-configmaps-4d116151-91b6-11e9-8aef-6ab77b36fff7 container projected-configmap-volume-test: +STEP: delete the pod +Jun 18 10:46:21.503: INFO: Waiting for pod pod-projected-configmaps-4d116151-91b6-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:46:21.507: INFO: Pod pod-projected-configmaps-4d116151-91b6-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:46:21.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4869" for this suite. 
+Jun 18 10:46:27.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:46:27.635: INFO: namespace projected-4869 deletion completed in 6.123333068s + +• [SLOW TEST:10.222 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:46:27.635: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Jun 18 10:46:30.204: INFO: Successfully updated pod "pod-update-53279156-91b6-11e9-8aef-6ab77b36fff7" +STEP: verifying the updated pod is in kubernetes +Jun 18 10:46:30.210: INFO: Pod update OK +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:46:30.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-3547" for this suite. 
+Jun 18 10:46:52.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:46:52.336: INFO: namespace pods-3547 deletion completed in 22.122086306s + +• [SLOW TEST:24.701 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:46:52.336: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 +STEP: create the container to handle the HTTPGet hook request. +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the pod with lifecycle hook +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Jun 18 10:46:58.429: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jun 18 10:46:58.433: INFO: Pod pod-with-poststart-http-hook still exists +Jun 18 10:47:00.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jun 18 10:47:00.438: INFO: Pod pod-with-poststart-http-hook still exists +Jun 18 10:47:02.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jun 18 10:47:02.437: INFO: Pod pod-with-poststart-http-hook still exists +Jun 18 10:47:04.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jun 18 10:47:04.437: INFO: Pod pod-with-poststart-http-hook still exists +Jun 18 10:47:06.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jun 18 10:47:06.438: INFO: Pod pod-with-poststart-http-hook still exists +Jun 18 10:47:08.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jun 18 10:47:08.437: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:47:08.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-61" for this suite. 
+Jun 18 10:47:30.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:47:30.564: INFO: namespace container-lifecycle-hook-61 deletion completed in 22.123637577s + +• [SLOW TEST:38.228 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when create a pod with lifecycle hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[k8s.io] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:47:30.565: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod liveness-exec in namespace container-probe-5822 +Jun 18 10:47:32.617: INFO: Started pod liveness-exec in namespace container-probe-5822 +STEP: checking the pod's current state and verifying that restartCount is present +Jun 18 10:47:32.620: INFO: Initial restart count of pod liveness-exec is 0 +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:51:33.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-5822" for this suite. 
+Jun 18 10:51:39.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:51:39.287: INFO: namespace container-probe-5822 deletion completed in 6.139827247s + +• [SLOW TEST:248.723 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:51:39.287: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:52:39.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-7350" for this suite. 
+Jun 18 10:53:01.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:53:01.469: INFO: namespace container-probe-7350 deletion completed in 22.127240294s + +• [SLOW TEST:82.181 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:53:01.469: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:53:03.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-1445" for this suite. 
+Jun 18 10:53:49.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:53:49.671: INFO: namespace kubelet-test-1445 deletion completed in 46.121770613s + +• [SLOW TEST:48.202 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when scheduling a busybox Pod with hostAliases + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + Should recreate evicted statefulset [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:53:49.671: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace statefulset-759 +[It] Should recreate evicted statefulset [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Looking for a node to schedule stateful set and pod +STEP: Creating pod with conflicting port in namespace statefulset-759 +STEP: Creating statefulset with conflicting port in namespace statefulset-759 +STEP: Waiting until pod test-pod will start running in namespace statefulset-759 +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-759 +Jun 18 10:53:53.744: INFO: Observed stateful pod in namespace: statefulset-759, name: ss-0, uid: 5ab277a3-91b7-11e9-8999-0a07e7e61ed8, status phase: Pending. Waiting for statefulset controller to delete. +Jun 18 10:53:57.198: INFO: Observed stateful pod in namespace: statefulset-759, name: ss-0, uid: 5ab277a3-91b7-11e9-8999-0a07e7e61ed8, status phase: Failed. Waiting for statefulset controller to delete. +Jun 18 10:53:57.204: INFO: Observed stateful pod in namespace: statefulset-759, name: ss-0, uid: 5ab277a3-91b7-11e9-8999-0a07e7e61ed8, status phase: Failed. Waiting for statefulset controller to delete. 
+Jun 18 10:53:57.210: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-759 +STEP: Removing pod with conflicting port in namespace statefulset-759 +STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-759 and will be in running state +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Jun 18 10:53:59.238: INFO: Deleting all statefulset in ns statefulset-759 +Jun 18 10:53:59.241: INFO: Scaling statefulset ss to 0 +Jun 18 10:54:09.255: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 18 10:54:09.259: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:54:09.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-759" for this suite. +Jun 18 10:54:15.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:54:15.404: INFO: namespace statefulset-759 deletion completed in 6.128106941s + +• [SLOW TEST:25.733 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + Should recreate evicted statefulset [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:54:15.404: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test substitution in container's command +Jun 18 10:54:15.448: INFO: Waiting up to 5m0s for pod "var-expansion-69f778b5-91b7-11e9-8aef-6ab77b36fff7" in namespace "var-expansion-8372" to be "success or failure" +Jun 18 10:54:15.451: INFO: Pod "var-expansion-69f778b5-91b7-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.364289ms +Jun 18 10:54:17.455: INFO: Pod "var-expansion-69f778b5-91b7-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00746264s +STEP: Saw pod success +Jun 18 10:54:17.455: INFO: Pod "var-expansion-69f778b5-91b7-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:54:17.458: INFO: Trying to get logs from node ip-172-26-30-38 pod var-expansion-69f778b5-91b7-11e9-8aef-6ab77b36fff7 container dapi-container: +STEP: delete the pod +Jun 18 10:54:17.478: INFO: Waiting for pod var-expansion-69f778b5-91b7-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:54:17.485: INFO: Pod var-expansion-69f778b5-91b7-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:54:17.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-8372" for this suite. +Jun 18 10:54:23.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:54:23.608: INFO: namespace var-expansion-8372 deletion completed in 6.118732898s + +• [SLOW TEST:8.204 seconds] +[k8s.io] Variable Expansion +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] DNS + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:54:23.608: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for services [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4655.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4655.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4655.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4655.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4655.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4655.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4655.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4655.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4655.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.dns-4655.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4655.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4655.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4655.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 163.108.43.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.43.108.163_udp@PTR;check="$$(dig +tcp +noall +answer +search 163.108.43.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.43.108.163_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4655.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4655.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4655.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4655.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4655.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4655.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4655.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4655.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4655.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4655.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4655.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4655.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4655.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 163.108.43.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.43.108.163_udp@PTR;check="$$(dig +tcp +noall +answer +search 163.108.43.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.43.108.163_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Jun 18 10:54:33.689: INFO: Unable to read wheezy_udp@dns-test-service.dns-4655.svc.cluster.local from pod dns-4655/dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7: the server could not find the requested resource (get pods dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7) +Jun 18 10:54:33.693: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4655.svc.cluster.local from pod dns-4655/dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7: the server could not find the requested resource (get pods dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7) +Jun 18 10:54:33.696: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4655.svc.cluster.local from pod dns-4655/dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7: the server could not find the requested resource (get pods dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7) +Jun 18 10:54:33.699: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4655.svc.cluster.local from pod dns-4655/dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7: the server could not find the requested resource (get pods dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7) +Jun 18 10:54:33.723: INFO: Unable to read jessie_udp@dns-test-service.dns-4655.svc.cluster.local from pod dns-4655/dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7: the server could not find the requested resource (get pods dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7) +Jun 18 10:54:33.727: INFO: Unable to read jessie_tcp@dns-test-service.dns-4655.svc.cluster.local from pod dns-4655/dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7: the server could not find the requested resource (get pods dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7) +Jun 18 10:54:33.730: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4655.svc.cluster.local from pod dns-4655/dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7: the server could not find the requested resource (get pods dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7) +Jun 18 10:54:33.733: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4655.svc.cluster.local from pod dns-4655/dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7: the server could not find the requested resource (get pods dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7) +Jun 18 10:54:33.754: INFO: Lookups using dns-4655/dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7 failed for: [wheezy_udp@dns-test-service.dns-4655.svc.cluster.local wheezy_tcp@dns-test-service.dns-4655.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4655.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4655.svc.cluster.local jessie_udp@dns-test-service.dns-4655.svc.cluster.local jessie_tcp@dns-test-service.dns-4655.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4655.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4655.svc.cluster.local] + +Jun 18 10:54:38.824: INFO: DNS probes using dns-4655/dns-test-6ede7303-91b7-11e9-8aef-6ab77b36fff7 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:54:38.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace 
"dns-4655" for this suite. +Jun 18 10:54:44.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:54:45.011: INFO: namespace dns-4655 deletion completed in 6.127131037s + +• [SLOW TEST:21.403 seconds] +[sig-network] DNS +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should provide DNS for services [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:54:45.011: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 10:54:45.057: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b9d94ff-91b7-11e9-8aef-6ab77b36fff7" in namespace "downward-api-6616" to be "success or failure" +Jun 18 10:54:45.062: INFO: Pod "downwardapi-volume-7b9d94ff-91b7-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.534733ms +Jun 18 10:54:47.066: INFO: Pod "downwardapi-volume-7b9d94ff-91b7-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008583109s +STEP: Saw pod success +Jun 18 10:54:47.066: INFO: Pod "downwardapi-volume-7b9d94ff-91b7-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:54:47.069: INFO: Trying to get logs from node ip-172-26-16-178 pod downwardapi-volume-7b9d94ff-91b7-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 10:54:47.091: INFO: Waiting for pod downwardapi-volume-7b9d94ff-91b7-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:54:47.095: INFO: Pod downwardapi-volume-7b9d94ff-91b7-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:54:47.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6616" for this suite. 
+Jun 18 10:54:53.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:54:53.225: INFO: namespace downward-api-6616 deletion completed in 6.125834632s + +• [SLOW TEST:8.214 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:54:53.225: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not conflict [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Cleaning up the secret +STEP: Cleaning up the configmap +STEP: Cleaning up the pod +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:54:55.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-1166" for this suite. 
+Jun 18 10:55:01.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:55:01.453: INFO: namespace emptydir-wrapper-1166 deletion completed in 6.125654202s + +• [SLOW TEST:8.228 seconds] +[sig-storage] EmptyDir wrapper volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + should not conflict [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:55:01.453: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 10:55:01.488: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: creating the pod +STEP: submitting the pod to kubernetes +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:55:03.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9317" for this suite. 
+Jun 18 10:55:47.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:55:47.660: INFO: namespace pods-9317 deletion completed in 44.12401072s + +• [SLOW TEST:46.207 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:55:47.660: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 10:55:47.711: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0f5e7ae-91b7-11e9-8aef-6ab77b36fff7" in namespace "projected-9613" to be "success or failure" +Jun 18 10:55:47.717: INFO: Pod "downwardapi-volume-a0f5e7ae-91b7-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.975856ms +Jun 18 10:55:49.721: INFO: Pod "downwardapi-volume-a0f5e7ae-91b7-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010035984s +STEP: Saw pod success +Jun 18 10:55:49.721: INFO: Pod "downwardapi-volume-a0f5e7ae-91b7-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:55:49.724: INFO: Trying to get logs from node ip-172-26-16-178 pod downwardapi-volume-a0f5e7ae-91b7-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 10:55:49.744: INFO: Waiting for pod downwardapi-volume-a0f5e7ae-91b7-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:55:49.747: INFO: Pod downwardapi-volume-a0f5e7ae-91b7-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:55:49.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9613" for this suite. 
+Jun 18 10:55:55.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:55:55.882: INFO: namespace projected-9613 deletion completed in 6.129777105s + +• [SLOW TEST:8.221 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:55:55.882: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name secret-test-a5dbb064-91b7-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume secrets +Jun 18 10:55:55.932: INFO: Waiting up to 5m0s for pod "pod-secrets-a5dcaa75-91b7-11e9-8aef-6ab77b36fff7" in namespace "secrets-7768" to be "success or failure" +Jun 18 10:55:55.938: INFO: Pod "pod-secrets-a5dcaa75-91b7-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201295ms +Jun 18 10:55:57.943: INFO: Pod "pod-secrets-a5dcaa75-91b7-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010900544s +STEP: Saw pod success +Jun 18 10:55:57.943: INFO: Pod "pod-secrets-a5dcaa75-91b7-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:55:57.946: INFO: Trying to get logs from node ip-172-26-30-38 pod pod-secrets-a5dcaa75-91b7-11e9-8aef-6ab77b36fff7 container secret-volume-test: +STEP: delete the pod +Jun 18 10:55:57.967: INFO: Waiting for pod pod-secrets-a5dcaa75-91b7-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:55:57.970: INFO: Pod pod-secrets-a5dcaa75-91b7-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:55:57.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-7768" for this suite. 
+Jun 18 10:56:03.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:56:04.098: INFO: namespace secrets-7768 deletion completed in 6.122969627s + +• [SLOW TEST:8.217 seconds] +[sig-storage] Secrets +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] ReplicationController + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:56:04.099: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[It] should release no longer matching pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Given a ReplicationController is created +STEP: When the matched label of one of its pods change +Jun 18 10:56:04.142: INFO: Pod name pod-release: Found 0 pods out of 1 +Jun 18 10:56:09.147: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicationController + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:56:10.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-75" for this suite. 
+Jun 18 10:56:16.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:56:16.293: INFO: namespace replication-controller-75 deletion completed in 6.126466764s + +• [SLOW TEST:12.194 seconds] +[sig-apps] ReplicationController +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should release no longer matching pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Update Demo + should create and stop a replication controller [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:56:16.293: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Update Demo + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:265 +[It] should create and stop a replication controller [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating a replication controller +Jun 18 10:56:16.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 create -f - --namespace=kubectl-1015' +Jun 18 10:56:16.653: INFO: stderr: "" +Jun 18 10:56:16.653: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Jun 18 10:56:16.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1015' +Jun 18 10:56:16.728: INFO: stderr: "" +Jun 18 10:56:16.728: INFO: stdout: "update-demo-nautilus-9wgxs update-demo-nautilus-j56mb " +Jun 18 10:56:16.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-9wgxs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1015' +Jun 18 10:56:16.791: INFO: stderr: "" +Jun 18 10:56:16.791: INFO: stdout: "" +Jun 18 10:56:16.791: INFO: update-demo-nautilus-9wgxs is created but not running +Jun 18 10:56:21.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1015' +Jun 18 10:56:21.858: INFO: stderr: "" +Jun 18 10:56:21.858: INFO: stdout: "update-demo-nautilus-9wgxs update-demo-nautilus-j56mb " +Jun 18 10:56:21.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-9wgxs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1015' +Jun 18 10:56:21.921: INFO: stderr: "" +Jun 18 10:56:21.921: INFO: stdout: "true" +Jun 18 10:56:21.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-9wgxs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1015' +Jun 18 10:56:21.984: INFO: stderr: "" +Jun 18 10:56:21.984: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 18 10:56:21.984: INFO: validating pod update-demo-nautilus-9wgxs +Jun 18 10:56:22.010: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 18 10:56:22.010: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 18 10:56:22.010: INFO: update-demo-nautilus-9wgxs is verified up and running +Jun 18 10:56:22.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-j56mb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1015' +Jun 18 10:56:22.075: INFO: stderr: "" +Jun 18 10:56:22.075: INFO: stdout: "true" +Jun 18 10:56:22.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-j56mb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1015' +Jun 18 10:56:22.139: INFO: stderr: "" +Jun 18 10:56:22.139: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 18 10:56:22.139: INFO: validating pod update-demo-nautilus-j56mb +Jun 18 10:56:22.143: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 18 10:56:22.144: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 18 10:56:22.144: INFO: update-demo-nautilus-j56mb is verified up and running +STEP: using delete to clean up resources +Jun 18 10:56:22.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete --grace-period=0 --force -f - --namespace=kubectl-1015' +Jun 18 10:56:22.213: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Jun 18 10:56:22.213: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Jun 18 10:56:22.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1015' +Jun 18 10:56:22.298: INFO: stderr: "No resources found.\n" +Jun 18 10:56:22.298: INFO: stdout: "" +Jun 18 10:56:22.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -l name=update-demo --namespace=kubectl-1015 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jun 18 10:56:22.370: INFO: stderr: "" +Jun 18 10:56:22.370: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:56:22.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1015" for this suite. +Jun 18 10:56:28.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:56:28.513: INFO: namespace kubectl-1015 deletion completed in 6.137985893s + +• [SLOW TEST:12.220 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Update Demo + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create and stop a replication controller [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job + should create a job from an image, then delete the job [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:56:28.513: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should create a job from an image, then delete the job [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: executing a command with run --rm and attach with stdin +Jun 18 10:56:28.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 --namespace=kubectl-6499 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' +Jun 18 10:56:30.918: INFO: stderr: 
"kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n" +Jun 18 10:56:30.918: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" +STEP: verifying the job e2e-test-rm-busybox-job was deleted +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:56:32.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6499" for this suite. +Jun 18 10:56:38.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:56:39.050: INFO: namespace kubectl-6499 deletion completed in 6.121779978s + +• [SLOW TEST:10.538 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl run --rm job + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create a job from an image, then delete the job [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:56:39.051: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name projected-configmap-test-volume-bf969aac-91b7-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume configMaps +Jun 18 10:56:39.101: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bf9788f0-91b7-11e9-8aef-6ab77b36fff7" in namespace "projected-9066" to be "success or failure" +Jun 18 10:56:39.104: INFO: Pod "pod-projected-configmaps-bf9788f0-91b7-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.076767ms +Jun 18 10:56:41.108: INFO: Pod "pod-projected-configmaps-bf9788f0-91b7-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007029019s +STEP: Saw pod success +Jun 18 10:56:41.108: INFO: Pod "pod-projected-configmaps-bf9788f0-91b7-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:56:41.111: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-projected-configmaps-bf9788f0-91b7-11e9-8aef-6ab77b36fff7 container projected-configmap-volume-test: +STEP: delete the pod +Jun 18 10:56:41.139: INFO: Waiting for pod pod-projected-configmaps-bf9788f0-91b7-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:56:41.142: INFO: Pod pod-projected-configmaps-bf9788f0-91b7-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:56:41.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9066" for this suite. +Jun 18 10:56:47.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:56:47.269: INFO: namespace projected-9066 deletion completed in 6.124030383s + +• [SLOW TEST:8.218 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:56:47.269: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-c47c9465-91b7-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume configMaps +Jun 18 10:56:47.331: INFO: Waiting up to 5m0s for pod "pod-configmaps-c47f51ff-91b7-11e9-8aef-6ab77b36fff7" in namespace "configmap-6355" to be "success or failure" +Jun 18 10:56:47.340: INFO: Pod "pod-configmaps-c47f51ff-91b7-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.478635ms +Jun 18 10:56:49.344: INFO: Pod "pod-configmaps-c47f51ff-91b7-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.012575563s +STEP: Saw pod success +Jun 18 10:56:49.344: INFO: Pod "pod-configmaps-c47f51ff-91b7-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:56:49.347: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-configmaps-c47f51ff-91b7-11e9-8aef-6ab77b36fff7 container configmap-volume-test: +STEP: delete the pod +Jun 18 10:56:49.367: INFO: Waiting for pod pod-configmaps-c47f51ff-91b7-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:56:49.370: INFO: Pod pod-configmaps-c47f51ff-91b7-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:56:49.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6355" for this suite. +Jun 18 10:56:55.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:56:55.498: INFO: namespace configmap-6355 deletion completed in 6.124352832s + +• [SLOW TEST:8.229 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should get a host IP [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:56:55.498: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should get a host IP [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating pod +Jun 18 10:56:57.556: INFO: Pod pod-hostip-c9643a88-91b7-11e9-8aef-6ab77b36fff7 has hostIP: 172.26.16.178 +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:56:57.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-994" for this suite. 
+Jun 18 10:57:19.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:57:19.702: INFO: namespace pods-994 deletion completed in 22.142282192s + +• [SLOW TEST:24.203 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should get a host IP [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl logs + should be able to retrieve and filter logs [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:57:19.702: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl logs + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1190 +STEP: creating an rc +Jun 18 10:57:19.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 create -f - --namespace=kubectl-4774' +Jun 18 10:57:19.940: INFO: stderr: "" +Jun 18 10:57:19.940: INFO: stdout: "replicationcontroller/redis-master created\n" +[It] should be able to retrieve and filter logs [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Waiting for Redis master to start. +Jun 18 10:57:20.945: INFO: Selector matched 1 pods for map[app:redis] +Jun 18 10:57:20.945: INFO: Found 0 / 1 +Jun 18 10:57:21.944: INFO: Selector matched 1 pods for map[app:redis] +Jun 18 10:57:21.944: INFO: Found 1 / 1 +Jun 18 10:57:21.944: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Jun 18 10:57:21.947: INFO: Selector matched 1 pods for map[app:redis] +Jun 18 10:57:21.947: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +STEP: checking for a matching strings +Jun 18 10:57:21.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 logs redis-master-882j5 redis-master --namespace=kubectl-4774' +Jun 18 10:57:22.025: INFO: stderr: "" +Jun 18 10:57:22.025: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 18 Jun 10:57:21.040 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Jun 10:57:21.040 # Server started, Redis version 3.2.12\n1:M 18 Jun 10:57:21.040 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Jun 10:57:21.040 * The server is now ready to accept connections on port 6379\n" +STEP: limiting log lines +Jun 18 10:57:22.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 log redis-master-882j5 redis-master --namespace=kubectl-4774 --tail=1' +Jun 18 10:57:22.103: INFO: stderr: "" +Jun 18 10:57:22.104: INFO: stdout: "1:M 18 Jun 10:57:21.040 * The server is now ready to accept connections on port 6379\n" +STEP: limiting log bytes +Jun 18 10:57:22.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 log redis-master-882j5 redis-master --namespace=kubectl-4774 --limit-bytes=1' +Jun 18 10:57:22.180: INFO: stderr: "" +Jun 18 10:57:22.180: INFO: stdout: " " +STEP: exposing timestamps +Jun 18 10:57:22.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 log redis-master-882j5 redis-master --namespace=kubectl-4774 --tail=1 --timestamps' +Jun 18 10:57:22.264: INFO: stderr: "" +Jun 18 10:57:22.264: INFO: stdout: "2019-06-18T10:57:21.040429918Z 1:M 18 Jun 10:57:21.040 * The server is now ready to accept connections on port 6379\n" +STEP: restricting to a time range +Jun 18 10:57:24.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 log redis-master-882j5 redis-master --namespace=kubectl-4774 --since=1s' +Jun 18 10:57:24.851: INFO: stderr: "" +Jun 18 10:57:24.851: INFO: stdout: "" +Jun 18 10:57:24.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 log redis-master-882j5 redis-master --namespace=kubectl-4774 --since=24h' +Jun 18 10:57:24.927: INFO: stderr: "" +Jun 18 10:57:24.927: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 18 Jun 10:57:21.040 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Jun 10:57:21.040 # Server started, Redis version 3.2.12\n1:M 18 Jun 10:57:21.040 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Jun 10:57:21.040 * The server is now ready to accept connections on port 6379\n" +[AfterEach] [k8s.io] Kubectl logs + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1196 +STEP: using delete to clean up resources +Jun 18 10:57:24.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete --grace-period=0 --force -f - --namespace=kubectl-4774' +Jun 18 10:57:24.995: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 18 10:57:24.995: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" +Jun 18 10:57:24.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get rc,svc -l name=nginx --no-headers --namespace=kubectl-4774' +Jun 18 10:57:25.064: INFO: stderr: "No resources found.\n" +Jun 18 10:57:25.065: INFO: stdout: "" +Jun 18 10:57:25.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -l name=nginx --namespace=kubectl-4774 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jun 18 10:57:25.131: INFO: stderr: "" +Jun 18 10:57:25.131: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:57:25.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4774" for this suite. 
+Jun 18 10:57:47.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:57:47.260: INFO: namespace kubectl-4774 deletion completed in 22.123802661s + +• [SLOW TEST:27.559 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl logs + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be able to retrieve and filter logs [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:57:47.261: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0777 on node default medium +Jun 18 10:57:47.306: INFO: Waiting up to 5m0s for pod "pod-e83e9c2f-91b7-11e9-8aef-6ab77b36fff7" in namespace "emptydir-6541" to be "success or failure" +Jun 18 10:57:47.312: INFO: Pod "pod-e83e9c2f-91b7-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.444611ms +Jun 18 10:57:49.316: INFO: Pod "pod-e83e9c2f-91b7-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010015281s +STEP: Saw pod success +Jun 18 10:57:49.317: INFO: Pod "pod-e83e9c2f-91b7-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:57:49.320: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-e83e9c2f-91b7-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 10:57:49.343: INFO: Waiting for pod pod-e83e9c2f-91b7-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:57:49.346: INFO: Pod pod-e83e9c2f-91b7-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:57:49.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6541" for this suite. 
+Jun 18 10:57:55.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:57:55.475: INFO: namespace emptydir-6541 deletion completed in 6.125715445s + +• [SLOW TEST:8.214 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[k8s.io] Docker Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:57:55.475: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test use defaults +Jun 18 10:57:55.539: INFO: Waiting up to 5m0s for pod "client-containers-ed244972-91b7-11e9-8aef-6ab77b36fff7" in namespace "containers-6401" to be "success or failure" +Jun 18 10:57:55.546: INFO: Pod "client-containers-ed244972-91b7-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.362186ms +Jun 18 10:57:57.550: INFO: Pod "client-containers-ed244972-91b7-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011462077s +Jun 18 10:57:59.555: INFO: Pod "client-containers-ed244972-91b7-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015924224s +STEP: Saw pod success +Jun 18 10:57:59.555: INFO: Pod "client-containers-ed244972-91b7-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:57:59.557: INFO: Trying to get logs from node ip-172-26-16-178 pod client-containers-ed244972-91b7-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 10:57:59.579: INFO: Waiting for pod client-containers-ed244972-91b7-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:57:59.582: INFO: Pod client-containers-ed244972-91b7-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:57:59.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-6401" for this suite. 
+Jun 18 10:58:05.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:58:05.710: INFO: namespace containers-6401 deletion completed in 6.122269645s + +• [SLOW TEST:10.235 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:58:05.710: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 10:58:05.784: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f3404c78-91b7-11e9-8d87-0a902858a792", Controller:(*bool)(0xc002a0134a), BlockOwnerDeletion:(*bool)(0xc002a0134b)}} +Jun 18 10:58:05.794: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f33e1020-91b7-11e9-8d87-0a902858a792", Controller:(*bool)(0xc002a01506), BlockOwnerDeletion:(*bool)(0xc002a01507)}} +Jun 18 10:58:05.802: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f33f1998-91b7-11e9-8d87-0a902858a792", Controller:(*bool)(0xc002a15256), BlockOwnerDeletion:(*bool)(0xc002a15257)}} +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:58:10.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-5511" for this suite. 
+Jun 18 10:58:16.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:58:16.943: INFO: namespace gc-5511 deletion completed in 6.123635764s + +• [SLOW TEST:11.232 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:58:16.943: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir volume type on node default medium +Jun 18 10:58:16.987: INFO: Waiting up to 5m0s for pod "pod-f9ef7258-91b7-11e9-8aef-6ab77b36fff7" in namespace "emptydir-5980" to be "success or failure" +Jun 18 10:58:16.991: INFO: Pod "pod-f9ef7258-91b7-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.62964ms +Jun 18 10:58:18.995: INFO: Pod "pod-f9ef7258-91b7-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008652108s +Jun 18 10:58:20.999: INFO: Pod "pod-f9ef7258-91b7-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012881801s +STEP: Saw pod success +Jun 18 10:58:20.999: INFO: Pod "pod-f9ef7258-91b7-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:58:21.002: INFO: Trying to get logs from node ip-172-26-30-38 pod pod-f9ef7258-91b7-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 10:58:21.022: INFO: Waiting for pod pod-f9ef7258-91b7-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:58:21.030: INFO: Pod pod-f9ef7258-91b7-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:58:21.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5980" for this suite. 
+Jun 18 10:58:27.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:58:27.157: INFO: namespace emptydir-5980 deletion completed in 6.123150317s + +• [SLOW TEST:10.214 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:58:27.158: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name secret-test-0007bb4d-91b8-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume secrets +Jun 18 10:58:27.396: INFO: Waiting up to 5m0s for pod "pod-secrets-001d4bf7-91b8-11e9-8aef-6ab77b36fff7" in namespace "secrets-517" to be "success or failure" +Jun 18 10:58:27.422: INFO: Pod "pod-secrets-001d4bf7-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 25.890559ms +Jun 18 10:58:29.426: INFO: Pod "pod-secrets-001d4bf7-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.029826741s +STEP: Saw pod success +Jun 18 10:58:29.426: INFO: Pod "pod-secrets-001d4bf7-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:58:29.429: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-secrets-001d4bf7-91b8-11e9-8aef-6ab77b36fff7 container secret-volume-test: +STEP: delete the pod +Jun 18 10:58:29.450: INFO: Waiting for pod pod-secrets-001d4bf7-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:58:29.453: INFO: Pod pod-secrets-001d4bf7-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:58:29.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-517" for this suite. 
+Jun 18 10:58:35.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:58:35.580: INFO: namespace secrets-517 deletion completed in 6.123612497s + +• [SLOW TEST:8.423 seconds] +[sig-storage] Secrets +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:58:35.580: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward api env vars +Jun 18 10:58:35.629: INFO: Waiting up to 5m0s for pod "downward-api-050c2171-91b8-11e9-8aef-6ab77b36fff7" in namespace "downward-api-2610" to be "success or failure" +Jun 18 10:58:35.636: INFO: Pod "downward-api-050c2171-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.956756ms +Jun 18 10:58:37.640: INFO: Pod "downward-api-050c2171-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010731458s +STEP: Saw pod success +Jun 18 10:58:37.640: INFO: Pod "downward-api-050c2171-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:58:37.642: INFO: Trying to get logs from node ip-172-26-16-178 pod downward-api-050c2171-91b8-11e9-8aef-6ab77b36fff7 container dapi-container: +STEP: delete the pod +Jun 18 10:58:37.667: INFO: Waiting for pod downward-api-050c2171-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:58:37.670: INFO: Pod downward-api-050c2171-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:58:37.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-2610" for this suite. 
+Jun 18 10:58:43.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:58:43.816: INFO: namespace downward-api-2610 deletion completed in 6.141043687s + +• [SLOW TEST:8.236 seconds] +[sig-node] Downward API +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:58:43.816: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap configmap-3843/configmap-test-09f3de2b-91b8-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume configMaps +Jun 18 10:58:43.864: INFO: Waiting up to 5m0s for pod "pod-configmaps-09f4f64d-91b8-11e9-8aef-6ab77b36fff7" in namespace "configmap-3843" to be "success or failure" +Jun 18 10:58:43.868: INFO: Pod "pod-configmaps-09f4f64d-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.003588ms +Jun 18 10:58:45.872: INFO: Pod "pod-configmaps-09f4f64d-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008171863s +STEP: Saw pod success +Jun 18 10:58:45.872: INFO: Pod "pod-configmaps-09f4f64d-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 10:58:45.875: INFO: Trying to get logs from node ip-172-26-30-38 pod pod-configmaps-09f4f64d-91b8-11e9-8aef-6ab77b36fff7 container env-test: +STEP: delete the pod +Jun 18 10:58:45.896: INFO: Waiting for pod pod-configmaps-09f4f64d-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 10:58:45.898: INFO: Pod pod-configmaps-09f4f64d-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:58:45.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3843" for this suite. 
+Jun 18 10:58:51.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:58:52.029: INFO: namespace configmap-3843 deletion completed in 6.127045372s + +• [SLOW TEST:8.213 seconds] +[sig-node] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32 + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:58:52.029: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a pod in the namespace +STEP: Waiting for the pod to have running status +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Verifying there are no pods in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:59:16.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-6933" for this suite. +Jun 18 10:59:22.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:59:22.290: INFO: namespace namespaces-6933 deletion completed in 6.122074334s +STEP: Destroying namespace "nsdeletetest-6178" for this suite. +Jun 18 10:59:22.293: INFO: Namespace nsdeletetest-6178 was already deleted +STEP: Destroying namespace "nsdeletetest-7630" for this suite. 
+Jun 18 10:59:28.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:59:28.413: INFO: namespace nsdeletetest-7630 deletion completed in 6.119668774s + +• [SLOW TEST:36.384 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:59:28.413: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to start watching from a specific resource version [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: modifying the configmap a second time +STEP: deleting the configmap +STEP: creating a watch on configmaps from the resource version returned by the first update +STEP: Expecting to observe notifications for all changes to the configmap after the first update +Jun 18 10:59:28.479: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-397,SelfLink:/api/v1/namespaces/watch-397/configmaps/e2e-watch-test-resource-version,UID:2488cf89-91b8-11e9-8d87-0a902858a792,ResourceVersion:33232,Generation:0,CreationTimestamp:2019-06-18 10:59:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Jun 18 10:59:28.479: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-397,SelfLink:/api/v1/namespaces/watch-397/configmaps/e2e-watch-test-resource-version,UID:2488cf89-91b8-11e9-8d87-0a902858a792,ResourceVersion:33233,Generation:0,CreationTimestamp:2019-06-18 10:59:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + 
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:59:28.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-397" for this suite. +Jun 18 10:59:34.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:59:34.608: INFO: namespace watch-397 deletion completed in 6.125950773s + +• [SLOW TEST:6.195 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should be able to start watching from a specific resource version [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl describe + should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:59:34.609: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 10:59:34.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 version --client' +Jun 18 10:59:34.693: INFO: stderr: "" +Jun 18 10:59:34.693: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"14\", GitVersion:\"v1.14.3\", GitCommit:\"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0\", GitTreeState:\"clean\", BuildDate:\"2019-06-06T01:44:30Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +Jun 18 10:59:34.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 create -f - --namespace=kubectl-1197' +Jun 18 10:59:34.890: INFO: stderr: "" +Jun 18 10:59:34.890: INFO: stdout: "replicationcontroller/redis-master created\n" +Jun 18 10:59:34.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 create -f - --namespace=kubectl-1197' +Jun 18 10:59:35.090: INFO: stderr: "" +Jun 18 10:59:35.090: INFO: stdout: "service/redis-master created\n" +STEP: Waiting for Redis master to start. +Jun 18 10:59:36.095: INFO: Selector matched 1 pods for map[app:redis] +Jun 18 10:59:36.095: INFO: Found 0 / 1 +Jun 18 10:59:37.094: INFO: Selector matched 1 pods for map[app:redis] +Jun 18 10:59:37.094: INFO: Found 1 / 1 +Jun 18 10:59:37.094: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 +Jun 18 10:59:37.098: INFO: Selector matched 1 pods for map[app:redis] +Jun 18 10:59:37.098: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Jun 18 10:59:37.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 describe pod redis-master-jzvtj --namespace=kubectl-1197' +Jun 18 10:59:37.175: INFO: stderr: "" +Jun 18 10:59:37.175: INFO: stdout: "Name: redis-master-jzvtj\nNamespace: kubectl-1197\nPriority: 0\nPriorityClassName: \nNode: ip-172-26-16-178/172.26.16.178\nStart Time: Tue, 18 Jun 2019 10:59:34 +0000\nLabels: app=redis\n role=master\nAnnotations: cni.projectcalico.org/podIP: 10.42.0.158/32\nStatus: Running\nIP: 10.42.0.158\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://f2a27afa263452226855c690f201545705e073faa16aa4c3fdcc6e0a48ed9548\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 18 Jun 2019 10:59:36 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-29xq8 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-29xq8:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-29xq8\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-1197/redis-master-jzvtj to ip-172-26-16-178\n Normal Pulled 2s kubelet, ip-172-26-16-178 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, ip-172-26-16-178 Created container redis-master\n Normal Started 1s kubelet, ip-172-26-16-178 Started container redis-master\n" +Jun 18 10:59:37.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 describe rc redis-master --namespace=kubectl-1197' +Jun 18 10:59:37.263: INFO: stderr: "" +Jun 18 10:59:37.263: INFO: stdout: "Name: redis-master\nNamespace: kubectl-1197\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: redis-master-jzvtj\n" +Jun 18 10:59:37.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 describe service redis-master --namespace=kubectl-1197' +Jun 18 10:59:37.334: INFO: stderr: "" +Jun 18 10:59:37.334: INFO: stdout: "Name: redis-master\nNamespace: kubectl-1197\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.43.85.74\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.42.0.158:6379\nSession Affinity: None\nEvents: \n" +Jun 18 10:59:37.338: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 describe node ip-172-26-16-178' +Jun 18 10:59:37.420: INFO: stderr: "" +Jun 18 10:59:37.420: INFO: stdout: "Name: ip-172-26-16-178\nRoles: controlplane,etcd,worker\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-172-26-16-178\n kubernetes.io/os=linux\n node-role.kubernetes.io/controlplane=true\n node-role.kubernetes.io/etcd=true\n node-role.kubernetes.io/worker=true\nAnnotations: flannel.alpha.coreos.com/backend-data: {\"VtepMAC\":\"26:59:29:60:d2:38\"}\n flannel.alpha.coreos.com/backend-type: vxlan\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 172.26.16.178\n node.alpha.kubernetes.io/ttl: 0\n rke.cattle.io/external-ip: 3.93.177.181\n rke.cattle.io/internal-ip: 172.26.16.178\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Tue, 18 Jun 2019 08:30:07 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 18 Jun 2019 10:59:11 +0000 Tue, 18 Jun 2019 08:30:07 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 18 Jun 2019 10:59:11 +0000 Tue, 18 Jun 2019 08:30:07 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 18 Jun 2019 10:59:11 +0000 Tue, 18 Jun 2019 08:30:07 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 18 Jun 2019 10:59:11 +0000 Tue, 18 Jun 2019 08:30:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.26.16.178\n Hostname: ip-172-26-16-178\nCapacity:\n cpu: 4\n ephemeral-storage: 325214472Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 16424504Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 299717656899\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 16322104Ki\n pods: 110\nSystem Info:\n Machine ID: 311936e2303b034fe7ef70182235b8cb\n System UUID: EC2C4873-16E3-78B4-8E2A-D0ED6FA1EB05\n Boot ID: 5227d013-374f-4270-b67c-b5dd03a48c4b\n Kernel Version: 4.15.0-1021-aws\n OS Image: Ubuntu 18.04.1 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.6\n Kubelet Version: v1.14.3\n Kube-Proxy Version: v1.14.3\nPodCIDR: 10.42.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n cattle-system cattle-node-agent-pk4wc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 148m\n cattle-system kube-api-auth-9nzcl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 148m\n heptio-sonobuoy sonobuoy-e2e-job-10fdfd8dfec5439f 0 (0%) 0 (0%) 0 (0%) 0 (0%) 32m\n heptio-sonobuoy sonobuoy-systemd-logs-daemon-set-29df6a374df24ffa-xvczp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 32m\n ingress-nginx nginx-ingress-controller-x7drh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 149m\n kube-system canal-kwvpm 250m (6%) 0 (0%) 0 (0%) 0 (0%) 149m\n kube-system coredns-86bc4b7c96-vms9l 100m (2%) 0 (0%) 70Mi (0%) 170Mi (1%) 149m\n kube-system coredns-autoscaler-5d5d49b8ff-7v6zn 20m (0%) 0 (0%) 10Mi (0%) 0 (0%) 149m\n kubectl-1197 redis-master-jzvtj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 370m (9%) 0 (0%)\n memory 80Mi (0%) 170Mi (1%)\n ephemeral-storage 0 (0%) 0 
(0%)\nEvents: \n" +Jun 18 10:59:37.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 describe namespace kubectl-1197' +Jun 18 10:59:37.493: INFO: stderr: "" +Jun 18 10:59:37.493: INFO: stdout: "Name: kubectl-1197\nLabels: e2e-framework=kubectl\n e2e-run=a71dca5f-91b3-11e9-8aef-6ab77b36fff7\nAnnotations: cattle.io/status:\n {\"Conditions\":[{\"Type\":\"ResourceQuotaInit\",\"Status\":\"True\",\"Message\":\"\",\"LastUpdateTime\":\"2019-06-18T10:59:35Z\"},{\"Type\":\"InitialRolesPopu...\n lifecycle.cattle.io/create.namespace-auth: true\nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 10:59:37.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1197" for this suite. +Jun 18 10:59:59.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 10:59:59.620: INFO: namespace kubectl-1197 deletion completed in 22.123350795s + +• [SLOW TEST:25.012 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl describe + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 10:59:59.621: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-map-3722d61e-91b8-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume configMaps +Jun 18 10:59:59.668: INFO: Waiting up to 5m0s for pod "pod-configmaps-3723d67b-91b8-11e9-8aef-6ab77b36fff7" in namespace "configmap-711" to be "success or failure" +Jun 18 10:59:59.672: INFO: Pod "pod-configmaps-3723d67b-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.235858ms +Jun 18 11:00:01.676: INFO: Pod "pod-configmaps-3723d67b-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007400741s +STEP: Saw pod success +Jun 18 11:00:01.676: INFO: Pod "pod-configmaps-3723d67b-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:00:01.679: INFO: Trying to get logs from node ip-172-26-30-38 pod pod-configmaps-3723d67b-91b8-11e9-8aef-6ab77b36fff7 container configmap-volume-test: +STEP: delete the pod +Jun 18 11:00:01.699: INFO: Waiting for pod pod-configmaps-3723d67b-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:00:01.703: INFO: Pod pod-configmaps-3723d67b-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:00:01.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-711" for this suite. +Jun 18 11:00:07.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:00:07.854: INFO: namespace configmap-711 deletion completed in 6.145083128s + +• [SLOW TEST:8.233 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:00:07.854: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl run pod + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1583 +[It] should create a pod from an image when restart is Never [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 18 11:00:07.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-175' +Jun 18 11:00:07.971: INFO: stderr: "" +Jun 18 11:00:07.971: INFO: stdout: "pod/e2e-test-nginx-pod created\n" +STEP: verifying the pod e2e-test-nginx-pod was created +[AfterEach] [k8s.io] Kubectl run pod + 
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1588 +Jun 18 11:00:07.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete pods e2e-test-nginx-pod --namespace=kubectl-175' +Jun 18 11:00:17.213: INFO: stderr: "" +Jun 18 11:00:17.213: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:00:17.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-175" for this suite. +Jun 18 11:00:23.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:00:23.342: INFO: namespace kubectl-175 deletion completed in 6.125235034s + +• [SLOW TEST:15.489 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl run pod + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create a pod from an image when restart is Never [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:00:23.342: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 11:00:23.390: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4546e5b9-91b8-11e9-8aef-6ab77b36fff7" in namespace "projected-9255" to be "success or failure" +Jun 18 11:00:23.408: INFO: Pod "downwardapi-volume-4546e5b9-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.994931ms +Jun 18 11:00:25.412: INFO: Pod "downwardapi-volume-4546e5b9-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.022038674s +STEP: Saw pod success +Jun 18 11:00:25.412: INFO: Pod "downwardapi-volume-4546e5b9-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:00:25.415: INFO: Trying to get logs from node ip-172-26-30-38 pod downwardapi-volume-4546e5b9-91b8-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 11:00:25.436: INFO: Waiting for pod downwardapi-volume-4546e5b9-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:00:25.442: INFO: Pod downwardapi-volume-4546e5b9-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:00:25.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9255" for this suite. +Jun 18 11:00:31.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:00:31.570: INFO: namespace projected-9255 deletion completed in 6.123312624s + +• [SLOW TEST:8.227 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:00:31.570: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-map-4a2df9ba-91b8-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume configMaps +Jun 18 11:00:31.618: INFO: Waiting up to 5m0s for pod "pod-configmaps-4a2f12a8-91b8-11e9-8aef-6ab77b36fff7" in namespace "configmap-6963" to be "success or failure" +Jun 18 11:00:31.623: INFO: Pod "pod-configmaps-4a2f12a8-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.66697ms +Jun 18 11:00:33.627: INFO: Pod "pod-configmaps-4a2f12a8-91b8-11e9-8aef-6ab77b36fff7": Phase="Running", Reason="", readiness=true. Elapsed: 2.008780874s +Jun 18 11:00:35.631: INFO: Pod "pod-configmaps-4a2f12a8-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012899553s +STEP: Saw pod success +Jun 18 11:00:35.631: INFO: Pod "pod-configmaps-4a2f12a8-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:00:35.634: INFO: Trying to get logs from node ip-172-26-30-38 pod pod-configmaps-4a2f12a8-91b8-11e9-8aef-6ab77b36fff7 container configmap-volume-test: +STEP: delete the pod +Jun 18 11:00:35.654: INFO: Waiting for pod pod-configmaps-4a2f12a8-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:00:35.658: INFO: Pod pod-configmaps-4a2f12a8-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:00:35.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6963" for this suite. +Jun 18 11:00:41.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:00:41.789: INFO: namespace configmap-6963 deletion completed in 6.127411991s + +• [SLOW TEST:10.219 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:00:41.789: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-50453fa6-91b8-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume configMaps +Jun 18 11:00:41.842: INFO: Waiting up to 5m0s for pod "pod-configmaps-5046afed-91b8-11e9-8aef-6ab77b36fff7" in namespace "configmap-2065" to be "success or failure" +Jun 18 11:00:41.846: INFO: Pod "pod-configmaps-5046afed-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16569ms +Jun 18 11:00:43.850: INFO: Pod "pod-configmaps-5046afed-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007982738s +STEP: Saw pod success +Jun 18 11:00:43.850: INFO: Pod "pod-configmaps-5046afed-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:00:43.853: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-configmaps-5046afed-91b8-11e9-8aef-6ab77b36fff7 container configmap-volume-test: +STEP: delete the pod +Jun 18 11:00:43.874: INFO: Waiting for pod pod-configmaps-5046afed-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:00:43.877: INFO: Pod pod-configmaps-5046afed-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:00:43.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2065" for this suite. +Jun 18 11:00:49.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:00:50.004: INFO: namespace configmap-2065 deletion completed in 6.122934824s + +• [SLOW TEST:8.215 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:00:50.004: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 11:00:50.050: INFO: Waiting up to 5m0s for pod "downwardapi-volume-552b38b6-91b8-11e9-8aef-6ab77b36fff7" in namespace "downward-api-3754" to be "success or failure" +Jun 18 11:00:50.055: INFO: Pod "downwardapi-volume-552b38b6-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.455942ms +Jun 18 11:00:52.063: INFO: Pod "downwardapi-volume-552b38b6-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.012184981s +STEP: Saw pod success +Jun 18 11:00:52.063: INFO: Pod "downwardapi-volume-552b38b6-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:00:52.066: INFO: Trying to get logs from node ip-172-26-16-178 pod downwardapi-volume-552b38b6-91b8-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 11:00:52.086: INFO: Waiting for pod downwardapi-volume-552b38b6-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:00:52.090: INFO: Pod downwardapi-volume-552b38b6-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:00:52.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3754" for this suite. +Jun 18 11:00:58.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:00:58.216: INFO: namespace downward-api-3754 deletion completed in 6.121435925s + +• [SLOW TEST:8.212 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:00:58.216: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 11:00:58.249: INFO: Creating ReplicaSet my-hostname-basic-5a0fa282-91b8-11e9-8aef-6ab77b36fff7 +Jun 18 11:00:58.261: INFO: Pod name my-hostname-basic-5a0fa282-91b8-11e9-8aef-6ab77b36fff7: Found 0 pods out of 1 +Jun 18 11:01:03.266: INFO: Pod name my-hostname-basic-5a0fa282-91b8-11e9-8aef-6ab77b36fff7: Found 1 pods out of 1 +Jun 18 11:01:03.266: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5a0fa282-91b8-11e9-8aef-6ab77b36fff7" is running +Jun 18 11:01:03.269: INFO: Pod "my-hostname-basic-5a0fa282-91b8-11e9-8aef-6ab77b36fff7-2pmj6" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-18 11:00:58 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-18 11:00:59 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2019-06-18 11:00:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-18 11:00:58 +0000 UTC Reason: Message:}]) +Jun 18 11:01:03.269: INFO: Trying to dial the pod +Jun 18 11:01:08.279: INFO: Controller my-hostname-basic-5a0fa282-91b8-11e9-8aef-6ab77b36fff7: Got expected result from replica 1 [my-hostname-basic-5a0fa282-91b8-11e9-8aef-6ab77b36fff7-2pmj6]: "my-hostname-basic-5a0fa282-91b8-11e9-8aef-6ab77b36fff7-2pmj6", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:01:08.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-8418" for this suite. +Jun 18 11:01:14.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:01:14.411: INFO: namespace replicaset-8418 deletion completed in 6.12796543s + +• [SLOW TEST:16.195 seconds] +[sig-apps] ReplicaSet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:01:14.411: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating projection with secret that has name projected-secret-test-63b77e6f-91b8-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume secrets +Jun 18 11:01:14.464: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-63b89497-91b8-11e9-8aef-6ab77b36fff7" in namespace "projected-7151" to be "success or failure" +Jun 18 11:01:14.468: INFO: Pod "pod-projected-secrets-63b89497-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.763386ms +Jun 18 11:01:16.472: INFO: Pod "pod-projected-secrets-63b89497-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007821173s +STEP: Saw pod success +Jun 18 11:01:16.472: INFO: Pod "pod-projected-secrets-63b89497-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:01:16.475: INFO: Trying to get logs from node ip-172-26-16-178 pod pod-projected-secrets-63b89497-91b8-11e9-8aef-6ab77b36fff7 container projected-secret-volume-test: +STEP: delete the pod +Jun 18 11:01:16.495: INFO: Waiting for pod pod-projected-secrets-63b89497-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:01:16.500: INFO: Pod pod-projected-secrets-63b89497-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:01:16.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7151" for this suite. +Jun 18 11:01:22.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:01:22.629: INFO: namespace projected-7151 deletion completed in 6.124958748s + +• [SLOW TEST:8.218 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:01:22.629: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating projection with secret that has name projected-secret-test-689d06eb-91b8-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume secrets +Jun 18 11:01:22.679: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-689e258e-91b8-11e9-8aef-6ab77b36fff7" in namespace "projected-1083" to be "success or failure" +Jun 18 11:01:22.685: INFO: Pod "pod-projected-secrets-689e258e-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.972676ms +Jun 18 11:01:24.690: INFO: Pod "pod-projected-secrets-689e258e-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010233616s +STEP: Saw pod success +Jun 18 11:01:24.690: INFO: Pod "pod-projected-secrets-689e258e-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:01:24.693: INFO: Trying to get logs from node ip-172-26-16-178 pod pod-projected-secrets-689e258e-91b8-11e9-8aef-6ab77b36fff7 container projected-secret-volume-test: +STEP: delete the pod +Jun 18 11:01:24.716: INFO: Waiting for pod pod-projected-secrets-689e258e-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:01:24.720: INFO: Pod pod-projected-secrets-689e258e-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:01:24.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1083" for this suite. +Jun 18 11:01:30.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:01:30.856: INFO: namespace projected-1083 deletion completed in 6.132809577s + +• [SLOW TEST:8.227 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[k8s.io] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:01:30.856: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod liveness-http in namespace container-probe-721 +Jun 18 11:01:32.905: INFO: Started pod liveness-http in namespace container-probe-721 +STEP: checking the pod's current state and verifying that restartCount is present +Jun 18 11:01:32.908: INFO: Initial restart count of pod liveness-http is 0 +Jun 18 11:01:48.945: INFO: Restart count of pod container-probe-721/liveness-http is now 1 (16.037089496s elapsed) +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:01:48.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: 
Destroying namespace "container-probe-721" for this suite. +Jun 18 11:01:54.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:01:55.109: INFO: namespace container-probe-721 deletion completed in 6.131162848s + +• [SLOW TEST:24.253 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:01:55.109: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating a watch on configmaps +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: closing the watch once it receives two notifications +Jun 18 11:01:55.161: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1597,SelfLink:/api/v1/namespaces/watch-1597/configmaps/e2e-watch-test-watch-closed,UID:7bf96a39-91b8-11e9-8d87-0a902858a792,ResourceVersion:33869,Generation:0,CreationTimestamp:2019-06-18 11:01:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 18 11:01:55.161: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1597,SelfLink:/api/v1/namespaces/watch-1597/configmaps/e2e-watch-test-watch-closed,UID:7bf96a39-91b8-11e9-8d87-0a902858a792,ResourceVersion:33870,Generation:0,CreationTimestamp:2019-06-18 11:01:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +STEP: modifying the configmap a second time, while the watch is closed +STEP: creating a new watch on configmaps from the last resource version observed by the first watch +STEP: deleting the configmap +STEP: 
Expecting to observe notifications for all changes to the configmap since the first watch closed +Jun 18 11:01:55.178: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1597,SelfLink:/api/v1/namespaces/watch-1597/configmaps/e2e-watch-test-watch-closed,UID:7bf96a39-91b8-11e9-8d87-0a902858a792,ResourceVersion:33871,Generation:0,CreationTimestamp:2019-06-18 11:01:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Jun 18 11:01:55.178: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1597,SelfLink:/api/v1/namespaces/watch-1597/configmaps/e2e-watch-test-watch-closed,UID:7bf96a39-91b8-11e9-8d87-0a902858a792,ResourceVersion:33872,Generation:0,CreationTimestamp:2019-06-18 11:01:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:01:55.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-1597" for this suite. 
+Jun 18 11:02:01.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:02:01.319: INFO: namespace watch-1597 deletion completed in 6.13710818s + +• [SLOW TEST:6.209 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:02:01.319: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 11:02:01.355: INFO: Creating deployment "test-recreate-deployment" +Jun 18 11:02:01.363: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Jun 18 11:02:01.370: INFO: deployment "test-recreate-deployment" doesn't have the required revision set +Jun 18 11:02:03.377: INFO: Waiting deployment "test-recreate-deployment" to complete +Jun 18 11:02:03.380: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Jun 18 11:02:03.387: INFO: Updating deployment test-recreate-deployment +Jun 18 11:02:03.387: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Jun 18 11:02:03.465: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-9799,SelfLink:/apis/apps/v1/namespaces/deployment-9799/deployments/test-recreate-deployment,UID:7faccead-91b8-11e9-8d87-0a902858a792,ResourceVersion:33943,Generation:2,CreationTimestamp:2019-06-18 11:02:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-06-18 11:02:03 +0000 UTC 2019-06-18 11:02:03 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-06-18 11:02:03 +0000 UTC 2019-06-18 11:02:01 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-c9cbd8684" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} + +Jun 18 11:02:03.469: INFO: New ReplicaSet "test-recreate-deployment-c9cbd8684" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-c9cbd8684,GenerateName:,Namespace:deployment-9799,SelfLink:/apis/apps/v1/namespaces/deployment-9799/replicasets/test-recreate-deployment-c9cbd8684,UID:80e9bc68-91b8-11e9-8999-0a07e7e61ed8,ResourceVersion:33941,Generation:1,CreationTimestamp:2019-06-18 11:02:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: c9cbd8684,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 7faccead-91b8-11e9-8d87-0a902858a792 0xc002ca89f0 0xc002ca89f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c9cbd8684,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
c9cbd8684,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 18 11:02:03.469: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Jun 18 11:02:03.469: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-7d57d5ff7c,GenerateName:,Namespace:deployment-9799,SelfLink:/apis/apps/v1/namespaces/deployment-9799/replicasets/test-recreate-deployment-7d57d5ff7c,UID:7faf0866-91b8-11e9-8999-0a07e7e61ed8,ResourceVersion:33930,Generation:2,CreationTimestamp:2019-06-18 11:02:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 7d57d5ff7c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 7faccead-91b8-11e9-8d87-0a902858a792 0xc002ca88e7 0xc002ca88e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 7d57d5ff7c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 7d57d5ff7c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 18 11:02:03.472: INFO: Pod "test-recreate-deployment-c9cbd8684-zplf4" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-c9cbd8684-zplf4,GenerateName:test-recreate-deployment-c9cbd8684-,Namespace:deployment-9799,SelfLink:/api/v1/namespaces/deployment-9799/pods/test-recreate-deployment-c9cbd8684-zplf4,UID:80ea7011-91b8-11e9-8999-0a07e7e61ed8,ResourceVersion:33942,Generation:0,CreationTimestamp:2019-06-18 11:02:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: c9cbd8684,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-c9cbd8684 80e9bc68-91b8-11e9-8999-0a07e7e61ed8 0xc002ca9220 0xc002ca9221}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-m9jzw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m9jzw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-m9jzw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-16-178,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca9290} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca92b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:02:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:02:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:02:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:02:03 +0000 UTC 
}],Message:,Reason:,HostIP:172.26.16.178,PodIP:,StartTime:2019-06-18 11:02:03 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:02:03.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-9799" for this suite. +Jun 18 11:02:09.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:02:09.603: INFO: namespace deployment-9799 deletion completed in 6.127382823s + +• [SLOW TEST:8.284 seconds] +[sig-apps] Deployment +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl api-versions + should check if v1 is in available api versions [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:02:09.603: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should check if v1 is in available api versions [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: validating api versions +Jun 18 11:02:09.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 api-versions' +Jun 18 11:02:09.709: INFO: stderr: "" +Jun 18 11:02:09.709: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncluster.cattle.io/v3\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd.projectcalico.org/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nmetrics.k8s.io/v1beta1\nmonitoring.coreos.com/v1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl 
client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:02:09.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-512" for this suite. +Jun 18 11:02:15.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:02:15.833: INFO: namespace kubectl-512 deletion completed in 6.120275848s + +• [SLOW TEST:6.230 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl api-versions + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should check if v1 is in available api versions [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:02:15.833: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount an API token into pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: getting the auto-created API token +STEP: reading a file in the container +Jun 18 11:02:18.403: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5755 pod-service-account-88a251a6-91b8-11e9-8aef-6ab77b36fff7 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container +Jun 18 11:02:18.614: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5755 pod-service-account-88a251a6-91b8-11e9-8aef-6ab77b36fff7 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container +Jun 18 11:02:18.823: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5755 pod-service-account-88a251a6-91b8-11e9-8aef-6ab77b36fff7 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +[AfterEach] [sig-auth] ServiceAccounts + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:02:19.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-5755" for this suite. 
+Jun 18 11:02:25.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:02:25.179: INFO: namespace svcaccounts-5755 deletion completed in 6.147385609s + +• [SLOW TEST:9.346 seconds] +[sig-auth] ServiceAccounts +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 + should mount an API token into pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:02:25.179: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating a watch on configmaps with a certain label +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: changing the label value of the configmap +STEP: Expecting to observe a delete notification for the watched object +Jun 18 11:02:25.274: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-296,SelfLink:/api/v1/namespaces/watch-296/configmaps/e2e-watch-test-label-changed,UID:8de6acf9-91b8-11e9-8d87-0a902858a792,ResourceVersion:34072,Generation:0,CreationTimestamp:2019-06-18 11:02:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 18 11:02:25.275: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-296,SelfLink:/api/v1/namespaces/watch-296/configmaps/e2e-watch-test-label-changed,UID:8de6acf9-91b8-11e9-8d87-0a902858a792,ResourceVersion:34073,Generation:0,CreationTimestamp:2019-06-18 11:02:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +Jun 18 11:02:25.275: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-296,SelfLink:/api/v1/namespaces/watch-296/configmaps/e2e-watch-test-label-changed,UID:8de6acf9-91b8-11e9-8d87-0a902858a792,ResourceVersion:34074,Generation:0,CreationTimestamp:2019-06-18 11:02:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +STEP: modifying the configmap a second time +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements +STEP: changing the label value of the configmap back +STEP: modifying the configmap a third time +STEP: deleting the configmap +STEP: Expecting to observe an add notification for the watched object when the label value was restored +Jun 18 11:02:35.305: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-296,SelfLink:/api/v1/namespaces/watch-296/configmaps/e2e-watch-test-label-changed,UID:8de6acf9-91b8-11e9-8d87-0a902858a792,ResourceVersion:34093,Generation:0,CreationTimestamp:2019-06-18 11:02:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Jun 18 11:02:35.305: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-296,SelfLink:/api/v1/namespaces/watch-296/configmaps/e2e-watch-test-label-changed,UID:8de6acf9-91b8-11e9-8d87-0a902858a792,ResourceVersion:34094,Generation:0,CreationTimestamp:2019-06-18 11:02:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} +Jun 18 11:02:35.305: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-296,SelfLink:/api/v1/namespaces/watch-296/configmaps/e2e-watch-test-label-changed,UID:8de6acf9-91b8-11e9-8d87-0a902858a792,ResourceVersion:34095,Generation:0,CreationTimestamp:2019-06-18 11:02:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:02:35.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-296" for this suite. 
+Jun 18 11:02:41.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:02:41.438: INFO: namespace watch-296 deletion completed in 6.128738125s + +• [SLOW TEST:16.258 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:02:41.438: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 11:02:41.487: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9796f1bb-91b8-11e9-8aef-6ab77b36fff7" in namespace "projected-6300" to be "success or failure" +Jun 18 11:02:41.492: INFO: Pod "downwardapi-volume-9796f1bb-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.288385ms +Jun 18 11:02:43.496: INFO: Pod "downwardapi-volume-9796f1bb-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009089834s +STEP: Saw pod success +Jun 18 11:02:43.496: INFO: Pod "downwardapi-volume-9796f1bb-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:02:43.499: INFO: Trying to get logs from node ip-172-26-17-1 pod downwardapi-volume-9796f1bb-91b8-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 11:02:43.518: INFO: Waiting for pod downwardapi-volume-9796f1bb-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:02:43.520: INFO: Pod downwardapi-volume-9796f1bb-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:02:43.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6300" for this suite. 
+Jun 18 11:02:49.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:02:49.651: INFO: namespace projected-6300 deletion completed in 6.12655217s + +• [SLOW TEST:8.213 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:02:49.651: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Jun 18 11:02:49.701: INFO: Waiting up to 5m0s for pod "pod-9c7c5661-91b8-11e9-8aef-6ab77b36fff7" in namespace "emptydir-1187" to be "success or failure" +Jun 18 11:02:49.706: INFO: Pod "pod-9c7c5661-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.924531ms +Jun 18 11:02:51.710: INFO: Pod "pod-9c7c5661-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009014209s +STEP: Saw pod success +Jun 18 11:02:51.710: INFO: Pod "pod-9c7c5661-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:02:51.713: INFO: Trying to get logs from node ip-172-26-16-178 pod pod-9c7c5661-91b8-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 11:02:51.732: INFO: Waiting for pod pod-9c7c5661-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:02:51.737: INFO: Pod pod-9c7c5661-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:02:51.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1187" for this suite. 
+Jun 18 11:02:57.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:02:57.866: INFO: namespace emptydir-1187 deletion completed in 6.124494219s + +• [SLOW TEST:8.215 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:02:57.866: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] creating/deleting custom resource definition objects works [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 11:02:57.904: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:02:58.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-4778" for this suite. 
+Jun 18 11:03:04.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:03:05.082: INFO: namespace custom-resource-definition-4778 deletion completed in 6.126463333s + +• [SLOW TEST:7.216 seconds] +[sig-api-machinery] CustomResourceDefinition resources +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + Simple CustomResourceDefinition + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 + creating/deleting custom resource definition objects works [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:03:05.082: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Jun 18 11:03:05.141: INFO: Waiting up to 5m0s for pod "pod-a5af581e-91b8-11e9-8aef-6ab77b36fff7" in namespace "emptydir-8071" to be "success or failure" +Jun 18 11:03:05.144: INFO: Pod "pod-a5af581e-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.394175ms +Jun 18 11:03:07.152: INFO: Pod "pod-a5af581e-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011367653s +Jun 18 11:03:09.158: INFO: Pod "pod-a5af581e-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016952118s +STEP: Saw pod success +Jun 18 11:03:09.158: INFO: Pod "pod-a5af581e-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:03:09.161: INFO: Trying to get logs from node ip-172-26-30-38 pod pod-a5af581e-91b8-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 11:03:09.185: INFO: Waiting for pod pod-a5af581e-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:03:09.189: INFO: Pod pod-a5af581e-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:03:09.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8071" for this suite. 
+Jun 18 11:03:15.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:03:15.320: INFO: namespace emptydir-8071 deletion completed in 6.124964378s + +• [SLOW TEST:10.238 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:03:15.320: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name secret-test-abca2259-91b8-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume secrets +Jun 18 11:03:15.381: INFO: Waiting up to 5m0s for pod "pod-secrets-abcb1be9-91b8-11e9-8aef-6ab77b36fff7" in namespace "secrets-9148" to be "success or failure" +Jun 18 11:03:15.384: INFO: Pod "pod-secrets-abcb1be9-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.704174ms +Jun 18 11:03:17.388: INFO: Pod "pod-secrets-abcb1be9-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006900967s +Jun 18 11:03:19.392: INFO: Pod "pod-secrets-abcb1be9-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010854782s +STEP: Saw pod success +Jun 18 11:03:19.392: INFO: Pod "pod-secrets-abcb1be9-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:03:19.395: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-secrets-abcb1be9-91b8-11e9-8aef-6ab77b36fff7 container secret-volume-test: +STEP: delete the pod +Jun 18 11:03:19.416: INFO: Waiting for pod pod-secrets-abcb1be9-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:03:19.419: INFO: Pod pod-secrets-abcb1be9-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:03:19.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9148" for this suite. 
+Jun 18 11:03:25.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:03:25.552: INFO: namespace secrets-9148 deletion completed in 6.128782491s + +• [SLOW TEST:10.232 seconds] +[sig-storage] Secrets +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:03:25.552: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating the pod +Jun 18 11:03:28.134: INFO: Successfully updated pod "annotationupdateb1e232b3-91b8-11e9-8aef-6ab77b36fff7" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:03:32.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1639" for this suite. 
+Jun 18 11:03:54.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:03:54.286: INFO: namespace projected-1639 deletion completed in 22.122997531s + +• [SLOW TEST:28.734 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:03:54.286: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod pod-subpath-test-secret-hnq2 +STEP: Creating a pod to test atomic-volume-subpath +Jun 18 11:03:54.340: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hnq2" in namespace "subpath-3661" to be "success or failure" +Jun 18 11:03:54.346: INFO: Pod "pod-subpath-test-secret-hnq2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.737635ms +Jun 18 11:03:56.350: INFO: Pod "pod-subpath-test-secret-hnq2": Phase="Running", Reason="", readiness=true. Elapsed: 2.009943294s +Jun 18 11:03:58.355: INFO: Pod "pod-subpath-test-secret-hnq2": Phase="Running", Reason="", readiness=true. Elapsed: 4.014112337s +Jun 18 11:04:00.359: INFO: Pod "pod-subpath-test-secret-hnq2": Phase="Running", Reason="", readiness=true. Elapsed: 6.018467134s +Jun 18 11:04:02.363: INFO: Pod "pod-subpath-test-secret-hnq2": Phase="Running", Reason="", readiness=true. Elapsed: 8.022641071s +Jun 18 11:04:04.367: INFO: Pod "pod-subpath-test-secret-hnq2": Phase="Running", Reason="", readiness=true. Elapsed: 10.026801052s +Jun 18 11:04:06.372: INFO: Pod "pod-subpath-test-secret-hnq2": Phase="Running", Reason="", readiness=true. Elapsed: 12.03113068s +Jun 18 11:04:08.376: INFO: Pod "pod-subpath-test-secret-hnq2": Phase="Running", Reason="", readiness=true. Elapsed: 14.035132295s +Jun 18 11:04:10.380: INFO: Pod "pod-subpath-test-secret-hnq2": Phase="Running", Reason="", readiness=true. Elapsed: 16.039392164s +Jun 18 11:04:12.384: INFO: Pod "pod-subpath-test-secret-hnq2": Phase="Running", Reason="", readiness=true. Elapsed: 18.043597001s +Jun 18 11:04:14.388: INFO: Pod "pod-subpath-test-secret-hnq2": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.047777534s +Jun 18 11:04:16.392: INFO: Pod "pod-subpath-test-secret-hnq2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.052069888s +STEP: Saw pod success +Jun 18 11:04:16.393: INFO: Pod "pod-subpath-test-secret-hnq2" satisfied condition "success or failure" +Jun 18 11:04:16.395: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-subpath-test-secret-hnq2 container test-container-subpath-secret-hnq2: +STEP: delete the pod +Jun 18 11:04:16.418: INFO: Waiting for pod pod-subpath-test-secret-hnq2 to disappear +Jun 18 11:04:16.421: INFO: Pod pod-subpath-test-secret-hnq2 no longer exists +STEP: Deleting pod pod-subpath-test-secret-hnq2 +Jun 18 11:04:16.421: INFO: Deleting pod "pod-subpath-test-secret-hnq2" in namespace "subpath-3661" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:04:16.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-3661" for this suite. +Jun 18 11:04:22.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:04:22.552: INFO: namespace subpath-3661 deletion completed in 6.12434739s + +• [SLOW TEST:28.266 seconds] +[sig-storage] Subpath +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:04:22.553: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating secret secrets-6958/secret-test-d3db5022-91b8-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume secrets +Jun 18 11:04:22.602: INFO: Waiting up to 5m0s for pod "pod-configmaps-d3dc5816-91b8-11e9-8aef-6ab77b36fff7" in namespace "secrets-6958" to be "success or failure" +Jun 18 11:04:22.608: INFO: Pod "pod-configmaps-d3dc5816-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.982999ms +Jun 18 11:04:24.612: INFO: Pod "pod-configmaps-d3dc5816-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010241015s +STEP: Saw pod success +Jun 18 11:04:24.612: INFO: Pod "pod-configmaps-d3dc5816-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:04:24.615: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-configmaps-d3dc5816-91b8-11e9-8aef-6ab77b36fff7 container env-test: +STEP: delete the pod +Jun 18 11:04:24.657: INFO: Waiting for pod pod-configmaps-d3dc5816-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:04:24.660: INFO: Pod pod-configmaps-d3dc5816-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-api-machinery] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:04:24.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-6958" for this suite. +Jun 18 11:04:30.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:04:30.800: INFO: namespace secrets-6958 deletion completed in 6.13609134s + +• [SLOW TEST:8.247 seconds] +[sig-api-machinery] Secrets +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:04:30.800: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name projected-secret-test-d8c565d8-91b8-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume secrets +Jun 18 11:04:30.847: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d8c66c1d-91b8-11e9-8aef-6ab77b36fff7" in namespace "projected-9135" to be "success or failure" +Jun 18 11:04:30.850: INFO: Pod "pod-projected-secrets-d8c66c1d-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.919797ms +Jun 18 11:04:32.854: INFO: Pod "pod-projected-secrets-d8c66c1d-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007062438s +STEP: Saw pod success +Jun 18 11:04:32.854: INFO: Pod "pod-projected-secrets-d8c66c1d-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:04:32.857: INFO: Trying to get logs from node ip-172-26-16-178 pod pod-projected-secrets-d8c66c1d-91b8-11e9-8aef-6ab77b36fff7 container secret-volume-test: +STEP: delete the pod +Jun 18 11:04:32.876: INFO: Waiting for pod pod-projected-secrets-d8c66c1d-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:04:32.879: INFO: Pod pod-projected-secrets-d8c66c1d-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:04:32.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9135" for this suite. +Jun 18 11:04:38.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:04:39.011: INFO: namespace projected-9135 deletion completed in 6.127355472s + +• [SLOW TEST:8.211 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Networking + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:04:39.011: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Performing setup for networking test in namespace pod-network-test-4309 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Jun 18 11:04:39.045: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +STEP: Creating test pods +Jun 18 11:05:03.141: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.42.1.154:8080/dial?request=hostName&protocol=http&host=10.42.0.166&port=8080&tries=1'] Namespace:pod-network-test-4309 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:05:03.141: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:05:03.305: INFO: Waiting for endpoints: map[] +Jun 18 11:05:03.309: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.42.1.154:8080/dial?request=hostName&protocol=http&host=10.42.2.156&port=8080&tries=1'] Namespace:pod-network-test-4309 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:05:03.309: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:05:03.485: INFO: Waiting for endpoints: map[] +Jun 18 11:05:03.489: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.42.1.154:8080/dial?request=hostName&protocol=http&host=10.42.1.153&port=8080&tries=1'] Namespace:pod-network-test-4309 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:05:03.489: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:05:03.664: INFO: Waiting for endpoints: map[] +[AfterEach] [sig-network] Networking + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:05:03.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-4309" for this suite. +Jun 18 11:05:15.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:05:15.799: INFO: namespace pod-network-test-4309 deletion completed in 12.129659094s + +• [SLOW TEST:36.787 seconds] +[sig-network] Networking +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 + Granular Checks: Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 + should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:05:15.799: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating projection with secret that has name projected-secret-test-map-f39878fa-91b8-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume secrets +Jun 18 11:05:15.851: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f3996a15-91b8-11e9-8aef-6ab77b36fff7" in namespace "projected-674" to be "success or failure" +Jun 18 11:05:15.854: INFO: Pod 
"pod-projected-secrets-f3996a15-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.352065ms +Jun 18 11:05:17.858: INFO: Pod "pod-projected-secrets-f3996a15-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007446114s +STEP: Saw pod success +Jun 18 11:05:17.859: INFO: Pod "pod-projected-secrets-f3996a15-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:05:17.862: INFO: Trying to get logs from node ip-172-26-16-178 pod pod-projected-secrets-f3996a15-91b8-11e9-8aef-6ab77b36fff7 container projected-secret-volume-test: +STEP: delete the pod +Jun 18 11:05:17.883: INFO: Waiting for pod pod-projected-secrets-f3996a15-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:05:17.885: INFO: Pod pod-projected-secrets-f3996a15-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:05:17.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-674" for this suite. +Jun 18 11:05:23.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:05:24.012: INFO: namespace projected-674 deletion completed in 6.123021241s + +• [SLOW TEST:8.213 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[k8s.io] Docker Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:05:24.012: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test override all +Jun 18 11:05:24.060: INFO: Waiting up to 5m0s for pod "client-containers-f87dbc6d-91b8-11e9-8aef-6ab77b36fff7" in namespace "containers-8829" to be "success or failure" +Jun 18 11:05:24.066: INFO: Pod "client-containers-f87dbc6d-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.754424ms +Jun 18 11:05:26.070: INFO: Pod "client-containers-f87dbc6d-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009780597s +STEP: Saw pod success +Jun 18 11:05:26.070: INFO: Pod "client-containers-f87dbc6d-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:05:26.073: INFO: Trying to get logs from node ip-172-26-30-38 pod client-containers-f87dbc6d-91b8-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 11:05:26.097: INFO: Waiting for pod client-containers-f87dbc6d-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:05:26.103: INFO: Pod client-containers-f87dbc6d-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:05:26.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-8829" for this suite. +Jun 18 11:05:32.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:05:32.232: INFO: namespace containers-8829 deletion completed in 6.124911788s + +• [SLOW TEST:8.220 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:05:32.232: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 11:05:32.285: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd64cd23-91b8-11e9-8aef-6ab77b36fff7" in namespace "downward-api-3181" to be "success or failure" +Jun 18 11:05:32.291: INFO: Pod "downwardapi-volume-fd64cd23-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.029168ms +Jun 18 11:05:34.295: INFO: Pod "downwardapi-volume-fd64cd23-91b8-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009387278s +Jun 18 11:05:36.299: INFO: Pod "downwardapi-volume-fd64cd23-91b8-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013509926s +STEP: Saw pod success +Jun 18 11:05:36.299: INFO: Pod "downwardapi-volume-fd64cd23-91b8-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:05:36.302: INFO: Trying to get logs from node ip-172-26-17-1 pod downwardapi-volume-fd64cd23-91b8-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 11:05:36.325: INFO: Waiting for pod downwardapi-volume-fd64cd23-91b8-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:05:36.328: INFO: Pod downwardapi-volume-fd64cd23-91b8-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:05:36.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3181" for this suite. +Jun 18 11:05:42.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:05:42.454: INFO: namespace downward-api-3181 deletion completed in 6.121453705s + +• [SLOW TEST:10.221 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-cli] Kubectl client [k8s.io] Update Demo + should scale a replication controller [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:05:42.454: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Update Demo + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:265 +[It] should scale a replication controller [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating a replication controller +Jun 18 11:05:42.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 create -f - --namespace=kubectl-5202' +Jun 18 11:05:42.681: INFO: stderr: "" +Jun 18 11:05:42.681: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. 
+Jun 18 11:05:42.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5202' +Jun 18 11:05:42.766: INFO: stderr: "" +Jun 18 11:05:42.766: INFO: stdout: "update-demo-nautilus-mhrbb update-demo-nautilus-pcdmg " +Jun 18 11:05:42.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-mhrbb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5202' +Jun 18 11:05:42.830: INFO: stderr: "" +Jun 18 11:05:42.830: INFO: stdout: "" +Jun 18 11:05:42.830: INFO: update-demo-nautilus-mhrbb is created but not running +Jun 18 11:05:47.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5202' +Jun 18 11:05:47.898: INFO: stderr: "" +Jun 18 11:05:47.898: INFO: stdout: "update-demo-nautilus-mhrbb update-demo-nautilus-pcdmg " +Jun 18 11:05:47.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-mhrbb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5202' +Jun 18 11:05:47.966: INFO: stderr: "" +Jun 18 11:05:47.966: INFO: stdout: "true" +Jun 18 11:05:47.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-mhrbb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5202' +Jun 18 11:05:48.031: INFO: stderr: "" +Jun 18 11:05:48.031: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 18 11:05:48.031: INFO: validating pod update-demo-nautilus-mhrbb +Jun 18 11:05:48.035: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 18 11:05:48.035: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 18 11:05:48.035: INFO: update-demo-nautilus-mhrbb is verified up and running +Jun 18 11:05:48.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-pcdmg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5202' +Jun 18 11:05:48.097: INFO: stderr: "" +Jun 18 11:05:48.097: INFO: stdout: "true" +Jun 18 11:05:48.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-pcdmg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5202' +Jun 18 11:05:48.161: INFO: stderr: "" +Jun 18 11:05:48.161: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 18 11:05:48.161: INFO: validating pod update-demo-nautilus-pcdmg +Jun 18 11:05:48.165: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 18 11:05:48.165: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Jun 18 11:05:48.165: INFO: update-demo-nautilus-pcdmg is verified up and running +STEP: scaling down the replication controller +Jun 18 11:05:48.167: INFO: scanned /root for discovery docs: +Jun 18 11:05:48.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5202' +Jun 18 11:05:49.262: INFO: stderr: "" +Jun 18 11:05:49.262: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Jun 18 11:05:49.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5202' +Jun 18 11:05:49.328: INFO: stderr: "" +Jun 18 11:05:49.328: INFO: stdout: "update-demo-nautilus-mhrbb update-demo-nautilus-pcdmg " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Jun 18 11:05:54.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5202' +Jun 18 11:05:54.395: INFO: stderr: "" +Jun 18 11:05:54.395: INFO: stdout: "update-demo-nautilus-mhrbb update-demo-nautilus-pcdmg " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Jun 18 11:05:59.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5202' +Jun 18 11:05:59.459: INFO: stderr: "" +Jun 18 11:05:59.459: INFO: stdout: "update-demo-nautilus-mhrbb " +Jun 18 11:05:59.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-mhrbb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5202' +Jun 18 11:05:59.522: INFO: stderr: "" +Jun 18 11:05:59.522: INFO: stdout: "true" +Jun 18 11:05:59.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-mhrbb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5202' +Jun 18 11:05:59.585: INFO: stderr: "" +Jun 18 11:05:59.585: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 18 11:05:59.585: INFO: validating pod update-demo-nautilus-mhrbb +Jun 18 11:05:59.589: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 18 11:05:59.589: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 18 11:05:59.589: INFO: update-demo-nautilus-mhrbb is verified up and running +STEP: scaling up the replication controller +Jun 18 11:05:59.590: INFO: scanned /root for discovery docs: +Jun 18 11:05:59.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5202' +Jun 18 11:06:00.679: INFO: stderr: "" +Jun 18 11:06:00.679: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. 
+Jun 18 11:06:00.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5202' +Jun 18 11:06:00.747: INFO: stderr: "" +Jun 18 11:06:00.747: INFO: stdout: "update-demo-nautilus-mhrbb update-demo-nautilus-v4zsj " +Jun 18 11:06:00.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-mhrbb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5202' +Jun 18 11:06:00.810: INFO: stderr: "" +Jun 18 11:06:00.810: INFO: stdout: "true" +Jun 18 11:06:00.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-mhrbb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5202' +Jun 18 11:06:00.874: INFO: stderr: "" +Jun 18 11:06:00.874: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 18 11:06:00.874: INFO: validating pod update-demo-nautilus-mhrbb +Jun 18 11:06:00.878: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 18 11:06:00.878: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 18 11:06:00.878: INFO: update-demo-nautilus-mhrbb is verified up and running +Jun 18 11:06:00.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-v4zsj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5202' +Jun 18 11:06:00.942: INFO: stderr: "" +Jun 18 11:06:00.942: INFO: stdout: "" +Jun 18 11:06:00.942: INFO: update-demo-nautilus-v4zsj is created but not running +Jun 18 11:06:05.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5202' +Jun 18 11:06:06.008: INFO: stderr: "" +Jun 18 11:06:06.008: INFO: stdout: "update-demo-nautilus-mhrbb update-demo-nautilus-v4zsj " +Jun 18 11:06:06.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-mhrbb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5202' +Jun 18 11:06:06.071: INFO: stderr: "" +Jun 18 11:06:06.071: INFO: stdout: "true" +Jun 18 11:06:06.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-mhrbb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5202' +Jun 18 11:06:06.134: INFO: stderr: "" +Jun 18 11:06:06.134: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 18 11:06:06.134: INFO: validating pod update-demo-nautilus-mhrbb +Jun 18 11:06:06.138: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 18 11:06:06.138: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Jun 18 11:06:06.138: INFO: update-demo-nautilus-mhrbb is verified up and running +Jun 18 11:06:06.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-v4zsj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5202' +Jun 18 11:06:06.201: INFO: stderr: "" +Jun 18 11:06:06.201: INFO: stdout: "true" +Jun 18 11:06:06.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-v4zsj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5202' +Jun 18 11:06:06.265: INFO: stderr: "" +Jun 18 11:06:06.265: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 18 11:06:06.265: INFO: validating pod update-demo-nautilus-v4zsj +Jun 18 11:06:06.270: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 18 11:06:06.270: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 18 11:06:06.270: INFO: update-demo-nautilus-v4zsj is verified up and running +STEP: using delete to clean up resources +Jun 18 11:06:06.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete --grace-period=0 --force -f - --namespace=kubectl-5202' +Jun 18 11:06:06.338: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 18 11:06:06.338: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Jun 18 11:06:06.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5202' +Jun 18 11:06:06.409: INFO: stderr: "No resources found.\n" +Jun 18 11:06:06.409: INFO: stdout: "" +Jun 18 11:06:06.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -l name=update-demo --namespace=kubectl-5202 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jun 18 11:06:06.474: INFO: stderr: "" +Jun 18 11:06:06.474: INFO: stdout: "update-demo-nautilus-mhrbb\nupdate-demo-nautilus-v4zsj\n" +Jun 18 11:06:06.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5202' +Jun 18 11:06:07.044: INFO: stderr: "No resources found.\n" +Jun 18 11:06:07.044: INFO: stdout: "" +Jun 18 11:06:07.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -l name=update-demo --namespace=kubectl-5202 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jun 18 11:06:07.110: INFO: stderr: "" +Jun 18 11:06:07.110: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:06:07.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5202" for this suite. 
+Jun 18 11:06:29.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:06:29.246: INFO: namespace kubectl-5202 deletion completed in 22.131391445s + +• [SLOW TEST:46.792 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Update Demo + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should scale a replication controller [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Networking + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:06:29.246: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Performing setup for networking test in namespace pod-network-test-2058 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Jun 18 11:06:29.283: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +STEP: Creating test pods +Jun 18 11:06:51.375: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.42.0.170:8080/dial?request=hostName&protocol=udp&host=10.42.1.157&port=8081&tries=1'] Namespace:pod-network-test-2058 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:06:51.375: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:06:51.525: INFO: Waiting for endpoints: map[] +Jun 18 11:06:51.528: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.42.0.170:8080/dial?request=hostName&protocol=udp&host=10.42.0.169&port=8081&tries=1'] Namespace:pod-network-test-2058 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:06:51.528: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:06:51.695: INFO: Waiting for endpoints: map[] +Jun 18 11:06:51.698: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.42.0.170:8080/dial?request=hostName&protocol=udp&host=10.42.2.159&port=8081&tries=1'] Namespace:pod-network-test-2058 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:06:51.698: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:06:51.866: INFO: Waiting for endpoints: map[] +[AfterEach] [sig-network] Networking + 
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:06:51.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-2058" for this suite. +Jun 18 11:07:13.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:07:13.993: INFO: namespace pod-network-test-2058 deletion completed in 22.122609745s + +• [SLOW TEST:44.747 seconds] +[sig-network] Networking +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 + Granular Checks: Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 + should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:07:13.993: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name projected-configmap-test-volume-3a0b539b-91b9-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume configMaps +Jun 18 11:07:14.045: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3a0c49ff-91b9-11e9-8aef-6ab77b36fff7" in namespace "projected-3431" to be "success or failure" +Jun 18 11:07:14.049: INFO: Pod "pod-projected-configmaps-3a0c49ff-91b9-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.662682ms +Jun 18 11:07:16.054: INFO: Pod "pod-projected-configmaps-3a0c49ff-91b9-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008889032s +STEP: Saw pod success +Jun 18 11:07:16.054: INFO: Pod "pod-projected-configmaps-3a0c49ff-91b9-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:07:16.056: INFO: Trying to get logs from node ip-172-26-30-38 pod pod-projected-configmaps-3a0c49ff-91b9-11e9-8aef-6ab77b36fff7 container projected-configmap-volume-test: +STEP: delete the pod +Jun 18 11:07:16.082: INFO: Waiting for pod pod-projected-configmaps-3a0c49ff-91b9-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:07:16.090: INFO: Pod pod-projected-configmaps-3a0c49ff-91b9-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:07:16.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3431" for this suite. +Jun 18 11:07:22.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:07:22.222: INFO: namespace projected-3431 deletion completed in 6.12822693s + +• [SLOW TEST:8.229 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:07:22.222: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 11:07:22.269: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ef2cd11-91b9-11e9-8aef-6ab77b36fff7" in namespace "projected-1244" to be "success or failure" +Jun 18 11:07:22.273: INFO: Pod "downwardapi-volume-3ef2cd11-91b9-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085462ms +Jun 18 11:07:24.277: INFO: Pod "downwardapi-volume-3ef2cd11-91b9-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008263697s +STEP: Saw pod success +Jun 18 11:07:24.277: INFO: Pod "downwardapi-volume-3ef2cd11-91b9-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:07:24.280: INFO: Trying to get logs from node ip-172-26-30-38 pod downwardapi-volume-3ef2cd11-91b9-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 11:07:24.303: INFO: Waiting for pod downwardapi-volume-3ef2cd11-91b9-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:07:24.309: INFO: Pod downwardapi-volume-3ef2cd11-91b9-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:07:24.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1244" for this suite. +Jun 18 11:07:30.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:07:30.451: INFO: namespace projected-1244 deletion completed in 6.136767332s + +• [SLOW TEST:8.229 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:07:30.451: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name secret-test-43da8494-91b9-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume secrets +Jun 18 11:07:30.538: INFO: Waiting up to 5m0s for pod "pod-secrets-43e1162a-91b9-11e9-8aef-6ab77b36fff7" in namespace "secrets-4839" to be "success or failure" +Jun 18 11:07:30.541: INFO: Pod "pod-secrets-43e1162a-91b9-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.353394ms +Jun 18 11:07:32.545: INFO: Pod "pod-secrets-43e1162a-91b9-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007468127s +STEP: Saw pod success +Jun 18 11:07:32.545: INFO: Pod "pod-secrets-43e1162a-91b9-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:07:32.548: INFO: Trying to get logs from node ip-172-26-30-38 pod pod-secrets-43e1162a-91b9-11e9-8aef-6ab77b36fff7 container secret-volume-test: +STEP: delete the pod +Jun 18 11:07:32.571: INFO: Waiting for pod pod-secrets-43e1162a-91b9-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:07:32.573: INFO: Pod pod-secrets-43e1162a-91b9-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:07:32.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-4839" for this suite. +Jun 18 11:07:38.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:07:38.715: INFO: namespace secrets-4839 deletion completed in 6.13665756s +STEP: Destroying namespace "secret-namespace-5186" for this suite. +Jun 18 11:07:44.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:07:44.845: INFO: namespace secret-namespace-5186 deletion completed in 6.130891922s + +• [SLOW TEST:14.395 seconds] +[sig-storage] Secrets +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-storage] Secrets + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:07:44.846: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name s-test-opt-del-4c6f0d71-91b9-11e9-8aef-6ab77b36fff7 +STEP: Creating secret with name s-test-opt-upd-4c6f0db1-91b9-11e9-8aef-6ab77b36fff7 +STEP: Creating the pod +STEP: Deleting secret s-test-opt-del-4c6f0d71-91b9-11e9-8aef-6ab77b36fff7 +STEP: Updating secret s-test-opt-upd-4c6f0db1-91b9-11e9-8aef-6ab77b36fff7 +STEP: Creating secret with name s-test-opt-create-4c6f0dcd-91b9-11e9-8aef-6ab77b36fff7 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:07:48.985: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready +STEP: Destroying namespace "secrets-9972" for this suite. +Jun 18 11:08:11.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:08:11.109: INFO: namespace secrets-9972 deletion completed in 22.120341751s + +• [SLOW TEST:26.264 seconds] +[sig-storage] Secrets +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:08:11.109: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward api env vars +Jun 18 11:08:11.155: INFO: Waiting up to 5m0s for pod "downward-api-5c166522-91b9-11e9-8aef-6ab77b36fff7" in namespace "downward-api-3488" to be "success or failure" +Jun 18 11:08:11.158: INFO: Pod "downward-api-5c166522-91b9-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.409011ms +Jun 18 11:08:13.163: INFO: Pod "downward-api-5c166522-91b9-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007651442s +STEP: Saw pod success +Jun 18 11:08:13.163: INFO: Pod "downward-api-5c166522-91b9-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:08:13.166: INFO: Trying to get logs from node ip-172-26-30-38 pod downward-api-5c166522-91b9-11e9-8aef-6ab77b36fff7 container dapi-container: +STEP: delete the pod +Jun 18 11:08:13.193: INFO: Waiting for pod downward-api-5c166522-91b9-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:08:13.199: INFO: Pod downward-api-5c166522-91b9-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:08:13.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3488" for this suite. 
+Jun 18 11:08:19.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:08:19.325: INFO: namespace downward-api-3488 deletion completed in 6.121316249s + +• [SLOW TEST:8.215 seconds] +[sig-node] Downward API +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:08:19.325: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 11:08:19.368: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60fb7369-91b9-11e9-8aef-6ab77b36fff7" in namespace "projected-7186" to be "success or failure" +Jun 18 11:08:19.373: INFO: Pod "downwardapi-volume-60fb7369-91b9-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.679336ms +Jun 18 11:08:21.378: INFO: Pod "downwardapi-volume-60fb7369-91b9-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009936521s +STEP: Saw pod success +Jun 18 11:08:21.378: INFO: Pod "downwardapi-volume-60fb7369-91b9-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:08:21.381: INFO: Trying to get logs from node ip-172-26-17-1 pod downwardapi-volume-60fb7369-91b9-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 11:08:21.401: INFO: Waiting for pod downwardapi-volume-60fb7369-91b9-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:08:21.404: INFO: Pod downwardapi-volume-60fb7369-91b9-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:08:21.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7186" for this suite. 
+Jun 18 11:08:27.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:08:27.532: INFO: namespace projected-7186 deletion completed in 6.123695047s + +• [SLOW TEST:8.208 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[k8s.io] Docker Containers + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:08:27.533: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test override arguments +Jun 18 11:08:27.607: INFO: Waiting up to 5m0s for pod "client-containers-65e395b9-91b9-11e9-8aef-6ab77b36fff7" in namespace "containers-2842" to be "success or failure" +Jun 18 11:08:27.618: INFO: Pod "client-containers-65e395b9-91b9-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.557091ms +Jun 18 11:08:29.621: INFO: Pod "client-containers-65e395b9-91b9-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0139297s +STEP: Saw pod success +Jun 18 11:08:29.621: INFO: Pod "client-containers-65e395b9-91b9-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:08:29.624: INFO: Trying to get logs from node ip-172-26-17-1 pod client-containers-65e395b9-91b9-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 11:08:29.645: INFO: Waiting for pod client-containers-65e395b9-91b9-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:08:29.647: INFO: Pod client-containers-65e395b9-91b9-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:08:29.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-2842" for this suite. 
+Jun 18 11:08:35.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:08:35.778: INFO: namespace containers-2842 deletion completed in 6.127048261s + +• [SLOW TEST:8.246 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update + should support rolling-update to same image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:08:35.778: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl rolling-update + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 +[It] should support rolling-update to same image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 18 11:08:35.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1300' +Jun 18 11:08:36.090: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Jun 18 11:08:36.090: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" +STEP: verifying the rc e2e-test-nginx-rc was created +Jun 18 11:08:36.102: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 +STEP: rolling-update to same image controller +Jun 18 11:08:36.108: INFO: scanned /root for discovery docs: +Jun 18 11:08:36.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1300' +Jun 18 11:08:51.878: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" +Jun 18 11:08:51.878: INFO: stdout: "Created e2e-test-nginx-rc-8f967261c37704e9ccb1993448d1c4e9\nScaling up e2e-test-nginx-rc-8f967261c37704e9ccb1993448d1c4e9 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-8f967261c37704e9ccb1993448d1c4e9 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-8f967261c37704e9ccb1993448d1c4e9 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" +Jun 18 11:08:51.878: INFO: stdout: "Created e2e-test-nginx-rc-8f967261c37704e9ccb1993448d1c4e9\nScaling up e2e-test-nginx-rc-8f967261c37704e9ccb1993448d1c4e9 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-8f967261c37704e9ccb1993448d1c4e9 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-8f967261c37704e9ccb1993448d1c4e9 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" +STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. +Jun 18 11:08:51.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-1300' +Jun 18 11:08:51.944: INFO: stderr: "" +Jun 18 11:08:51.944: INFO: stdout: "e2e-test-nginx-rc-8f967261c37704e9ccb1993448d1c4e9-c597x " +Jun 18 11:08:51.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods e2e-test-nginx-rc-8f967261c37704e9ccb1993448d1c4e9-c597x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1300' +Jun 18 11:08:52.008: INFO: stderr: "" +Jun 18 11:08:52.008: INFO: stdout: "true" +Jun 18 11:08:52.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods e2e-test-nginx-rc-8f967261c37704e9ccb1993448d1c4e9-c597x -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1300' +Jun 18 11:08:52.071: INFO: stderr: "" +Jun 18 11:08:52.071: INFO: stdout: "docker.io/library/nginx:1.14-alpine" +Jun 18 11:08:52.071: INFO: e2e-test-nginx-rc-8f967261c37704e9ccb1993448d1c4e9-c597x is verified up and running +[AfterEach] [k8s.io] Kubectl rolling-update + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 +Jun 18 11:08:52.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete rc e2e-test-nginx-rc --namespace=kubectl-1300' +Jun 18 11:08:52.142: INFO: stderr: "" +Jun 18 11:08:52.142: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:08:52.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1300" for this suite. +Jun 18 11:08:58.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:08:58.297: INFO: namespace kubectl-1300 deletion completed in 6.141322266s + +• [SLOW TEST:22.519 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl rolling-update + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should support rolling-update to same image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:08:58.297: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name secret-test-map-7836c2b7-91b9-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume secrets +Jun 18 11:08:58.348: INFO: Waiting up to 5m0s for pod "pod-secrets-7837be90-91b9-11e9-8aef-6ab77b36fff7" in namespace "secrets-1875" to be "success or failure" +Jun 18 11:08:58.353: INFO: Pod "pod-secrets-7837be90-91b9-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.178997ms +Jun 18 11:09:00.358: INFO: Pod "pod-secrets-7837be90-91b9-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009744104s +STEP: Saw pod success +Jun 18 11:09:00.358: INFO: Pod "pod-secrets-7837be90-91b9-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:09:00.361: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-secrets-7837be90-91b9-11e9-8aef-6ab77b36fff7 container secret-volume-test: +STEP: delete the pod +Jun 18 11:09:00.382: INFO: Waiting for pod pod-secrets-7837be90-91b9-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:09:00.385: INFO: Pod pod-secrets-7837be90-91b9-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:09:00.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1875" for this suite. +Jun 18 11:09:06.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:09:06.511: INFO: namespace secrets-1875 deletion completed in 6.121964787s + +• [SLOW TEST:8.214 seconds] +[sig-storage] Secrets +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:09:06.511: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod pod-subpath-test-configmap-2tbg +STEP: Creating a pod to test atomic-volume-subpath +Jun 18 11:09:06.564: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2tbg" in namespace "subpath-9123" to be "success or failure" +Jun 18 11:09:06.567: INFO: Pod "pod-subpath-test-configmap-2tbg": Phase="Pending", Reason="", readiness=false. Elapsed: 3.478374ms +Jun 18 11:09:08.572: INFO: Pod "pod-subpath-test-configmap-2tbg": Phase="Running", Reason="", readiness=true. Elapsed: 2.007642515s +Jun 18 11:09:10.575: INFO: Pod "pod-subpath-test-configmap-2tbg": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.011382359s +Jun 18 11:09:12.579: INFO: Pod "pod-subpath-test-configmap-2tbg": Phase="Running", Reason="", readiness=true. Elapsed: 6.015465933s +Jun 18 11:09:14.584: INFO: Pod "pod-subpath-test-configmap-2tbg": Phase="Running", Reason="", readiness=true. Elapsed: 8.019593306s +Jun 18 11:09:16.588: INFO: Pod "pod-subpath-test-configmap-2tbg": Phase="Running", Reason="", readiness=true. Elapsed: 10.023582893s +Jun 18 11:09:18.591: INFO: Pod "pod-subpath-test-configmap-2tbg": Phase="Running", Reason="", readiness=true. Elapsed: 12.027453568s +Jun 18 11:09:20.597: INFO: Pod "pod-subpath-test-configmap-2tbg": Phase="Running", Reason="", readiness=true. Elapsed: 14.032905745s +Jun 18 11:09:22.601: INFO: Pod "pod-subpath-test-configmap-2tbg": Phase="Running", Reason="", readiness=true. Elapsed: 16.036863966s +Jun 18 11:09:24.605: INFO: Pod "pod-subpath-test-configmap-2tbg": Phase="Running", Reason="", readiness=true. Elapsed: 18.040755434s +Jun 18 11:09:26.609: INFO: Pod "pod-subpath-test-configmap-2tbg": Phase="Running", Reason="", readiness=true. Elapsed: 20.044657773s +Jun 18 11:09:28.613: INFO: Pod "pod-subpath-test-configmap-2tbg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.048660465s +STEP: Saw pod success +Jun 18 11:09:28.613: INFO: Pod "pod-subpath-test-configmap-2tbg" satisfied condition "success or failure" +Jun 18 11:09:28.616: INFO: Trying to get logs from node ip-172-26-30-38 pod pod-subpath-test-configmap-2tbg container test-container-subpath-configmap-2tbg: +STEP: delete the pod +Jun 18 11:09:28.640: INFO: Waiting for pod pod-subpath-test-configmap-2tbg to disappear +Jun 18 11:09:28.646: INFO: Pod pod-subpath-test-configmap-2tbg no longer exists +STEP: Deleting pod pod-subpath-test-configmap-2tbg +Jun 18 11:09:28.646: INFO: Deleting pod "pod-subpath-test-configmap-2tbg" in namespace "subpath-9123" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:09:28.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-9123" for this suite. 
+Jun 18 11:09:34.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:09:34.800: INFO: namespace subpath-9123 deletion completed in 6.148264773s + +• [SLOW TEST:28.289 seconds] +[sig-storage] Subpath +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:09:34.801: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should retry creating failed daemon pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Jun 18 11:09:34.886: INFO: Number of nodes with available pods: 0 +Jun 18 11:09:34.886: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 11:09:35.894: INFO: Number of nodes with available pods: 0 +Jun 18 11:09:35.894: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 11:09:36.895: INFO: Number of nodes with available pods: 2 +Jun 18 11:09:36.895: INFO: Node ip-172-26-17-1 is running more than one daemon pod +Jun 18 11:09:37.895: INFO: Number of nodes with available pods: 3 +Jun 18 11:09:37.895: INFO: Number of running nodes: 3, number of available pods: 3 +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. +Jun 18 11:09:37.916: INFO: Number of nodes with available pods: 2 +Jun 18 11:09:37.916: INFO: Node ip-172-26-17-1 is running more than one daemon pod +Jun 18 11:09:38.924: INFO: Number of nodes with available pods: 2 +Jun 18 11:09:38.924: INFO: Node ip-172-26-17-1 is running more than one daemon pod +Jun 18 11:09:39.924: INFO: Number of nodes with available pods: 3 +Jun 18 11:09:39.924: INFO: Number of running nodes: 3, number of available pods: 3 +STEP: Wait for the failed daemon pod to be completely deleted. 
+[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4302, will wait for the garbage collector to delete the pods +Jun 18 11:09:39.992: INFO: Deleting DaemonSet.extensions daemon-set took: 7.807702ms +Jun 18 11:09:40.492: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.243815ms +Jun 18 11:09:47.696: INFO: Number of nodes with available pods: 0 +Jun 18 11:09:47.696: INFO: Number of running nodes: 0, number of available pods: 0 +Jun 18 11:09:47.699: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4302/daemonsets","resourceVersion":"36017"},"items":null} + +Jun 18 11:09:47.702: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4302/pods","resourceVersion":"36017"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:09:47.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-4302" for this suite. +Jun 18 11:09:53.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:09:53.843: INFO: namespace daemonsets-4302 deletion completed in 6.125063264s + +• [SLOW TEST:19.042 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should retry creating failed daemon pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[k8s.io] Kubelet when scheduling a read only busybox container + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:09:53.843: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:09:55.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-6973" for this 
suite. +Jun 18 11:10:39.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:10:40.036: INFO: namespace kubelet-test-6973 deletion completed in 44.122998881s + +• [SLOW TEST:46.192 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when scheduling a read only busybox container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:10:40.036: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 +Jun 18 11:10:40.073: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jun 18 11:10:40.079: INFO: Waiting for terminating namespaces to be deleted... 
+Jun 18 11:10:40.082: INFO: +Logging pods the kubelet thinks is on node ip-172-26-16-178 before test +Jun 18 11:10:40.089: INFO: rke-coredns-addon-deploy-job-4b9ct from kube-system started at 2019-06-18 08:30:22 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.089: INFO: Container rke-coredns-addon-pod ready: false, restart count 0 +Jun 18 11:10:40.089: INFO: rke-metrics-addon-deploy-job-f4q28 from kube-system started at 2019-06-18 08:30:27 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.089: INFO: Container rke-metrics-addon-pod ready: false, restart count 0 +Jun 18 11:10:40.089: INFO: coredns-autoscaler-5d5d49b8ff-7v6zn from kube-system started at 2019-06-18 08:30:29 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.089: INFO: Container autoscaler ready: true, restart count 0 +Jun 18 11:10:40.089: INFO: rke-network-plugin-deploy-job-c76n7 from kube-system started at 2019-06-18 08:30:17 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.089: INFO: Container rke-network-plugin-pod ready: false, restart count 0 +Jun 18 11:10:40.089: INFO: canal-kwvpm from kube-system started at 2019-06-18 08:30:20 +0000 UTC (2 container statuses recorded) +Jun 18 11:10:40.089: INFO: Container calico-node ready: true, restart count 0 +Jun 18 11:10:40.089: INFO: Container kube-flannel ready: true, restart count 0 +Jun 18 11:10:40.089: INFO: coredns-86bc4b7c96-vms9l from kube-system started at 2019-06-18 08:30:28 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.089: INFO: Container coredns ready: true, restart count 0 +Jun 18 11:10:40.089: INFO: nginx-ingress-controller-x7drh from ingress-nginx started at 2019-06-18 08:30:34 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.089: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Jun 18 11:10:40.089: INFO: kube-api-auth-9nzcl from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.089: INFO: Container kube-api-auth ready: true, restart count 0 +Jun 18 11:10:40.089: INFO: sonobuoy-systemd-logs-daemon-set-29df6a374df24ffa-xvczp from heptio-sonobuoy started at 2019-06-18 10:27:18 +0000 UTC (2 container statuses recorded) +Jun 18 11:10:40.089: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 18 11:10:40.089: INFO: Container systemd-logs ready: true, restart count 0 +Jun 18 11:10:40.089: INFO: rke-ingress-controller-deploy-job-697mh from kube-system started at 2019-06-18 08:30:32 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.089: INFO: Container rke-ingress-controller-pod ready: false, restart count 0 +Jun 18 11:10:40.089: INFO: cattle-node-agent-pk4wc from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.089: INFO: Container agent ready: true, restart count 0 +Jun 18 11:10:40.089: INFO: sonobuoy-e2e-job-10fdfd8dfec5439f from heptio-sonobuoy started at 2019-06-18 10:27:18 +0000 UTC (2 container statuses recorded) +Jun 18 11:10:40.089: INFO: Container e2e ready: true, restart count 0 +Jun 18 11:10:40.089: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 18 11:10:40.089: INFO: +Logging pods the kubelet thinks is on node ip-172-26-17-1 before test +Jun 18 11:10:40.094: INFO: nginx-ingress-controller-98bp5 from ingress-nginx started at 2019-06-18 08:30:37 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.094: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Jun 18 11:10:40.094: INFO: sonobuoy from 
heptio-sonobuoy started at 2019-06-18 10:27:17 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.094: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jun 18 11:10:40.094: INFO: canal-9q452 from kube-system started at 2019-06-18 08:30:20 +0000 UTC (2 container statuses recorded) +Jun 18 11:10:40.094: INFO: Container calico-node ready: true, restart count 0 +Jun 18 11:10:40.094: INFO: Container kube-flannel ready: true, restart count 0 +Jun 18 11:10:40.094: INFO: kube-api-auth-6mld7 from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.094: INFO: Container kube-api-auth ready: true, restart count 0 +Jun 18 11:10:40.094: INFO: cattle-cluster-agent-6b589fd864-hhp9v from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.094: INFO: Container cluster-register ready: true, restart count 0 +Jun 18 11:10:40.094: INFO: cattle-node-agent-8k6f2 from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.094: INFO: Container agent ready: true, restart count 0 +Jun 18 11:10:40.095: INFO: sonobuoy-systemd-logs-daemon-set-29df6a374df24ffa-j5st8 from heptio-sonobuoy started at 2019-06-18 10:27:18 +0000 UTC (2 container statuses recorded) +Jun 18 11:10:40.095: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 18 11:10:40.095: INFO: Container systemd-logs ready: true, restart count 0 +Jun 18 11:10:40.095: INFO: +Logging pods the kubelet thinks is on node ip-172-26-30-38 before test +Jun 18 11:10:40.099: INFO: sonobuoy-systemd-logs-daemon-set-29df6a374df24ffa-mmnqj from heptio-sonobuoy started at 2019-06-18 10:27:18 +0000 UTC (2 container statuses recorded) +Jun 18 11:10:40.099: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 18 11:10:40.099: INFO: Container systemd-logs ready: true, restart count 0 +Jun 18 11:10:40.099: INFO: default-http-backend-5954bd5d8c-t6btz from ingress-nginx started at 2019-06-18 08:30:34 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.099: INFO: Container default-http-backend ready: true, restart count 0 +Jun 18 11:10:40.099: INFO: cattle-node-agent-pk6fl from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.099: INFO: Container agent ready: true, restart count 0 +Jun 18 11:10:40.099: INFO: canal-wnpt7 from kube-system started at 2019-06-18 08:30:20 +0000 UTC (2 container statuses recorded) +Jun 18 11:10:40.099: INFO: Container calico-node ready: true, restart count 0 +Jun 18 11:10:40.099: INFO: Container kube-flannel ready: true, restart count 0 +Jun 18 11:10:40.099: INFO: metrics-server-7f6bd4c888-n4w2m from kube-system started at 2019-06-18 08:30:29 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.099: INFO: Container metrics-server ready: true, restart count 0 +Jun 18 11:10:40.099: INFO: nginx-ingress-controller-nqmmq from ingress-nginx started at 2019-06-18 08:30:34 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.099: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Jun 18 11:10:40.099: INFO: kube-api-auth-j5sk5 from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 11:10:40.099: INFO: Container kube-api-auth ready: true, restart count 0 +[It] validates that NodeSelector is respected if matching [Conformance] + 
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-b6152197-91b9-11e9-8aef-6ab77b36fff7 42 +STEP: Trying to relaunch the pod, now with labels. +STEP: removing the label kubernetes.io/e2e-b6152197-91b9-11e9-8aef-6ab77b36fff7 off the node ip-172-26-17-1 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-b6152197-91b9-11e9-8aef-6ab77b36fff7 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:10:44.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-3063" for this suite. +Jun 18 11:11:02.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:11:02.315: INFO: namespace sched-pred-3063 deletion completed in 18.136299031s +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 + +• [SLOW TEST:22.279 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 + validates that NodeSelector is respected if matching [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:11:02.315: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 11:11:02.378: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c224578a-91b9-11e9-8aef-6ab77b36fff7" in namespace "downward-api-6669" to be "success or failure" +Jun 18 11:11:02.382: INFO: Pod "downwardapi-volume-c224578a-91b9-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.47376ms +Jun 18 11:11:04.387: INFO: Pod "downwardapi-volume-c224578a-91b9-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009151213s +Jun 18 11:11:06.401: INFO: Pod "downwardapi-volume-c224578a-91b9-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023295425s +STEP: Saw pod success +Jun 18 11:11:06.401: INFO: Pod "downwardapi-volume-c224578a-91b9-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:11:06.404: INFO: Trying to get logs from node ip-172-26-30-38 pod downwardapi-volume-c224578a-91b9-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 11:11:06.425: INFO: Waiting for pod downwardapi-volume-c224578a-91b9-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:11:06.429: INFO: Pod downwardapi-volume-c224578a-91b9-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:11:06.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6669" for this suite. +Jun 18 11:11:12.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:11:12.559: INFO: namespace downward-api-6669 deletion completed in 6.125462525s + +• [SLOW TEST:10.244 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:11:12.559: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 +[It] should invoke init containers on a RestartAlways pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +Jun 18 11:11:12.594: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:11:16.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace 
"init-container-189" for this suite. +Jun 18 11:11:38.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:11:38.773: INFO: namespace init-container-189 deletion completed in 22.125507445s + +• [SLOW TEST:26.214 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:11:38.774: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward api env vars +Jun 18 11:11:38.816: INFO: Waiting up to 5m0s for pod "downward-api-d7dcf0e3-91b9-11e9-8aef-6ab77b36fff7" in namespace "downward-api-1501" to be "success or failure" +Jun 18 11:11:38.821: INFO: Pod "downward-api-d7dcf0e3-91b9-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.738802ms +Jun 18 11:11:40.825: INFO: Pod "downward-api-d7dcf0e3-91b9-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008893842s +Jun 18 11:11:42.829: INFO: Pod "downward-api-d7dcf0e3-91b9-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012760772s +STEP: Saw pod success +Jun 18 11:11:42.829: INFO: Pod "downward-api-d7dcf0e3-91b9-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:11:42.832: INFO: Trying to get logs from node ip-172-26-30-38 pod downward-api-d7dcf0e3-91b9-11e9-8aef-6ab77b36fff7 container dapi-container: +STEP: delete the pod +Jun 18 11:11:42.853: INFO: Waiting for pod downward-api-d7dcf0e3-91b9-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:11:42.860: INFO: Pod downward-api-d7dcf0e3-91b9-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:11:42.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1501" for this suite. 
+Jun 18 11:11:48.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:11:48.984: INFO: namespace downward-api-1501 deletion completed in 6.11936903s + +• [SLOW TEST:10.210 seconds] +[sig-node] Downward API +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:11:48.984: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name projected-configmap-test-volume-map-ddf3da93-91b9-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume configMaps +Jun 18 11:11:49.037: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ddf4d836-91b9-11e9-8aef-6ab77b36fff7" in namespace "projected-5167" to be "success or failure" +Jun 18 11:11:49.040: INFO: Pod "pod-projected-configmaps-ddf4d836-91b9-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.930378ms +Jun 18 11:11:51.044: INFO: Pod "pod-projected-configmaps-ddf4d836-91b9-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006902755s +Jun 18 11:11:53.048: INFO: Pod "pod-projected-configmaps-ddf4d836-91b9-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010876925s +STEP: Saw pod success +Jun 18 11:11:53.048: INFO: Pod "pod-projected-configmaps-ddf4d836-91b9-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:11:53.051: INFO: Trying to get logs from node ip-172-26-16-178 pod pod-projected-configmaps-ddf4d836-91b9-11e9-8aef-6ab77b36fff7 container projected-configmap-volume-test: +STEP: delete the pod +Jun 18 11:11:53.088: INFO: Waiting for pod pod-projected-configmaps-ddf4d836-91b9-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:11:53.091: INFO: Pod pod-projected-configmaps-ddf4d836-91b9-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:11:53.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5167" for this suite. 
+Jun 18 11:11:59.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:11:59.229: INFO: namespace projected-5167 deletion completed in 6.125065019s + +• [SLOW TEST:10.245 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:11:59.229: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 11:11:59.276: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e40e5210-91b9-11e9-8aef-6ab77b36fff7" in namespace "downward-api-353" to be "success or failure" +Jun 18 11:11:59.280: INFO: Pod "downwardapi-volume-e40e5210-91b9-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.284604ms +Jun 18 11:12:01.284: INFO: Pod "downwardapi-volume-e40e5210-91b9-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008701839s +STEP: Saw pod success +Jun 18 11:12:01.284: INFO: Pod "downwardapi-volume-e40e5210-91b9-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:12:01.288: INFO: Trying to get logs from node ip-172-26-30-38 pod downwardapi-volume-e40e5210-91b9-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 11:12:01.315: INFO: Waiting for pod downwardapi-volume-e40e5210-91b9-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:12:01.318: INFO: Pod downwardapi-volume-e40e5210-91b9-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:12:01.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-353" for this suite. 
+Jun 18 11:12:07.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:12:07.453: INFO: namespace downward-api-353 deletion completed in 6.130403478s + +• [SLOW TEST:8.225 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:12:07.453: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating projection with secret that has name projected-secret-test-e8f67876-91b9-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume secrets +Jun 18 11:12:07.511: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e8f7c25e-91b9-11e9-8aef-6ab77b36fff7" in namespace "projected-5125" to be "success or failure" +Jun 18 11:12:07.517: INFO: Pod "pod-projected-secrets-e8f7c25e-91b9-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.630656ms +Jun 18 11:12:09.522: INFO: Pod "pod-projected-secrets-e8f7c25e-91b9-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011156661s +Jun 18 11:12:11.526: INFO: Pod "pod-projected-secrets-e8f7c25e-91b9-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015574046s +STEP: Saw pod success +Jun 18 11:12:11.526: INFO: Pod "pod-projected-secrets-e8f7c25e-91b9-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:12:11.529: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-projected-secrets-e8f7c25e-91b9-11e9-8aef-6ab77b36fff7 container projected-secret-volume-test: +STEP: delete the pod +Jun 18 11:12:11.549: INFO: Waiting for pod pod-projected-secrets-e8f7c25e-91b9-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:12:11.552: INFO: Pod pod-projected-secrets-e8f7c25e-91b9-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:12:11.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5125" for this suite. 
+Jun 18 11:12:17.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:12:17.683: INFO: namespace projected-5125 deletion completed in 6.127802574s + +• [SLOW TEST:10.230 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:12:17.684: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating the pod +Jun 18 11:12:20.265: INFO: Successfully updated pod "annotationupdateef0f06a6-91b9-11e9-8aef-6ab77b36fff7" +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:12:22.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-393" for this suite. 
+Jun 18 11:12:44.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:12:44.417: INFO: namespace downward-api-393 deletion completed in 22.132277454s + +• [SLOW TEST:26.734 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:12:44.418: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:86 +[It] should serve a basic endpoint from pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating service endpoint-test2 in namespace services-9927 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9927 to expose endpoints map[] +Jun 18 11:12:44.469: INFO: successfully validated that service endpoint-test2 in namespace services-9927 exposes endpoints map[] (4.456609ms elapsed) +STEP: Creating pod pod1 in namespace services-9927 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9927 to expose endpoints map[pod1:[80]] +Jun 18 11:12:47.508: INFO: successfully validated that service endpoint-test2 in namespace services-9927 exposes endpoints map[pod1:[80]] (3.028635466s elapsed) +STEP: Creating pod pod2 in namespace services-9927 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9927 to expose endpoints map[pod1:[80] pod2:[80]] +Jun 18 11:12:49.543: INFO: successfully validated that service endpoint-test2 in namespace services-9927 exposes endpoints map[pod1:[80] pod2:[80]] (2.030257058s elapsed) +STEP: Deleting pod pod1 in namespace services-9927 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9927 to expose endpoints map[pod2:[80]] +Jun 18 11:12:49.568: INFO: successfully validated that service endpoint-test2 in namespace services-9927 exposes endpoints map[pod2:[80]] (18.437482ms elapsed) +STEP: Deleting pod pod2 in namespace services-9927 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9927 to expose endpoints map[] +Jun 18 11:12:50.582: INFO: successfully validated that service endpoint-test2 in namespace services-9927 exposes endpoints map[] (1.00687619s elapsed) +[AfterEach] [sig-network] Services + 
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:12:50.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9927" for this suite. +Jun 18 11:13:12.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:13:12.756: INFO: namespace services-9927 deletion completed in 22.132769215s +[AfterEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 + +• [SLOW TEST:28.339 seconds] +[sig-network] Services +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should serve a basic endpoint from pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:13:12.757: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name s-test-opt-del-0fe41ba1-91ba-11e9-8aef-6ab77b36fff7 +STEP: Creating secret with name s-test-opt-upd-0fe41be1-91ba-11e9-8aef-6ab77b36fff7 +STEP: Creating the pod +STEP: Deleting secret s-test-opt-del-0fe41ba1-91ba-11e9-8aef-6ab77b36fff7 +STEP: Updating secret s-test-opt-upd-0fe41be1-91ba-11e9-8aef-6ab77b36fff7 +STEP: Creating secret with name s-test-opt-create-0fe41bfe-91ba-11e9-8aef-6ab77b36fff7 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:13:18.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-368" for this suite. 
+Jun 18 11:13:40.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:13:41.051: INFO: namespace projected-368 deletion completed in 22.124469606s + +• [SLOW TEST:28.295 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:13:41.051: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating projection with secret that has name projected-secret-test-map-20c05f9d-91ba-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume secrets +Jun 18 11:13:41.108: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-20c16de7-91ba-11e9-8aef-6ab77b36fff7" in namespace "projected-2049" to be "success or failure" +Jun 18 11:13:41.111: INFO: Pod "pod-projected-secrets-20c16de7-91ba-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.674522ms +Jun 18 11:13:43.115: INFO: Pod "pod-projected-secrets-20c16de7-91ba-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007543791s +Jun 18 11:13:45.122: INFO: Pod "pod-projected-secrets-20c16de7-91ba-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014751572s +STEP: Saw pod success +Jun 18 11:13:45.123: INFO: Pod "pod-projected-secrets-20c16de7-91ba-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:13:45.125: INFO: Trying to get logs from node ip-172-26-30-38 pod pod-projected-secrets-20c16de7-91ba-11e9-8aef-6ab77b36fff7 container projected-secret-volume-test: +STEP: delete the pod +Jun 18 11:13:45.145: INFO: Waiting for pod pod-projected-secrets-20c16de7-91ba-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:13:45.148: INFO: Pod pod-projected-secrets-20c16de7-91ba-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:13:45.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2049" for this suite. 
+Jun 18 11:13:51.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:13:51.281: INFO: namespace projected-2049 deletion completed in 6.128373912s + +• [SLOW TEST:10.229 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:13:51.281: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod pod-subpath-test-projected-46rq +STEP: Creating a pod to test atomic-volume-subpath +Jun 18 11:13:51.339: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-46rq" in namespace "subpath-1626" to be "success or failure" +Jun 18 11:13:51.343: INFO: Pod "pod-subpath-test-projected-46rq": Phase="Pending", Reason="", readiness=false. Elapsed: 3.460651ms +Jun 18 11:13:53.347: INFO: Pod "pod-subpath-test-projected-46rq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007700145s +Jun 18 11:13:55.351: INFO: Pod "pod-subpath-test-projected-46rq": Phase="Running", Reason="", readiness=true. Elapsed: 4.012077749s +Jun 18 11:13:57.355: INFO: Pod "pod-subpath-test-projected-46rq": Phase="Running", Reason="", readiness=true. Elapsed: 6.016049637s +Jun 18 11:13:59.359: INFO: Pod "pod-subpath-test-projected-46rq": Phase="Running", Reason="", readiness=true. Elapsed: 8.019849777s +Jun 18 11:14:01.363: INFO: Pod "pod-subpath-test-projected-46rq": Phase="Running", Reason="", readiness=true. Elapsed: 10.024058879s +Jun 18 11:14:03.368: INFO: Pod "pod-subpath-test-projected-46rq": Phase="Running", Reason="", readiness=true. Elapsed: 12.028603126s +Jun 18 11:14:05.372: INFO: Pod "pod-subpath-test-projected-46rq": Phase="Running", Reason="", readiness=true. Elapsed: 14.032818007s +Jun 18 11:14:07.376: INFO: Pod "pod-subpath-test-projected-46rq": Phase="Running", Reason="", readiness=true. Elapsed: 16.036977326s +Jun 18 11:14:09.380: INFO: Pod "pod-subpath-test-projected-46rq": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.041099786s +Jun 18 11:14:11.385: INFO: Pod "pod-subpath-test-projected-46rq": Phase="Running", Reason="", readiness=true. Elapsed: 20.04539844s +Jun 18 11:14:13.389: INFO: Pod "pod-subpath-test-projected-46rq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.049366721s +STEP: Saw pod success +Jun 18 11:14:13.389: INFO: Pod "pod-subpath-test-projected-46rq" satisfied condition "success or failure" +Jun 18 11:14:13.391: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-subpath-test-projected-46rq container test-container-subpath-projected-46rq: +STEP: delete the pod +Jun 18 11:14:13.411: INFO: Waiting for pod pod-subpath-test-projected-46rq to disappear +Jun 18 11:14:13.413: INFO: Pod pod-subpath-test-projected-46rq no longer exists +STEP: Deleting pod pod-subpath-test-projected-46rq +Jun 18 11:14:13.413: INFO: Deleting pod "pod-subpath-test-projected-46rq" in namespace "subpath-1626" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:14:13.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-1626" for this suite. +Jun 18 11:14:19.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:14:19.544: INFO: namespace subpath-1626 deletion completed in 6.124355872s + +• [SLOW TEST:28.263 seconds] +[sig-storage] Subpath +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:14:19.544: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod liveness-http in namespace container-probe-2905 +Jun 18 11:14:21.600: INFO: Started pod liveness-http in namespace container-probe-2905 +STEP: checking the pod's current state and verifying that restartCount is 
present +Jun 18 11:14:21.602: INFO: Initial restart count of pod liveness-http is 0 +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:18:22.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-2905" for this suite. +Jun 18 11:18:28.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:18:28.248: INFO: namespace container-probe-2905 deletion completed in 6.145308579s + +• [SLOW TEST:248.703 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:18:28.248: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name cm-test-opt-del-cbf2039a-91ba-11e9-8aef-6ab77b36fff7 +STEP: Creating configMap with name cm-test-opt-upd-cbf203df-91ba-11e9-8aef-6ab77b36fff7 +STEP: Creating the pod +STEP: Deleting configmap cm-test-opt-del-cbf2039a-91ba-11e9-8aef-6ab77b36fff7 +STEP: Updating configmap cm-test-opt-upd-cbf203df-91ba-11e9-8aef-6ab77b36fff7 +STEP: Creating configMap with name cm-test-opt-create-cbf20405-91ba-11e9-8aef-6ab77b36fff7 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:19:38.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-451" for this suite. 
+Jun 18 11:20:00.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:20:00.837: INFO: namespace configmap-451 deletion completed in 22.132606608s + +• [SLOW TEST:92.589 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:20:00.837: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[It] should print the output to logs [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:20:04.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-1594" for this suite. 
+Jun 18 11:20:42.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:20:43.026: INFO: namespace kubelet-test-1594 deletion completed in 38.121824224s + +• [SLOW TEST:42.188 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when scheduling a busybox command in a pod + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 + should print the output to logs [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:20:43.026: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 11:20:43.070: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c43532e-91bb-11e9-8aef-6ab77b36fff7" in namespace "projected-2502" to be "success or failure" +Jun 18 11:20:43.074: INFO: Pod "downwardapi-volume-1c43532e-91bb-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.275566ms +Jun 18 11:20:45.078: INFO: Pod "downwardapi-volume-1c43532e-91bb-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008662295s +STEP: Saw pod success +Jun 18 11:20:45.078: INFO: Pod "downwardapi-volume-1c43532e-91bb-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:20:45.082: INFO: Trying to get logs from node ip-172-26-30-38 pod downwardapi-volume-1c43532e-91bb-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 11:20:45.102: INFO: Waiting for pod downwardapi-volume-1c43532e-91bb-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:20:45.107: INFO: Pod downwardapi-volume-1c43532e-91bb-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:20:45.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2502" for this suite. 
+Jun 18 11:20:51.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:20:51.241: INFO: namespace projected-2502 deletion completed in 6.129563514s + +• [SLOW TEST:8.215 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[k8s.io] [sig-node] Events + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] [sig-node] Events + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:20:51.241: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: retrieving the pod +Jun 18 11:20:53.305: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-2128fec2-91bb-11e9-8aef-6ab77b36fff7,GenerateName:,Namespace:events-2155,SelfLink:/api/v1/namespaces/events-2155/pods/send-events-2128fec2-91bb-11e9-8aef-6ab77b36fff7,UID:212a0fcc-91bb-11e9-8d87-0a902858a792,ResourceVersion:37888,Generation:0,CreationTimestamp:2019-06-18 11:20:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 275750865,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.1.176/32,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c77fj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c77fj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-c77fj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-30-38,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b95470} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b954e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:20:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:20:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:20:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:20:51 +0000 UTC }],Message:,Reason:,HostIP:172.26.30.38,PodIP:10.42.1.176,StartTime:2019-06-18 11:20:51 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-06-18 11:20:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://6946d459c522fe4376916dce8730787a390bc64ce8acf142392f1c9a5903f77a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} + +STEP: checking for scheduler event about the pod +Jun 18 11:20:55.309: INFO: Saw scheduler event for our pod. +STEP: checking for kubelet event about the pod +Jun 18 11:20:57.313: INFO: Saw kubelet event for our pod. +STEP: deleting the pod +[AfterEach] [k8s.io] [sig-node] Events + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:20:57.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-2155" for this suite. 
+Jun 18 11:21:35.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:21:35.463: INFO: namespace events-2155 deletion completed in 38.13490262s + +• [SLOW TEST:44.222 seconds] +[k8s.io] [sig-node] Events +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:21:35.463: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-upd-3b8804e7-91bb-11e9-8aef-6ab77b36fff7 +STEP: Creating the pod +STEP: Updating configmap configmap-test-upd-3b8804e7-91bb-11e9-8aef-6ab77b36fff7 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:21:39.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9240" for this suite. 
+Jun 18 11:22:01.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:22:01.715: INFO: namespace configmap-9240 deletion completed in 22.137029988s + +• [SLOW TEST:26.252 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:22:01.715: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name secret-test-4b2a21eb-91bb-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume secrets +Jun 18 11:22:01.762: INFO: Waiting up to 5m0s for pod "pod-secrets-4b2b5a42-91bb-11e9-8aef-6ab77b36fff7" in namespace "secrets-3651" to be "success or failure" +Jun 18 11:22:01.764: INFO: Pod "pod-secrets-4b2b5a42-91bb-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812462ms +Jun 18 11:22:03.769: INFO: Pod "pod-secrets-4b2b5a42-91bb-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006954332s +STEP: Saw pod success +Jun 18 11:22:03.769: INFO: Pod "pod-secrets-4b2b5a42-91bb-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:22:03.772: INFO: Trying to get logs from node ip-172-26-16-178 pod pod-secrets-4b2b5a42-91bb-11e9-8aef-6ab77b36fff7 container secret-env-test: +STEP: delete the pod +Jun 18 11:22:03.792: INFO: Waiting for pod pod-secrets-4b2b5a42-91bb-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:22:03.798: INFO: Pod pod-secrets-4b2b5a42-91bb-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-api-machinery] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:22:03.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-3651" for this suite. 
+Jun 18 11:22:09.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:22:09.928: INFO: namespace secrets-3651 deletion completed in 6.126128275s + +• [SLOW TEST:8.213 seconds] +[sig-api-machinery] Secrets +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 + should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:22:09.928: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should rollback without unnecessary restarts [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 11:22:09.998: INFO: Create a RollingUpdate DaemonSet +Jun 18 11:22:10.005: INFO: Check that daemon pods launch on every node of the cluster +Jun 18 11:22:10.012: INFO: Number of nodes with available pods: 0 +Jun 18 11:22:10.012: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 11:22:11.020: INFO: Number of nodes with available pods: 0 +Jun 18 11:22:11.020: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 11:22:12.019: INFO: Number of nodes with available pods: 3 +Jun 18 11:22:12.019: INFO: Number of running nodes: 3, number of available pods: 3 +Jun 18 11:22:12.019: INFO: Update the DaemonSet to trigger a rollout +Jun 18 11:22:12.027: INFO: Updating DaemonSet daemon-set +Jun 18 11:22:16.037: INFO: Roll back the DaemonSet before rollout is complete +Jun 18 11:22:16.046: INFO: Updating DaemonSet daemon-set +Jun 18 11:22:16.046: INFO: Make sure DaemonSet rollback is complete +Jun 18 11:22:16.051: INFO: Wrong image for pod: daemon-set-4vkgk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. +Jun 18 11:22:16.051: INFO: Pod daemon-set-4vkgk is not available +Jun 18 11:22:17.058: INFO: Wrong image for pod: daemon-set-4vkgk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
+Jun 18 11:22:17.058: INFO: Pod daemon-set-4vkgk is not available +Jun 18 11:22:18.059: INFO: Pod daemon-set-v7wkj is not available +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3421, will wait for the garbage collector to delete the pods +Jun 18 11:22:18.129: INFO: Deleting DaemonSet.extensions daemon-set took: 8.129329ms +Jun 18 11:22:18.629: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.237135ms +Jun 18 11:22:27.733: INFO: Number of nodes with available pods: 0 +Jun 18 11:22:27.733: INFO: Number of running nodes: 0, number of available pods: 0 +Jun 18 11:22:27.736: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3421/daemonsets","resourceVersion":"38240"},"items":null} + +Jun 18 11:22:27.739: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3421/pods","resourceVersion":"38240"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:22:27.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-3421" for this suite. +Jun 18 11:22:33.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:22:33.877: INFO: namespace daemonsets-3421 deletion completed in 6.123094392s + +• [SLOW TEST:23.949 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should rollback without unnecessary restarts [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Guestbook application + should create and stop a working application [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:22:33.878: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should create and stop a working application [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating all guestbook components +Jun 18 11:22:33.914: INFO: apiVersion: v1 +kind: Service +metadata: + name: redis-slave + labels: + app: redis + role: slave + tier: 
backend +spec: + ports: + - port: 6379 + selector: + app: redis + role: slave + tier: backend + +Jun 18 11:22:33.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 create -f - --namespace=kubectl-3505' +Jun 18 11:22:34.304: INFO: stderr: "" +Jun 18 11:22:34.304: INFO: stdout: "service/redis-slave created\n" +Jun 18 11:22:34.304: INFO: apiVersion: v1 +kind: Service +metadata: + name: redis-master + labels: + app: redis + role: master + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: redis + role: master + tier: backend + +Jun 18 11:22:34.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 create -f - --namespace=kubectl-3505' +Jun 18 11:22:34.504: INFO: stderr: "" +Jun 18 11:22:34.504: INFO: stdout: "service/redis-master created\n" +Jun 18 11:22:34.505: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. + # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Jun 18 11:22:34.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 create -f - --namespace=kubectl-3505' +Jun 18 11:22:34.652: INFO: stderr: "" +Jun 18 11:22:34.652: INFO: stdout: "service/frontend created\n" +Jun 18 11:22:34.652: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: php-redis + image: gcr.io/google-samples/gb-frontend:v6 + resources: + requests: + cpu: 100m + memory: 100Mi + env: + - name: GET_HOSTS_FROM + value: dns + # If your cluster config does not include a dns service, then to + # instead access environment variables to find service host + # info, comment out the 'value: dns' line above, and uncomment the + # line below: + # value: env + ports: + - containerPort: 80 + +Jun 18 11:22:34.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 create -f - --namespace=kubectl-3505' +Jun 18 11:22:34.849: INFO: stderr: "" +Jun 18 11:22:34.849: INFO: stdout: "deployment.apps/frontend created\n" +Jun 18 11:22:34.849: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: redis-master +spec: + replicas: 1 + selector: + matchLabels: + app: redis + role: master + tier: backend + template: + metadata: + labels: + app: redis + role: master + tier: backend + spec: + containers: + - name: master + image: gcr.io/kubernetes-e2e-test-images/redis:1.0 + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Jun 18 11:22:34.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 create -f - --namespace=kubectl-3505' +Jun 18 11:22:35.050: INFO: stderr: "" +Jun 18 11:22:35.050: INFO: stdout: "deployment.apps/redis-master created\n" +Jun 18 11:22:35.050: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: redis-slave +spec: + replicas: 2 + selector: + matchLabels: + app: redis + role: slave + tier: backend + template: + metadata: + labels: + app: redis + role: slave + tier: backend + spec: + containers: + - name: slave + image: gcr.io/google-samples/gb-redisslave:v3 + resources: + requests: + cpu: 100m + memory: 100Mi + env: + - name: GET_HOSTS_FROM + value: dns + # If your 
cluster config does not include a dns service, then to + # instead access an environment variable to find the master + # service's host, comment out the 'value: dns' line above, and + # uncomment the line below: + # value: env + ports: + - containerPort: 6379 + +Jun 18 11:22:35.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 create -f - --namespace=kubectl-3505' +Jun 18 11:22:35.191: INFO: stderr: "" +Jun 18 11:22:35.191: INFO: stdout: "deployment.apps/redis-slave created\n" +STEP: validating guestbook app +Jun 18 11:22:35.191: INFO: Waiting for all frontend pods to be Running. +Jun 18 11:22:40.242: INFO: Waiting for frontend to serve content. +Jun 18 11:22:40.254: INFO: Trying to add a new entry to the guestbook. +Jun 18 11:22:40.270: INFO: Verifying that added entry can be retrieved. +Jun 18 11:22:40.281: INFO: Failed to get response from guestbook. err: , response: {"data": ""} +Jun 18 11:22:45.295: INFO: Failed to get response from guestbook. err: , response: {"data": ""} +Jun 18 11:22:50.308: INFO: Failed to get response from guestbook. err: , response: {"data": ""} +Jun 18 11:22:55.320: INFO: Failed to get response from guestbook. err: , response: {"data": ""} +Jun 18 11:23:00.335: INFO: Failed to get response from guestbook. err: , response: {"data": ""} +Jun 18 11:23:05.348: INFO: Failed to get response from guestbook. err: , response: {"data": ""} +Jun 18 11:23:10.362: INFO: Failed to get response from guestbook. err: , response: {"data": ""} +Jun 18 11:23:15.374: INFO: Failed to get response from guestbook. err: , response: {"data": ""} +Jun 18 11:23:20.390: INFO: Failed to get response from guestbook. err: , response: {"data": ""} +Jun 18 11:23:25.404: INFO: Failed to get response from guestbook. err: , response: {"data": ""} +Jun 18 11:23:30.417: INFO: Failed to get response from guestbook. err: , response: {"data": ""} +Jun 18 11:23:35.431: INFO: Failed to get response from guestbook. err: , response: {"data": ""} +STEP: using delete to clean up resources +Jun 18 11:23:40.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete --grace-period=0 --force -f - --namespace=kubectl-3505' +Jun 18 11:23:40.529: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 18 11:23:40.530: INFO: stdout: "service \"redis-slave\" force deleted\n" +STEP: using delete to clean up resources +Jun 18 11:23:40.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete --grace-period=0 --force -f - --namespace=kubectl-3505' +Jun 18 11:23:40.615: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 18 11:23:40.615: INFO: stdout: "service \"redis-master\" force deleted\n" +STEP: using delete to clean up resources +Jun 18 11:23:40.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete --grace-period=0 --force -f - --namespace=kubectl-3505' +Jun 18 11:23:40.702: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Jun 18 11:23:40.702: INFO: stdout: "service \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Jun 18 11:23:40.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete --grace-period=0 --force -f - --namespace=kubectl-3505' +Jun 18 11:23:40.775: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 18 11:23:40.775: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Jun 18 11:23:40.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete --grace-period=0 --force -f - --namespace=kubectl-3505' +Jun 18 11:23:40.845: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 18 11:23:40.845: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" +STEP: using delete to clean up resources +Jun 18 11:23:40.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete --grace-period=0 --force -f - --namespace=kubectl-3505' +Jun 18 11:23:40.915: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 18 11:23:40.915: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:23:40.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3505" for this suite. 
+Jun 18 11:24:18.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:24:19.043: INFO: namespace kubectl-3505 deletion completed in 38.123706923s + +• [SLOW TEST:105.165 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Guestbook application + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create and stop a working application [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:24:19.043: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 +[It] should be possible to delete [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:24:19.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-6497" for this suite. 
+Jun 18 11:24:25.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:24:25.232: INFO: namespace kubelet-test-6497 deletion completed in 6.123372679s + +• [SLOW TEST:6.190 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when scheduling a busybox command that always fails in a pod + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 + should be possible to delete [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl version + should check is all data is printed [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:24:25.232: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should check is all data is printed [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 11:24:25.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 version' +Jun 18 11:24:25.329: INFO: stderr: "" +Jun 18 11:24:25.329: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"14\", GitVersion:\"v1.14.3\", GitCommit:\"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0\", GitTreeState:\"clean\", BuildDate:\"2019-06-06T01:44:30Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"14\", GitVersion:\"v1.14.3\", GitCommit:\"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0\", GitTreeState:\"clean\", BuildDate:\"2019-06-06T01:36:19Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:24:25.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5280" for this suite. 
+Jun 18 11:24:31.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:24:31.457: INFO: namespace kubectl-5280 deletion completed in 6.12061577s + +• [SLOW TEST:6.225 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl version + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should check is all data is printed [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] HostPath + should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] HostPath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:24:31.457: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename hostpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] HostPath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 +[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test hostPath mode +Jun 18 11:24:31.501: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3648" to be "success or failure" +Jun 18 11:24:31.503: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.968142ms +Jun 18 11:24:33.507: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006882573s +STEP: Saw pod success +Jun 18 11:24:33.507: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" +Jun 18 11:24:33.510: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-host-path-test container test-container-1: +STEP: delete the pod +Jun 18 11:24:33.531: INFO: Waiting for pod pod-host-path-test to disappear +Jun 18 11:24:33.533: INFO: Pod pod-host-path-test no longer exists +[AfterEach] [sig-storage] HostPath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:24:33.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "hostpath-3648" for this suite. 
+Jun 18 11:24:39.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:24:39.661: INFO: namespace hostpath-3648 deletion completed in 6.123699213s + +• [SLOW TEST:8.203 seconds] +[sig-storage] HostPath +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 + should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Update Demo + should do a rolling update of a replication controller [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:24:39.661: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Update Demo + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:265 +[It] should do a rolling update of a replication controller [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the initial replication controller +Jun 18 11:24:39.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 create -f - --namespace=kubectl-6319' +Jun 18 11:24:39.834: INFO: stderr: "" +Jun 18 11:24:39.834: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Jun 18 11:24:39.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6319' +Jun 18 11:24:39.913: INFO: stderr: "" +Jun 18 11:24:39.913: INFO: stdout: "update-demo-nautilus-hv4cc update-demo-nautilus-kw5zl " +Jun 18 11:24:39.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-hv4cc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6319' +Jun 18 11:24:39.976: INFO: stderr: "" +Jun 18 11:24:39.976: INFO: stdout: "" +Jun 18 11:24:39.976: INFO: update-demo-nautilus-hv4cc is created but not running +Jun 18 11:24:44.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6319' +Jun 18 11:24:45.042: INFO: stderr: "" +Jun 18 11:24:45.042: INFO: stdout: "update-demo-nautilus-hv4cc update-demo-nautilus-kw5zl " +Jun 18 11:24:45.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-hv4cc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6319' +Jun 18 11:24:45.106: INFO: stderr: "" +Jun 18 11:24:45.106: INFO: stdout: "true" +Jun 18 11:24:45.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-hv4cc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6319' +Jun 18 11:24:45.177: INFO: stderr: "" +Jun 18 11:24:45.177: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 18 11:24:45.177: INFO: validating pod update-demo-nautilus-hv4cc +Jun 18 11:24:45.182: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 18 11:24:45.182: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 18 11:24:45.182: INFO: update-demo-nautilus-hv4cc is verified up and running +Jun 18 11:24:45.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-kw5zl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6319' +Jun 18 11:24:45.248: INFO: stderr: "" +Jun 18 11:24:45.248: INFO: stdout: "true" +Jun 18 11:24:45.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-nautilus-kw5zl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6319' +Jun 18 11:24:45.312: INFO: stderr: "" +Jun 18 11:24:45.312: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 18 11:24:45.312: INFO: validating pod update-demo-nautilus-kw5zl +Jun 18 11:24:45.317: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 18 11:24:45.317: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Jun 18 11:24:45.317: INFO: update-demo-nautilus-kw5zl is verified up and running +STEP: rolling-update to new replication controller +Jun 18 11:24:45.319: INFO: scanned /root for discovery docs: +Jun 18 11:24:45.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6319' +Jun 18 11:25:07.639: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" +Jun 18 11:25:07.639: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Jun 18 11:25:07.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6319' +Jun 18 11:25:07.707: INFO: stderr: "" +Jun 18 11:25:07.707: INFO: stdout: "update-demo-kitten-n5cq8 update-demo-kitten-snl9f " +Jun 18 11:25:07.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-kitten-n5cq8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6319' +Jun 18 11:25:07.769: INFO: stderr: "" +Jun 18 11:25:07.769: INFO: stdout: "true" +Jun 18 11:25:07.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-kitten-n5cq8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6319' +Jun 18 11:25:07.879: INFO: stderr: "" +Jun 18 11:25:07.879: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" +Jun 18 11:25:07.879: INFO: validating pod update-demo-kitten-n5cq8 +Jun 18 11:25:07.903: INFO: got data: { + "image": "kitten.jpg" +} + +Jun 18 11:25:07.903: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . +Jun 18 11:25:07.903: INFO: update-demo-kitten-n5cq8 is verified up and running +Jun 18 11:25:07.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-kitten-snl9f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6319' +Jun 18 11:25:07.981: INFO: stderr: "" +Jun 18 11:25:07.981: INFO: stdout: "true" +Jun 18 11:25:07.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pods update-demo-kitten-snl9f -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6319' +Jun 18 11:25:08.048: INFO: stderr: "" +Jun 18 11:25:08.048: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" +Jun 18 11:25:08.048: INFO: validating pod update-demo-kitten-snl9f +Jun 18 11:25:08.053: INFO: got data: { + "image": "kitten.jpg" +} + +Jun 18 11:25:08.053: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . +Jun 18 11:25:08.053: INFO: update-demo-kitten-snl9f is verified up and running +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:25:08.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6319" for this suite. +Jun 18 11:25:30.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:25:30.184: INFO: namespace kubectl-6319 deletion completed in 22.127251772s + +• [SLOW TEST:50.524 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Update Demo + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should do a rolling update of a replication controller [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSS +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:25:30.185: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward api env vars +Jun 18 11:25:30.230: INFO: Waiting up to 5m0s for pod "downward-api-c76ca2fe-91bb-11e9-8aef-6ab77b36fff7" in namespace "downward-api-7030" to be "success or failure" +Jun 18 11:25:30.236: INFO: Pod "downward-api-c76ca2fe-91bb-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093874ms +Jun 18 11:25:32.240: INFO: Pod "downward-api-c76ca2fe-91bb-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010172685s +STEP: Saw pod success +Jun 18 11:25:32.240: INFO: Pod "downward-api-c76ca2fe-91bb-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:25:32.243: INFO: Trying to get logs from node ip-172-26-30-38 pod downward-api-c76ca2fe-91bb-11e9-8aef-6ab77b36fff7 container dapi-container: +STEP: delete the pod +Jun 18 11:25:32.275: INFO: Waiting for pod downward-api-c76ca2fe-91bb-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:25:32.278: INFO: Pod downward-api-c76ca2fe-91bb-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:25:32.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7030" for this suite. +Jun 18 11:25:38.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:25:38.406: INFO: namespace downward-api-7030 deletion completed in 6.122611301s + +• [SLOW TEST:8.222 seconds] +[sig-node] Downward API +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:25:38.406: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-map-cc52fffa-91bb-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume configMaps +Jun 18 11:25:38.457: INFO: Waiting up to 5m0s for pod "pod-configmaps-cc541f59-91bb-11e9-8aef-6ab77b36fff7" in namespace "configmap-7318" to be "success or failure" +Jun 18 11:25:38.460: INFO: Pod "pod-configmaps-cc541f59-91bb-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.381799ms +Jun 18 11:25:40.465: INFO: Pod "pod-configmaps-cc541f59-91bb-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007994452s +STEP: Saw pod success +Jun 18 11:25:40.465: INFO: Pod "pod-configmaps-cc541f59-91bb-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:25:40.468: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-configmaps-cc541f59-91bb-11e9-8aef-6ab77b36fff7 container configmap-volume-test: +STEP: delete the pod +Jun 18 11:25:40.487: INFO: Waiting for pod pod-configmaps-cc541f59-91bb-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:25:40.490: INFO: Pod pod-configmaps-cc541f59-91bb-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:25:40.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-7318" for this suite. +Jun 18 11:25:46.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:25:46.616: INFO: namespace configmap-7318 deletion completed in 6.120587742s + +• [SLOW TEST:8.210 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:25:46.616: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name secret-test-d1379a38-91bb-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume secrets +Jun 18 11:25:46.665: INFO: Waiting up to 5m0s for pod "pod-secrets-d138bbdd-91bb-11e9-8aef-6ab77b36fff7" in namespace "secrets-2852" to be "success or failure" +Jun 18 11:25:46.670: INFO: Pod "pod-secrets-d138bbdd-91bb-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.051165ms +Jun 18 11:25:48.674: INFO: Pod "pod-secrets-d138bbdd-91bb-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009261937s +STEP: Saw pod success +Jun 18 11:25:48.674: INFO: Pod "pod-secrets-d138bbdd-91bb-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:25:48.677: INFO: Trying to get logs from node ip-172-26-16-178 pod pod-secrets-d138bbdd-91bb-11e9-8aef-6ab77b36fff7 container secret-volume-test: +STEP: delete the pod +Jun 18 11:25:48.698: INFO: Waiting for pod pod-secrets-d138bbdd-91bb-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:25:48.704: INFO: Pod pod-secrets-d138bbdd-91bb-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:25:48.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2852" for this suite. +Jun 18 11:25:54.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:25:54.844: INFO: namespace secrets-2852 deletion completed in 6.13673171s + +• [SLOW TEST:8.228 seconds] +[sig-storage] Secrets +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] version v1 + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:25:54.844: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 11:25:54.890: INFO: (0) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.952649ms) +Jun 18 11:25:54.894: INFO: (1) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.651162ms) +Jun 18 11:25:54.897: INFO: (2) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.322233ms) +Jun 18 11:25:54.900: INFO: (3) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.296967ms) +Jun 18 11:25:54.904: INFO: (4) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.368814ms) +Jun 18 11:25:54.907: INFO: (5) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.408687ms) +Jun 18 11:25:54.911: INFO: (6) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.518936ms) +Jun 18 11:25:54.917: INFO: (7) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 5.810857ms) +Jun 18 11:25:54.920: INFO: (8) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.817979ms) +Jun 18 11:25:54.924: INFO: (9) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.623525ms) +Jun 18 11:25:54.928: INFO: (10) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.471719ms) +Jun 18 11:25:54.931: INFO: (11) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.331552ms) +Jun 18 11:25:54.935: INFO: (12) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.64965ms) +Jun 18 11:25:54.938: INFO: (13) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.464679ms) +Jun 18 11:25:54.941: INFO: (14) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.41917ms) +Jun 18 11:25:54.945: INFO: (15) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.740885ms) +Jun 18 11:25:54.949: INFO: (16) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 4.05542ms) +Jun 18 11:25:54.953: INFO: (17) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.455622ms) +Jun 18 11:25:54.956: INFO: (18) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.394466ms) +Jun 18 11:25:54.960: INFO: (19) /api/v1/nodes/ip-172-26-16-178:10250/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.481266ms) +[AfterEach] version v1 + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:25:54.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-1324" for this suite. +Jun 18 11:26:00.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:26:01.092: INFO: namespace proxy-1324 deletion completed in 6.129212345s + +• [SLOW TEST:6.248 seconds] +[sig-network] Proxy +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + version v1 + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 + should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:26:01.093: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 11:26:01.143: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9d949d4-91bb-11e9-8aef-6ab77b36fff7" in namespace "projected-7002" to be "success or failure" +Jun 18 11:26:01.147: INFO: Pod "downwardapi-volume-d9d949d4-91bb-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.887486ms +Jun 18 11:26:03.151: INFO: Pod "downwardapi-volume-d9d949d4-91bb-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008146599s +Jun 18 11:26:05.155: INFO: Pod "downwardapi-volume-d9d949d4-91bb-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012287095s +STEP: Saw pod success +Jun 18 11:26:05.155: INFO: Pod "downwardapi-volume-d9d949d4-91bb-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:26:05.158: INFO: Trying to get logs from node ip-172-26-30-38 pod downwardapi-volume-d9d949d4-91bb-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 11:26:05.179: INFO: Waiting for pod downwardapi-volume-d9d949d4-91bb-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:26:05.183: INFO: Pod downwardapi-volume-d9d949d4-91bb-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:26:05.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7002" for this suite. +Jun 18 11:26:11.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:26:11.323: INFO: namespace projected-7002 deletion completed in 6.136182142s + +• [SLOW TEST:10.230 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:26:11.323: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow opting out of API token automount [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: getting the auto-created API token +Jun 18 11:26:11.890: INFO: created pod pod-service-account-defaultsa +Jun 18 11:26:11.890: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Jun 18 11:26:11.896: INFO: created pod pod-service-account-mountsa +Jun 18 11:26:11.896: INFO: pod pod-service-account-mountsa service account token volume mount: true +Jun 18 11:26:11.903: INFO: created pod pod-service-account-nomountsa +Jun 18 11:26:11.903: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Jun 18 11:26:11.910: INFO: created pod pod-service-account-defaultsa-mountspec +Jun 18 11:26:11.910: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Jun 18 11:26:11.924: INFO: created pod pod-service-account-mountsa-mountspec +Jun 18 11:26:11.924: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Jun 18 11:26:11.930: INFO: created pod 
pod-service-account-nomountsa-mountspec +Jun 18 11:26:11.930: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Jun 18 11:26:11.941: INFO: created pod pod-service-account-defaultsa-nomountspec +Jun 18 11:26:11.941: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Jun 18 11:26:11.949: INFO: created pod pod-service-account-mountsa-nomountspec +Jun 18 11:26:11.949: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Jun 18 11:26:11.964: INFO: created pod pod-service-account-nomountsa-nomountspec +Jun 18 11:26:11.965: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:26:11.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-3507" for this suite. +Jun 18 11:26:18.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:26:18.110: INFO: namespace svcaccounts-3507 deletion completed in 6.128169352s + +• [SLOW TEST:6.787 seconds] +[sig-auth] ServiceAccounts +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 + should allow opting out of API token automount [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:26:18.110: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating a watch on configmaps with label A +STEP: creating a watch on configmaps with label B +STEP: creating a watch on configmaps with label A or B +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification +Jun 18 11:26:18.157: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1627,SelfLink:/api/v1/namespaces/watch-1627/configmaps/e2e-watch-test-configmap-a,UID:e3fdf37c-91bb-11e9-8d87-0a902858a792,ResourceVersion:39353,Generation:0,CreationTimestamp:2019-06-18 11:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 18 11:26:18.157: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1627,SelfLink:/api/v1/namespaces/watch-1627/configmaps/e2e-watch-test-configmap-a,UID:e3fdf37c-91bb-11e9-8d87-0a902858a792,ResourceVersion:39353,Generation:0,CreationTimestamp:2019-06-18 11:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +STEP: modifying configmap A and ensuring the correct watchers observe the notification +Jun 18 11:26:28.165: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1627,SelfLink:/api/v1/namespaces/watch-1627/configmaps/e2e-watch-test-configmap-a,UID:e3fdf37c-91bb-11e9-8d87-0a902858a792,ResourceVersion:39371,Generation:0,CreationTimestamp:2019-06-18 11:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +Jun 18 11:26:28.165: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1627,SelfLink:/api/v1/namespaces/watch-1627/configmaps/e2e-watch-test-configmap-a,UID:e3fdf37c-91bb-11e9-8d87-0a902858a792,ResourceVersion:39371,Generation:0,CreationTimestamp:2019-06-18 11:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification +Jun 18 11:26:38.174: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1627,SelfLink:/api/v1/namespaces/watch-1627/configmaps/e2e-watch-test-configmap-a,UID:e3fdf37c-91bb-11e9-8d87-0a902858a792,ResourceVersion:39389,Generation:0,CreationTimestamp:2019-06-18 11:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Jun 18 11:26:38.174: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1627,SelfLink:/api/v1/namespaces/watch-1627/configmaps/e2e-watch-test-configmap-a,UID:e3fdf37c-91bb-11e9-8d87-0a902858a792,ResourceVersion:39389,Generation:0,CreationTimestamp:2019-06-18 11:26:18 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +STEP: deleting configmap A and ensuring the correct watchers observe the notification +Jun 18 11:26:48.182: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1627,SelfLink:/api/v1/namespaces/watch-1627/configmaps/e2e-watch-test-configmap-a,UID:e3fdf37c-91bb-11e9-8d87-0a902858a792,ResourceVersion:39410,Generation:0,CreationTimestamp:2019-06-18 11:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Jun 18 11:26:48.182: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1627,SelfLink:/api/v1/namespaces/watch-1627/configmaps/e2e-watch-test-configmap-a,UID:e3fdf37c-91bb-11e9-8d87-0a902858a792,ResourceVersion:39410,Generation:0,CreationTimestamp:2019-06-18 11:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification +Jun 18 11:26:58.191: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1627,SelfLink:/api/v1/namespaces/watch-1627/configmaps/e2e-watch-test-configmap-b,UID:fbda8fed-91bb-11e9-8d87-0a902858a792,ResourceVersion:39428,Generation:0,CreationTimestamp:2019-06-18 11:26:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 18 11:26:58.191: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1627,SelfLink:/api/v1/namespaces/watch-1627/configmaps/e2e-watch-test-configmap-b,UID:fbda8fed-91bb-11e9-8d87-0a902858a792,ResourceVersion:39428,Generation:0,CreationTimestamp:2019-06-18 11:26:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +STEP: deleting configmap B and ensuring the correct watchers observe the notification +Jun 18 11:27:08.200: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1627,SelfLink:/api/v1/namespaces/watch-1627/configmaps/e2e-watch-test-configmap-b,UID:fbda8fed-91bb-11e9-8d87-0a902858a792,ResourceVersion:39446,Generation:0,CreationTimestamp:2019-06-18 11:26:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 18 11:27:08.200: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1627,SelfLink:/api/v1/namespaces/watch-1627/configmaps/e2e-watch-test-configmap-b,UID:fbda8fed-91bb-11e9-8d87-0a902858a792,ResourceVersion:39446,Generation:0,CreationTimestamp:2019-06-18 11:26:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:27:18.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-1627" for this suite. +Jun 18 11:27:24.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:27:24.334: INFO: namespace watch-1627 deletion completed in 6.129207646s + +• [SLOW TEST:66.223 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:27:24.334: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name projected-configmap-test-volume-0b76837c-91bc-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume configMaps +Jun 18 11:27:24.385: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-0b77879f-91bc-11e9-8aef-6ab77b36fff7" in namespace "projected-2984" to be "success or failure" +Jun 18 11:27:24.387: INFO: Pod "pod-projected-configmaps-0b77879f-91bc-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.816001ms +Jun 18 11:27:26.391: INFO: Pod "pod-projected-configmaps-0b77879f-91bc-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006811132s +STEP: Saw pod success +Jun 18 11:27:26.391: INFO: Pod "pod-projected-configmaps-0b77879f-91bc-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:27:26.394: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-projected-configmaps-0b77879f-91bc-11e9-8aef-6ab77b36fff7 container projected-configmap-volume-test: +STEP: delete the pod +Jun 18 11:27:26.414: INFO: Waiting for pod pod-projected-configmaps-0b77879f-91bc-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:27:26.416: INFO: Pod pod-projected-configmaps-0b77879f-91bc-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:27:26.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2984" for this suite. +Jun 18 11:27:32.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:27:32.544: INFO: namespace projected-2984 deletion completed in 6.123790748s + +• [SLOW TEST:8.210 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-network] Proxy version v1 + should proxy logs on node using proxy subresource [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] version v1 + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:27:32.544: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy logs on node using proxy subresource [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 11:27:32.587: INFO: (0) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.400293ms) +Jun 18 11:27:32.591: INFO: (1) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.397424ms) +Jun 18 11:27:32.594: INFO: (2) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.366893ms) +Jun 18 11:27:32.597: INFO: (3) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.338797ms) +Jun 18 11:27:32.601: INFO: (4) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.405338ms) +Jun 18 11:27:32.604: INFO: (5) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.405623ms) +Jun 18 11:27:32.608: INFO: (6) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.457926ms) +Jun 18 11:27:32.611: INFO: (7) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.289355ms) +Jun 18 11:27:32.614: INFO: (8) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.411301ms) +Jun 18 11:27:32.618: INFO: (9) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.644609ms) +Jun 18 11:27:32.622: INFO: (10) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.698609ms) +Jun 18 11:27:32.625: INFO: (11) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.20798ms) +Jun 18 11:27:32.629: INFO: (12) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.473702ms) +Jun 18 11:27:32.632: INFO: (13) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.418726ms) +Jun 18 11:27:32.635: INFO: (14) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.243454ms) +Jun 18 11:27:32.639: INFO: (15) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.53997ms) +Jun 18 11:27:32.642: INFO: (16) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.387461ms) +Jun 18 11:27:32.646: INFO: (17) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.812593ms) +Jun 18 11:27:32.650: INFO: (18) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.937997ms) +Jun 18 11:27:32.653: INFO: (19) /api/v1/nodes/ip-172-26-16-178/proxy/logs/:
+containers/
+pods/
+
+ (200; 3.296687ms) +[AfterEach] version v1 + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:27:32.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-7789" for this suite. +Jun 18 11:27:38.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:27:38.779: INFO: namespace proxy-7789 deletion completed in 6.122024006s + +• [SLOW TEST:6.235 seconds] +[sig-network] Proxy +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + version v1 + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 + should proxy logs on node using proxy subresource [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info + should check if Kubernetes master services is included in cluster-info [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:27:38.779: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should check if Kubernetes master services is included in cluster-info [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: validating cluster-info +Jun 18 11:27:38.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 cluster-info' +Jun 18 11:27:38.895: INFO: stderr: "" +Jun 18 11:27:38.895: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://10.43.0.1:443\x1b[0m\n\x1b[0;32mCoreDNS\x1b[0m is running at \x1b[0;33mhttps://10.43.0.1:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:27:38.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8807" for this suite. 
+Jun 18 11:27:44.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:27:45.020: INFO: namespace kubectl-8807 deletion completed in 6.121801654s + +• [SLOW TEST:6.241 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl cluster-info + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should check if Kubernetes master services is included in cluster-info [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:27:45.020: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Given a Pod with a 'name' label pod-adoption-release is created +STEP: When a replicaset with a matching selector is created +STEP: Then the orphan pod is adopted +STEP: When the matched label of one of its pods change +Jun 18 11:27:50.094: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicaSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:27:51.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-4237" for this suite. 
+Jun 18 11:28:13.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:28:13.238: INFO: namespace replicaset-4237 deletion completed in 22.124972694s + +• [SLOW TEST:28.218 seconds] +[sig-apps] ReplicaSet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:28:13.238: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name projected-configmap-test-volume-map-289c8cf9-91bc-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume configMaps +Jun 18 11:28:13.288: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-289d8838-91bc-11e9-8aef-6ab77b36fff7" in namespace "projected-5893" to be "success or failure" +Jun 18 11:28:13.292: INFO: Pod "pod-projected-configmaps-289d8838-91bc-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.728435ms +Jun 18 11:28:15.297: INFO: Pod "pod-projected-configmaps-289d8838-91bc-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009044528s +STEP: Saw pod success +Jun 18 11:28:15.297: INFO: Pod "pod-projected-configmaps-289d8838-91bc-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:28:15.300: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-projected-configmaps-289d8838-91bc-11e9-8aef-6ab77b36fff7 container projected-configmap-volume-test: +STEP: delete the pod +Jun 18 11:28:15.320: INFO: Waiting for pod pod-projected-configmaps-289d8838-91bc-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:28:15.323: INFO: Pod pod-projected-configmaps-289d8838-91bc-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:28:15.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5893" for this suite. 
+Jun 18 11:28:21.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:28:21.465: INFO: namespace projected-5893 deletion completed in 6.138262709s + +• [SLOW TEST:8.227 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:28:21.465: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-2d83df0a-91bc-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume configMaps +Jun 18 11:28:21.515: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d84e577-91bc-11e9-8aef-6ab77b36fff7" in namespace "configmap-2928" to be "success or failure" +Jun 18 11:28:21.518: INFO: Pod "pod-configmaps-2d84e577-91bc-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.150997ms +Jun 18 11:28:23.522: INFO: Pod "pod-configmaps-2d84e577-91bc-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007157467s +Jun 18 11:28:25.526: INFO: Pod "pod-configmaps-2d84e577-91bc-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011164106s +STEP: Saw pod success +Jun 18 11:28:25.526: INFO: Pod "pod-configmaps-2d84e577-91bc-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:28:25.529: INFO: Trying to get logs from node ip-172-26-16-178 pod pod-configmaps-2d84e577-91bc-11e9-8aef-6ab77b36fff7 container configmap-volume-test: +STEP: delete the pod +Jun 18 11:28:25.548: INFO: Waiting for pod pod-configmaps-2d84e577-91bc-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:28:25.551: INFO: Pod pod-configmaps-2d84e577-91bc-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:28:25.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2928" for this suite. 
+Jun 18 11:28:31.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:28:31.677: INFO: namespace configmap-2928 deletion completed in 6.122169674s + +• [SLOW TEST:10.212 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:28:31.677: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs +STEP: Gathering metrics +W0618 11:29:02.249259 14 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. +Jun 18 11:29:02.249: INFO: For apiserver_request_total: +For apiserver_request_latencies_summary: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:29:02.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-8574" for this suite. 
+Jun 18 11:29:08.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:29:08.376: INFO: namespace gc-8574 deletion completed in 6.124281494s + +• [SLOW TEST:36.699 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:29:08.377: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 +STEP: create the container to handle the HTTPGet hook request. 
+[It] should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the pod with lifecycle hook +STEP: delete the pod with lifecycle hook +Jun 18 11:29:12.469: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jun 18 11:29:12.473: INFO: Pod pod-with-prestop-http-hook still exists +Jun 18 11:29:14.473: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jun 18 11:29:14.477: INFO: Pod pod-with-prestop-http-hook still exists +Jun 18 11:29:16.473: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jun 18 11:29:16.477: INFO: Pod pod-with-prestop-http-hook still exists +Jun 18 11:29:18.473: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jun 18 11:29:18.477: INFO: Pod pod-with-prestop-http-hook still exists +Jun 18 11:29:20.473: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jun 18 11:29:20.477: INFO: Pod pod-with-prestop-http-hook still exists +Jun 18 11:29:22.473: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jun 18 11:29:22.477: INFO: Pod pod-with-prestop-http-hook still exists +Jun 18 11:29:24.473: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jun 18 11:29:24.477: INFO: Pod pod-with-prestop-http-hook still exists +Jun 18 11:29:26.473: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jun 18 11:29:26.477: INFO: Pod pod-with-prestop-http-hook still exists +Jun 18 11:29:28.473: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jun 18 11:29:28.479: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:29:28.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-6723" for this suite. 
+Jun 18 11:29:50.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:29:50.615: INFO: namespace container-lifecycle-hook-6723 deletion completed in 22.125106095s + +• [SLOW TEST:42.239 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when create a pod with lifecycle hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:29:50.616: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 11:29:50.652: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: creating the pod +STEP: submitting the pod to kubernetes +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:29:52.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-1297" for this suite. 
+Jun 18 11:30:30.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:30:30.944: INFO: namespace pods-1297 deletion completed in 38.125334967s + +• [SLOW TEST:40.328 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:30:30.944: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating 50 configmaps +STEP: Creating RC which spawns configmap-volume pods +Jun 18 11:30:31.251: INFO: Pod name wrapped-volume-race-7ad82d92-91bc-11e9-8aef-6ab77b36fff7: Found 0 pods out of 5 +Jun 18 11:30:36.259: INFO: Pod name wrapped-volume-race-7ad82d92-91bc-11e9-8aef-6ab77b36fff7: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-7ad82d92-91bc-11e9-8aef-6ab77b36fff7 in namespace emptydir-wrapper-1158, will wait for the garbage collector to delete the pods +Jun 18 11:30:46.360: INFO: Deleting ReplicationController wrapped-volume-race-7ad82d92-91bc-11e9-8aef-6ab77b36fff7 took: 10.968583ms +Jun 18 11:30:46.860: INFO: Terminating ReplicationController wrapped-volume-race-7ad82d92-91bc-11e9-8aef-6ab77b36fff7 pods took: 500.271283ms +STEP: Creating RC which spawns configmap-volume pods +Jun 18 11:31:27.376: INFO: Pod name wrapped-volume-race-9c4bd6a1-91bc-11e9-8aef-6ab77b36fff7: Found 0 pods out of 5 +Jun 18 11:31:32.382: INFO: Pod name wrapped-volume-race-9c4bd6a1-91bc-11e9-8aef-6ab77b36fff7: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-9c4bd6a1-91bc-11e9-8aef-6ab77b36fff7 in namespace emptydir-wrapper-1158, will wait for the garbage collector to delete the pods +Jun 18 11:31:42.476: INFO: Deleting ReplicationController wrapped-volume-race-9c4bd6a1-91bc-11e9-8aef-6ab77b36fff7 took: 13.92745ms +Jun 18 11:31:42.976: INFO: Terminating ReplicationController wrapped-volume-race-9c4bd6a1-91bc-11e9-8aef-6ab77b36fff7 pods took: 500.270069ms +STEP: Creating RC which spawns configmap-volume pods +Jun 18 11:32:19.493: INFO: Pod name wrapped-volume-race-bb5c1ebb-91bc-11e9-8aef-6ab77b36fff7: Found 0 pods out of 5 +Jun 18 11:32:24.499: INFO: Pod name 
wrapped-volume-race-bb5c1ebb-91bc-11e9-8aef-6ab77b36fff7: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-bb5c1ebb-91bc-11e9-8aef-6ab77b36fff7 in namespace emptydir-wrapper-1158, will wait for the garbage collector to delete the pods +Jun 18 11:32:36.591: INFO: Deleting ReplicationController wrapped-volume-race-bb5c1ebb-91bc-11e9-8aef-6ab77b36fff7 took: 9.078379ms +Jun 18 11:32:37.091: INFO: Terminating ReplicationController wrapped-volume-race-bb5c1ebb-91bc-11e9-8aef-6ab77b36fff7 pods took: 500.276147ms +STEP: Cleaning up the configMaps +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:33:17.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-1158" for this suite. +Jun 18 11:33:25.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:33:25.786: INFO: namespace emptydir-wrapper-1158 deletion completed in 8.122211703s + +• [SLOW TEST:174.842 seconds] +[sig-storage] EmptyDir wrapper volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[k8s.io] Probing container + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:33:25.786: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod liveness-http in namespace container-probe-9201 +Jun 18 11:33:27.840: INFO: Started pod liveness-http in namespace container-probe-9201 +STEP: checking the pod's current state and verifying that restartCount is present +Jun 18 11:33:27.843: INFO: Initial restart count of pod liveness-http is 0 +Jun 18 11:33:45.882: INFO: Restart count of pod container-probe-9201/liveness-http is now 1 (18.039141251s elapsed) +Jun 18 11:34:05.921: INFO: Restart count of pod container-probe-9201/liveness-http is now 2 (38.078548836s elapsed) +Jun 18 11:34:25.961: INFO: Restart count of pod container-probe-9201/liveness-http is now 3 (58.118221264s elapsed) +Jun 18 11:34:46.005: 
INFO: Restart count of pod container-probe-9201/liveness-http is now 4 (1m18.161620031s elapsed) +Jun 18 11:35:56.152: INFO: Restart count of pod container-probe-9201/liveness-http is now 5 (2m28.309340245s elapsed) +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:35:56.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-9201" for this suite. +Jun 18 11:36:02.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:36:02.306: INFO: namespace container-probe-9201 deletion completed in 6.133500914s + +• [SLOW TEST:156.520 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Networking + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:36:02.307: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Performing setup for networking test in namespace pod-network-test-194 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Jun 18 11:36:02.341: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +STEP: Creating test pods +Jun 18 11:36:24.439: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.42.1.192 8081 | grep -v '^\s*$'] Namespace:pod-network-test-194 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:36:24.439: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:36:25.580: INFO: Found all expected endpoints: [netserver-0] +Jun 18 11:36:25.584: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.42.0.202 8081 | grep -v '^\s*$'] Namespace:pod-network-test-194 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:36:25.584: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:36:26.756: INFO: Found all expected endpoints: [netserver-1] +Jun 18 11:36:26.764: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.42.2.187 8081 | 
grep -v '^\s*$'] Namespace:pod-network-test-194 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:36:26.765: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:36:27.927: INFO: Found all expected endpoints: [netserver-2] +[AfterEach] [sig-network] Networking + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:36:27.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-194" for this suite. +Jun 18 11:36:49.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:36:50.057: INFO: namespace pod-network-test-194 deletion completed in 22.125580813s + +• [SLOW TEST:47.750 seconds] +[sig-network] Networking +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 + Granular Checks: Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl replace + should update a single-container pod's image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:36:50.057: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl replace + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1619 +[It] should update a single-container pod's image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 18 11:36:50.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-8352' +Jun 18 11:36:50.365: INFO: stderr: "" +Jun 18 11:36:50.365: INFO: stdout: "pod/e2e-test-nginx-pod created\n" +STEP: verifying the pod e2e-test-nginx-pod is running +STEP: verifying the pod e2e-test-nginx-pod was created +Jun 18 11:36:55.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 get pod 
e2e-test-nginx-pod --namespace=kubectl-8352 -o json' +Jun 18 11:36:55.481: INFO: stderr: "" +Jun 18 11:36:55.481: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/podIP\": \"10.42.0.203/32\"\n },\n \"creationTimestamp\": \"2019-06-18T11:36:50Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-8352\",\n \"resourceVersion\": \"41903\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8352/pods/e2e-test-nginx-pod\",\n \"uid\": \"5ccfde00-91bd-11e9-8d87-0a902858a792\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-ndn8q\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"ip-172-26-16-178\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-ndn8q\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-ndn8q\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-06-18T11:36:50Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-06-18T11:36:52Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-06-18T11:36:52Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-06-18T11:36:50Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://58e3f020fdea541480417c572cbe4991a9df1c0d3d7976ae9d7a22da5a08ac3d\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-06-18T11:36:51Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.26.16.178\",\n \"phase\": \"Running\",\n \"podIP\": \"10.42.0.203\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2019-06-18T11:36:50Z\"\n }\n}\n" +STEP: replace the image in the pod +Jun 18 11:36:55.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 replace -f - --namespace=kubectl-8352' +Jun 18 11:36:55.623: INFO: stderr: "" +Jun 18 11:36:55.623: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" +STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 +[AfterEach] [k8s.io] Kubectl replace + 
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1624 +Jun 18 11:36:55.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete pods e2e-test-nginx-pod --namespace=kubectl-8352' +Jun 18 11:37:07.215: INFO: stderr: "" +Jun 18 11:37:07.215: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:37:07.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8352" for this suite. +Jun 18 11:37:13.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:37:13.349: INFO: namespace kubectl-8352 deletion completed in 6.129119999s + +• [SLOW TEST:23.292 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl replace + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should update a single-container pod's image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-node] ConfigMap + should fail to create ConfigMap with empty key [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:37:13.350: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create ConfigMap with empty key [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap that has name configmap-test-emptyKey-6a8ae200-91bd-11e9-8aef-6ab77b36fff7 +[AfterEach] [sig-node] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:37:13.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1971" for this suite. 
+Jun 18 11:37:19.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:37:19.511: INFO: namespace configmap-1971 deletion completed in 6.122207932s + +• [SLOW TEST:6.162 seconds] +[sig-node] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32 + should fail to create ConfigMap with empty key [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl expose + should create services for rc [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:37:19.511: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should create services for rc [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating Redis RC +Jun 18 11:37:19.585: INFO: namespace kubectl-7088 +Jun 18 11:37:19.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 create -f - --namespace=kubectl-7088' +Jun 18 11:37:19.772: INFO: stderr: "" +Jun 18 11:37:19.772: INFO: stdout: "replicationcontroller/redis-master created\n" +STEP: Waiting for Redis master to start. +Jun 18 11:37:20.776: INFO: Selector matched 1 pods for map[app:redis] +Jun 18 11:37:20.776: INFO: Found 0 / 1 +Jun 18 11:37:21.776: INFO: Selector matched 1 pods for map[app:redis] +Jun 18 11:37:21.776: INFO: Found 1 / 1 +Jun 18 11:37:21.776: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Jun 18 11:37:21.780: INFO: Selector matched 1 pods for map[app:redis] +Jun 18 11:37:21.780: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Jun 18 11:37:21.780: INFO: wait on redis-master startup in kubectl-7088 +Jun 18 11:37:21.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 logs redis-master-98cwg redis-master --namespace=kubectl-7088' +Jun 18 11:37:21.857: INFO: stderr: "" +Jun 18 11:37:21.857: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 18 Jun 11:37:20.873 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Jun 11:37:20.873 # Server started, Redis version 3.2.12\n1:M 18 Jun 11:37:20.873 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Jun 11:37:20.873 * The server is now ready to accept connections on port 6379\n" +STEP: exposing RC +Jun 18 11:37:21.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7088' +Jun 18 11:37:21.945: INFO: stderr: "" +Jun 18 11:37:21.945: INFO: stdout: "service/rm2 exposed\n" +Jun 18 11:37:21.949: INFO: Service rm2 in namespace kubectl-7088 found. +STEP: exposing service +Jun 18 11:37:23.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7088' +Jun 18 11:37:24.056: INFO: stderr: "" +Jun 18 11:37:24.056: INFO: stdout: "service/rm3 exposed\n" +Jun 18 11:37:24.062: INFO: Service rm3 in namespace kubectl-7088 found. +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:37:26.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7088" for this suite. 
+Jun 18 11:37:48.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:37:48.202: INFO: namespace kubectl-7088 deletion completed in 22.127231724s + +• [SLOW TEST:28.691 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl expose + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create services for rc [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:37:48.202: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] deployment should support proportional scaling [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 11:37:48.240: INFO: Creating deployment "nginx-deployment" +Jun 18 11:37:48.248: INFO: Waiting for observed generation 1 +Jun 18 11:37:50.259: INFO: Waiting for all required pods to come up +Jun 18 11:37:50.263: INFO: Pod name nginx: Found 10 pods out of 10 +STEP: ensuring each pod is running +Jun 18 11:37:50.263: INFO: Waiting for deployment "nginx-deployment" to complete +Jun 18 11:37:50.271: INFO: Updating deployment "nginx-deployment" with a non-existent image +Jun 18 11:37:50.278: INFO: Updating deployment nginx-deployment +Jun 18 11:37:50.278: INFO: Waiting for observed generation 2 +Jun 18 11:37:52.285: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Jun 18 11:37:52.288: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Jun 18 11:37:52.290: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas +Jun 18 11:37:52.300: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Jun 18 11:37:52.300: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Jun 18 11:37:52.303: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas +Jun 18 11:37:52.309: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas +Jun 18 11:37:52.309: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 +Jun 18 11:37:52.317: INFO: Updating deployment nginx-deployment +Jun 18 11:37:52.317: INFO: Waiting for 
the replicasets of deployment "nginx-deployment" to have desired number of replicas +Jun 18 11:37:52.324: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Jun 18 11:37:52.326: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Jun 18 11:37:54.336: INFO: Deployment "nginx-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-4147,SelfLink:/apis/apps/v1/namespaces/deployment-4147/deployments/nginx-deployment,UID:7f518857-91bd-11e9-8d87-0a902858a792,ResourceVersion:42411,Generation:3,CreationTimestamp:2019-06-18 11:37:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2019-06-18 11:37:52 +0000 UTC 2019-06-18 11:37:52 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-06-18 11:37:52 +0000 UTC 2019-06-18 11:37:48 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5f9595f595" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} + +Jun 18 11:37:54.340: INFO: New ReplicaSet "nginx-deployment-5f9595f595" of Deployment "nginx-deployment": 
+&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595,GenerateName:,Namespace:deployment-4147,SelfLink:/apis/apps/v1/namespaces/deployment-4147/replicasets/nginx-deployment-5f9595f595,UID:80888f8f-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42410,Generation:3,CreationTimestamp:2019-06-18 11:37:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7f518857-91bd-11e9-8d87-0a902858a792 0xc002f3d0c7 0xc002f3d0c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 18 11:37:54.340: INFO: All old ReplicaSets of Deployment "nginx-deployment": +Jun 18 11:37:54.340: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8,GenerateName:,Namespace:deployment-4147,SelfLink:/apis/apps/v1/namespaces/deployment-4147/replicasets/nginx-deployment-6f478d8d8,UID:7f52bb1b-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42384,Generation:3,CreationTimestamp:2019-06-18 11:37:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7f518857-91bd-11e9-8d87-0a902858a792 0xc002f3d197 0xc002f3d198}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
6f478d8d8,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} +Jun 18 11:37:54.346: INFO: Pod "nginx-deployment-5f9595f595-55pzd" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-55pzd,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-5f9595f595-55pzd,UID:808a5449-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42431,Generation:0,CreationTimestamp:2019-06-18 11:37:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.0.207/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 80888f8f-91bd-11e9-8999-0a07e7e61ed8 0xc002f3db07 0xc002f3db08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-16-178,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f3db80} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002f3dba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC }],Message:,Reason:,HostIP:172.26.16.178,PodIP:10.42.0.207,StartTime:2019-06-18 11:37:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "nginx:404",} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.346: INFO: Pod "nginx-deployment-5f9595f595-5s7g4" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-5s7g4,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-5f9595f595-5s7g4,UID:81c54ff1-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42485,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.0.211/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 80888f8f-91bd-11e9-8999-0a07e7e61ed8 0xc002f3dca0 0xc002f3dca1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-16-178,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f3dd20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f3dd40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.16.178,PodIP:10.42.0.211,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: manifest for nginx:404 not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.346: INFO: Pod "nginx-deployment-5f9595f595-b46k7" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-b46k7,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-5f9595f595-b46k7,UID:81c6b4a4-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42486,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.2.202/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 80888f8f-91bd-11e9-8999-0a07e7e61ed8 0xc002f3de40 0xc002f3de41}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-17-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f3dec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f3dee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.17.1,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.346: INFO: Pod "nginx-deployment-5f9595f595-hv8ng" is not available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-hv8ng,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-5f9595f595-hv8ng,UID:808a89e4-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42323,Generation:0,CreationTimestamp:2019-06-18 11:37:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.1.197/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 80888f8f-91bd-11e9-8999-0a07e7e61ed8 0xc002f3dfc0 0xc002f3dfc1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-30-38,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dc040} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dc060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC }],Message:,Reason:,HostIP:172.26.30.38,PodIP:10.42.1.197,StartTime:2019-06-18 11:37:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: manifest for nginx:404 not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.346: INFO: Pod "nginx-deployment-5f9595f595-j2hff" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-j2hff,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-5f9595f595-j2hff,UID:8092b8e5-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42441,Generation:0,CreationTimestamp:2019-06-18 11:37:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.2.195/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 80888f8f-91bd-11e9-8999-0a07e7e61ed8 0xc0027dc160 0xc0027dc161}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-17-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dc1e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dc200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC }],Message:,Reason:,HostIP:172.26.17.1,PodIP:,StartTime:2019-06-18 11:37:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: manifest for nginx:404 not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.346: INFO: Pod "nginx-deployment-5f9595f595-nkcwv" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-nkcwv,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-5f9595f595-nkcwv,UID:81c34bf7-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42491,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.0.208/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 80888f8f-91bd-11e9-8999-0a07e7e61ed8 0xc0027dc2f0 0xc0027dc2f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx 
nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-16-178,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dc370} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dc390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.16.178,PodIP:10.42.0.208,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: manifest for nginx:404 not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.346: INFO: Pod "nginx-deployment-5f9595f595-qf6lx" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-qf6lx,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-5f9595f595-qf6lx,UID:81c6a6be-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42476,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.1.203/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 80888f8f-91bd-11e9-8999-0a07e7e61ed8 0xc0027dc490 0xc0027dc491}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-30-38,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dc510} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dc530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.30.38,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.347: INFO: Pod "nginx-deployment-5f9595f595-spgwx" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-spgwx,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-5f9595f595-spgwx,UID:81c68cc8-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42483,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.2.199/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 80888f8f-91bd-11e9-8999-0a07e7e61ed8 0xc0027dc610 0xc0027dc611}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-17-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dc6a0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0027dc6c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.17.1,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.347: INFO: Pod "nginx-deployment-5f9595f595-sqp44" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-sqp44,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-5f9595f595-sqp44,UID:81c6b023-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42462,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.0.212/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 80888f8f-91bd-11e9-8999-0a07e7e61ed8 0xc0027dc7a0 0xc0027dc7a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-16-178,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dc820} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dc840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.16.178,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.347: INFO: Pod "nginx-deployment-5f9595f595-tn8t2" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-tn8t2,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-5f9595f595-tn8t2,UID:80942bb5-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42320,Generation:0,CreationTimestamp:2019-06-18 11:37:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.1.198/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 80888f8f-91bd-11e9-8999-0a07e7e61ed8 0xc0027dc920 0xc0027dc921}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-30-38,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dc9a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dc9c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC }],Message:,Reason:,HostIP:172.26.30.38,PodIP:10.42.1.198,StartTime:2019-06-18 11:37:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: manifest for nginx:404 not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.347: INFO: Pod "nginx-deployment-5f9595f595-txsxv" is not available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-txsxv,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-5f9595f595-txsxv,UID:81ca2a40-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42499,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.1.205/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 80888f8f-91bd-11e9-8999-0a07e7e61ed8 0xc0027dcac0 0xc0027dcac1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-30-38,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dcb40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dcb60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.30.38,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.347: INFO: Pod "nginx-deployment-5f9595f595-wp5cl" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-wp5cl,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-5f9595f595-wp5cl,UID:81c54f72-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42445,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.1.199/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 
80888f8f-91bd-11e9-8999-0a07e7e61ed8 0xc0027dcc40 0xc0027dcc41}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-30-38,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dccc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dcce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.30.38,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.347: INFO: Pod "nginx-deployment-5f9595f595-xptmq" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-xptmq,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-5f9595f595-xptmq,UID:80894678-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42317,Generation:0,CreationTimestamp:2019-06-18 11:37:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.2.193/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 80888f8f-91bd-11e9-8999-0a07e7e61ed8 0xc0027dcdc0 0xc0027dcdc1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-17-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dce40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dce60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC }],Message:,Reason:,HostIP:172.26.17.1,PodIP:10.42.2.193,StartTime:2019-06-18 11:37:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: manifest for nginx:404 not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.347: INFO: Pod "nginx-deployment-6f478d8d8-5gsnc" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-5gsnc,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-5gsnc,UID:7f5819c6-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42231,Generation:0,CreationTimestamp:2019-06-18 11:37:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.1.196/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc0027dcf60 0xc0027dcf61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-30-38,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dcfd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dcff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:48 +0000 UTC }],Message:,Reason:,HostIP:172.26.30.38,PodIP:10.42.1.196,StartTime:2019-06-18 11:37:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-18 11:37:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1c78628fca1b59c9e730f1518f4cf1adbcc7be27d9a5ebda92b9bf31edd40dbf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.347: INFO: Pod "nginx-deployment-6f478d8d8-7ghq8" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-7ghq8,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-7ghq8,UID:81c552b8-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42488,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.2.200/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc0027dd0d0 0xc0027dd0d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-17-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dd140} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dd160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.17.1,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.347: INFO: Pod "nginx-deployment-6f478d8d8-9pkd8" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-9pkd8,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-9pkd8,UID:81c273bc-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42446,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.2.196/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc0027dd230 0xc0027dd231}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-17-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dd2a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dd2c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.17.1,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.347: INFO: Pod "nginx-deployment-6f478d8d8-b2cbw" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-b2cbw,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-b2cbw,UID:7f567a9e-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42208,Generation:0,CreationTimestamp:2019-06-18 11:37:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.0.205/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc0027dd390 0xc0027dd391}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-16-178,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dd400} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dd420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:48 +0000 UTC }],Message:,Reason:,HostIP:172.26.16.178,PodIP:10.42.0.205,StartTime:2019-06-18 11:37:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-18 11:37:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://69b68b3e5cf2e4403c2cd9c9486b0d3fe49d76de936805b368786a7f95a080bb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.348: INFO: Pod "nginx-deployment-6f478d8d8-b2xm9" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-b2xm9,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-b2xm9,UID:7f580661-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42202,Generation:0,CreationTimestamp:2019-06-18 11:37:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.0.206/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc0027dd500 0xc0027dd501}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-16-178,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dd570} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dd590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:48 +0000 UTC }],Message:,Reason:,HostIP:172.26.16.178,PodIP:10.42.0.206,StartTime:2019-06-18 11:37:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-18 11:37:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://7e3e4f82d179ccbd16b12d21306f4721f47ecaee48994a98da4477f6ae8c5730}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.348: INFO: Pod "nginx-deployment-6f478d8d8-gxxx6" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-gxxx6,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-gxxx6,UID:81c70e36-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42467,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.0.213/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc0027dd670 0xc0027dd671}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-16-178,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dd6e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dd700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.16.178,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.348: INFO: Pod "nginx-deployment-6f478d8d8-hcv8h" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-hcv8h,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-hcv8h,UID:7f5683b2-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42219,Generation:0,CreationTimestamp:2019-06-18 11:37:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.2.192/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc0027dd7d7 0xc0027dd7d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-17-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dd850} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dd870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:48 +0000 UTC }],Message:,Reason:,HostIP:172.26.17.1,PodIP:10.42.2.192,StartTime:2019-06-18 11:37:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-18 11:37:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9111d60d8dc3b9d3b833b4f57314197dcbcd04c233644036539402d9502c261b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.348: INFO: Pod "nginx-deployment-6f478d8d8-hsqkw" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-hsqkw,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-hsqkw,UID:81c4f844-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42465,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.2.197/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc0027dd950 0xc0027dd951}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-17-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dd9c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dd9e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.17.1,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.348: INFO: Pod "nginx-deployment-6f478d8d8-lq8s8" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-lq8s8,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-lq8s8,UID:81c72138-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42490,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.2.201/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc0027ddab0 0xc0027ddab1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-17-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027ddb20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027ddb40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.17.1,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.348: INFO: Pod "nginx-deployment-6f478d8d8-m8l7q" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-m8l7q,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-m8l7q,UID:81c533a8-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42477,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.1.204/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc0027ddc10 0xc0027ddc11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-30-38,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027ddc80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027ddca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.30.38,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.348: INFO: Pod "nginx-deployment-6f478d8d8-ml7fd" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-ml7fd,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-ml7fd,UID:81c71377-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42482,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.2.198/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc0027ddd77 0xc0027ddd78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-17-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027dddf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027dde10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.17.1,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.348: INFO: Pod "nginx-deployment-6f478d8d8-mvr8c" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-mvr8c,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-mvr8c,UID:81c39662-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42442,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.0.210/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc0027ddee0 0xc0027ddee1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-16-178,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027ddf50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027ddf70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.16.178,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.348: INFO: Pod "nginx-deployment-6f478d8d8-n99lp" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-n99lp,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-n99lp,UID:7f57f620-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42213,Generation:0,CreationTimestamp:2019-06-18 11:37:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.2.191/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc002fa8047 0xc002fa8048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-17-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa80c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa81b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:48 +0000 UTC }],Message:,Reason:,HostIP:172.26.17.1,PodIP:10.42.2.191,StartTime:2019-06-18 11:37:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-18 11:37:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3e2b7ab6c6728a7e8f583e70ef378116f2caf2e64455c1de94991ca0441fdaf5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.349: INFO: Pod "nginx-deployment-6f478d8d8-qsk64" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-qsk64,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-qsk64,UID:7f569943-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42222,Generation:0,CreationTimestamp:2019-06-18 11:37:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.1.195/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc002fa82b0 0xc002fa82b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-30-38,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa8320} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa8340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:48 +0000 UTC }],Message:,Reason:,HostIP:172.26.30.38,PodIP:10.42.1.195,StartTime:2019-06-18 11:37:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-18 11:37:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bb569d4a1e03a8f5968a95233884006c33fee009cf6accf1f5a92add41608f72}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.349: INFO: Pod "nginx-deployment-6f478d8d8-vlrjd" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-vlrjd,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-vlrjd,UID:81c373a2-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42444,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.1.200/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc002fa84e0 0xc002fa84e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-30-38,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa8590} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa85b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.30.38,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.349: INFO: Pod "nginx-deployment-6f478d8d8-w86bq" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-w86bq,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-w86bq,UID:81c721d9-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42478,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.1.202/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc002fa8687 0xc002fa8688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-30-38,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa8710} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa8730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.30.38,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.349: INFO: Pod "nginx-deployment-6f478d8d8-x25wd" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-x25wd,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-x25wd,UID:81c55c8d-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42439,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.0.209/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc002fa8817 0xc002fa8818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-16-178,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa88a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa88c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.16.178,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.349: INFO: Pod "nginx-deployment-6f478d8d8-xxfvk" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-xxfvk,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-xxfvk,UID:7f56c425-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42225,Generation:0,CreationTimestamp:2019-06-18 11:37:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.1.194/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc002fa8997 0xc002fa8998}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-30-38,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa8a10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa8a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:48 +0000 UTC }],Message:,Reason:,HostIP:172.26.30.38,PodIP:10.42.1.194,StartTime:2019-06-18 11:37:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-18 11:37:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2d98b0e66b5fe6b5c4a6f52197da3665f1fe33c9b59d1ea6ab474e4ad2940a38}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.349: INFO: Pod "nginx-deployment-6f478d8d8-zjtrh" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-zjtrh,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-zjtrh,UID:7f558113-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42205,Generation:0,CreationTimestamp:2019-06-18 11:37:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.0.204/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc002fa8b30 0xc002fa8b31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-16-178,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa8bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa8bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:48 +0000 UTC }],Message:,Reason:,HostIP:172.26.16.178,PodIP:10.42.0.204,StartTime:2019-06-18 11:37:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-18 11:37:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://65291139bd19087f4214ed5936730b8c93fe89f515728a57415da2ace76b5dd8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 18 11:37:54.349: INFO: Pod "nginx-deployment-6f478d8d8-zvpjg" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-zvpjg,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-4147,SelfLink:/api/v1/namespaces/deployment-4147/pods/nginx-deployment-6f478d8d8-zvpjg,UID:81c6ec17-91bd-11e9-8999-0a07e7e61ed8,ResourceVersion:42460,Generation:0,CreationTimestamp:2019-06-18 11:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.1.201/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 7f52bb1b-91bd-11e9-8999-0a07e7e61ed8 0xc002fa8cc0 0xc002fa8cc1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bh8mc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bh8mc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bh8mc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-30-38,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fa8d30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fa8d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.26.30.38,PodIP:,StartTime:2019-06-18 11:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:37:54.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-4147" for this suite. 
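Editor's note: the pod dump above comes from the Deployment proportional-scaling conformance case. A minimal sketch of reproducing that behavior by hand — the deployment name matches the log, but the container name `nginx` and the second image tag are assumptions, not taken from the log:
```sh
# Create a deployment, then scale it while a rolling update is in flight;
# the controller grows/shrinks the old and new ReplicaSets in proportion
# to their current sizes rather than all at once.
kubectl create deployment nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl scale deployment nginx-deployment --replicas=10
kubectl set image deployment/nginx-deployment nginx=docker.io/library/nginx:1.15-alpine  # container name assumed
kubectl scale deployment nginx-deployment --replicas=20
kubectl get rs --watch
```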
+Jun 18 11:38:02.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:38:02.475: INFO: namespace deployment-4147 deletion completed in 8.121615684s + +• [SLOW TEST:14.272 seconds] +[sig-apps] Deployment +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + deployment should support proportional scaling [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:38:02.475: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name cm-test-opt-del-87d39428-91bd-11e9-8aef-6ab77b36fff7 +STEP: Creating configMap with name cm-test-opt-upd-87d39466-91bd-11e9-8aef-6ab77b36fff7 +STEP: Creating the pod +STEP: Deleting configmap cm-test-opt-del-87d39428-91bd-11e9-8aef-6ab77b36fff7 +STEP: Updating configmap cm-test-opt-upd-87d39466-91bd-11e9-8aef-6ab77b36fff7 +STEP: Creating configMap with name cm-test-opt-create-87d39488-91bd-11e9-8aef-6ab77b36fff7 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:38:06.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5613" for this suite. 
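Editor's note: the projected-configMap case above deletes, updates, and creates *optional* configMaps and waits for the mounted volume to converge. A minimal sketch of such a volume, with all names illustrative rather than taken from the test:
```sh
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: demo
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-test-opt   # may not exist yet
          optional: true      # pod still starts; keys appear once the configMap does
EOF
```
Because the source is optional, the kubelet keeps the pod running and refreshes /etc/cfg as the configMap is created, updated, or deleted — which is exactly what the test observes.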
+Jun 18 11:38:28.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:38:28.744: INFO: namespace projected-5613 deletion completed in 22.125659967s + +• [SLOW TEST:26.269 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:38:28.744: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 +[It] should have an terminated reason [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:38:32.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-7288" for this suite. 
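Editor's note: the kubelet case above logs no intermediate steps, so for clarity: it runs a container that always fails and asserts that the container status carries a terminated reason. A minimal sketch (pod name and image are illustrative):
```sh
# Run a container that exits non-zero, then read its terminated reason.
kubectl run fail-pod --image=busybox:1.29 --restart=Never -- sh -c 'exit 1'
sleep 5   # give the kubelet a moment to record the exit
kubectl get pod fail-pod -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
# expected output: Error
```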
+Jun 18 11:38:38.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:38:38.926: INFO: namespace kubelet-test-7288 deletion completed in 6.122698696s + +• [SLOW TEST:10.182 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when scheduling a busybox command that always fails in a pod + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 + should have an terminated reason [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:38:38.926: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 18 11:38:38.970: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d8cb734-91bd-11e9-8aef-6ab77b36fff7" in namespace "projected-6811" to be "success or failure" +Jun 18 11:38:38.976: INFO: Pod "downwardapi-volume-9d8cb734-91bd-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.408295ms +Jun 18 11:38:40.980: INFO: Pod "downwardapi-volume-9d8cb734-91bd-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009856309s +STEP: Saw pod success +Jun 18 11:38:40.980: INFO: Pod "downwardapi-volume-9d8cb734-91bd-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:38:40.983: INFO: Trying to get logs from node ip-172-26-30-38 pod downwardapi-volume-9d8cb734-91bd-11e9-8aef-6ab77b36fff7 container client-container: +STEP: delete the pod +Jun 18 11:38:41.006: INFO: Waiting for pod downwardapi-volume-9d8cb734-91bd-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:38:41.010: INFO: Pod downwardapi-volume-9d8cb734-91bd-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:38:41.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6811" for this suite. 
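Editor's note: the projected-downwardAPI case above creates a pod whose volume exposes the container's memory request and checks the file contents. A minimal sketch with illustrative names; with the default divisor the file holds the request in bytes (33554432 for 32Mi):
```sh
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF
```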
+Jun 18 11:38:47.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:38:47.138: INFO: namespace projected-6811 deletion completed in 6.122319533s + +• [SLOW TEST:8.213 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run deployment + should create a deployment from an image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:38:47.139: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl run deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1455 +[It] should create a deployment from an image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 18 11:38:47.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=kubectl-4190' +Jun 18 11:38:47.248: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Jun 18 11:38:47.248: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" +STEP: verifying the deployment e2e-test-nginx-deployment was created +STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created +[AfterEach] [k8s.io] Kubectl run deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 +Jun 18 11:38:49.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete deployment e2e-test-nginx-deployment --namespace=kubectl-4190' +Jun 18 11:38:49.332: INFO: stderr: "" +Jun 18 11:38:49.332: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:38:49.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4190" for this suite. +Jun 18 11:39:11.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:39:11.457: INFO: namespace kubectl-4190 deletion completed in 22.12083991s + +• [SLOW TEST:24.319 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl run deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create a deployment from an image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Service endpoints latency + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:39:11.458: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename svc-latency +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be very high [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating replication controller svc-latency-rc in namespace svc-latency-2247 +I0618 11:39:11.499743 14 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2247, replica count: 1 +I0618 11:39:12.550274 14 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0618 11:39:13.550540 14 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jun 18 11:39:13.665: INFO: Created: 
latency-svc-psjlw +Jun 18 11:39:13.670: INFO: Got endpoints: latency-svc-psjlw [19.566021ms] +Jun 18 11:39:13.684: INFO: Created: latency-svc-7rp2t +Jun 18 11:39:13.690: INFO: Got endpoints: latency-svc-7rp2t [20.153492ms] +Jun 18 11:39:13.694: INFO: Created: latency-svc-cfjkl +Jun 18 11:39:13.702: INFO: Got endpoints: latency-svc-cfjkl [31.693499ms] +Jun 18 11:39:13.703: INFO: Created: latency-svc-mrqng +Jun 18 11:39:13.711: INFO: Got endpoints: latency-svc-mrqng [40.551205ms] +Jun 18 11:39:13.711: INFO: Created: latency-svc-ztb62 +Jun 18 11:39:13.718: INFO: Got endpoints: latency-svc-ztb62 [47.743603ms] +Jun 18 11:39:13.719: INFO: Created: latency-svc-5v8lf +Jun 18 11:39:13.725: INFO: Created: latency-svc-sm78q +Jun 18 11:39:13.727: INFO: Got endpoints: latency-svc-5v8lf [57.104075ms] +Jun 18 11:39:13.733: INFO: Created: latency-svc-cj6mk +Jun 18 11:39:13.736: INFO: Got endpoints: latency-svc-sm78q [65.871808ms] +Jun 18 11:39:13.740: INFO: Created: latency-svc-bw4h8 +Jun 18 11:39:13.751: INFO: Got endpoints: latency-svc-cj6mk [80.250755ms] +Jun 18 11:39:13.752: INFO: Got endpoints: latency-svc-bw4h8 [81.926439ms] +Jun 18 11:39:13.752: INFO: Created: latency-svc-c22ct +Jun 18 11:39:13.761: INFO: Created: latency-svc-f4hj5 +Jun 18 11:39:13.762: INFO: Got endpoints: latency-svc-c22ct [91.495549ms] +Jun 18 11:39:13.767: INFO: Got endpoints: latency-svc-f4hj5 [96.128167ms] +Jun 18 11:39:13.769: INFO: Created: latency-svc-4ddz5 +Jun 18 11:39:13.775: INFO: Got endpoints: latency-svc-4ddz5 [104.559556ms] +Jun 18 11:39:13.776: INFO: Created: latency-svc-mwt47 +Jun 18 11:39:13.784: INFO: Created: latency-svc-pwd7m +Jun 18 11:39:13.785: INFO: Got endpoints: latency-svc-mwt47 [114.544703ms] +Jun 18 11:39:13.793: INFO: Got endpoints: latency-svc-pwd7m [122.199254ms] +Jun 18 11:39:13.793: INFO: Created: latency-svc-lnkmm +Jun 18 11:39:13.799: INFO: Got endpoints: latency-svc-lnkmm [128.87845ms] +Jun 18 11:39:13.801: INFO: Created: latency-svc-frhp9 +Jun 18 11:39:13.807: INFO: Got endpoints: latency-svc-frhp9 [136.643247ms] +Jun 18 11:39:13.811: INFO: Created: latency-svc-kkqc8 +Jun 18 11:39:13.818: INFO: Got endpoints: latency-svc-kkqc8 [127.976608ms] +Jun 18 11:39:13.820: INFO: Created: latency-svc-zdxgd +Jun 18 11:39:13.828: INFO: Got endpoints: latency-svc-zdxgd [125.734734ms] +Jun 18 11:39:13.830: INFO: Created: latency-svc-ntb9c +Jun 18 11:39:13.836: INFO: Got endpoints: latency-svc-ntb9c [125.097189ms] +Jun 18 11:39:13.839: INFO: Created: latency-svc-69gqq +Jun 18 11:39:13.845: INFO: Got endpoints: latency-svc-69gqq [127.0236ms] +Jun 18 11:39:13.846: INFO: Created: latency-svc-hs6nv +Jun 18 11:39:13.853: INFO: Got endpoints: latency-svc-hs6nv [125.642382ms] +Jun 18 11:39:13.854: INFO: Created: latency-svc-gv9q6 +Jun 18 11:39:13.861: INFO: Got endpoints: latency-svc-gv9q6 [124.519319ms] +Jun 18 11:39:13.862: INFO: Created: latency-svc-ss2tv +Jun 18 11:39:13.868: INFO: Got endpoints: latency-svc-ss2tv [116.902633ms] +Jun 18 11:39:13.869: INFO: Created: latency-svc-tlxpk +Jun 18 11:39:13.877: INFO: Got endpoints: latency-svc-tlxpk [124.519238ms] +Jun 18 11:39:13.878: INFO: Created: latency-svc-f5vfc +Jun 18 11:39:13.886: INFO: Got endpoints: latency-svc-f5vfc [124.152089ms] +Jun 18 11:39:13.887: INFO: Created: latency-svc-dh5jl +Jun 18 11:39:13.900: INFO: Got endpoints: latency-svc-dh5jl [133.800867ms] +Jun 18 11:39:13.903: INFO: Created: latency-svc-h294v +Jun 18 11:39:13.910: INFO: Got endpoints: latency-svc-h294v [134.603411ms] +Jun 18 11:39:13.913: INFO: Created: latency-svc-xbwzg +Jun 18 
11:39:13.919: INFO: Got endpoints: latency-svc-xbwzg [133.5875ms] +Jun 18 11:39:13.921: INFO: Created: latency-svc-vxjgp +Jun 18 11:39:13.927: INFO: Got endpoints: latency-svc-vxjgp [133.919699ms] +Jun 18 11:39:13.928: INFO: Created: latency-svc-xmpgg +Jun 18 11:39:13.941: INFO: Got endpoints: latency-svc-xmpgg [141.307235ms] +Jun 18 11:39:13.945: INFO: Created: latency-svc-w89x9 +Jun 18 11:39:14.018: INFO: Got endpoints: latency-svc-w89x9 [211.141468ms] +Jun 18 11:39:14.020: INFO: Created: latency-svc-jp96v +Jun 18 11:39:14.027: INFO: Got endpoints: latency-svc-jp96v [208.514592ms] +Jun 18 11:39:14.029: INFO: Created: latency-svc-94fnw +Jun 18 11:39:14.038: INFO: Created: latency-svc-grxzf +Jun 18 11:39:14.038: INFO: Got endpoints: latency-svc-94fnw [210.815831ms] +Jun 18 11:39:14.047: INFO: Created: latency-svc-cdf9w +Jun 18 11:39:14.047: INFO: Got endpoints: latency-svc-grxzf [211.178623ms] +Jun 18 11:39:14.056: INFO: Got endpoints: latency-svc-cdf9w [211.316075ms] +Jun 18 11:39:14.057: INFO: Created: latency-svc-qb5f5 +Jun 18 11:39:14.064: INFO: Got endpoints: latency-svc-qb5f5 [210.48311ms] +Jun 18 11:39:14.065: INFO: Created: latency-svc-wpdcv +Jun 18 11:39:14.071: INFO: Got endpoints: latency-svc-wpdcv [210.312256ms] +Jun 18 11:39:14.072: INFO: Created: latency-svc-55g7b +Jun 18 11:39:14.080: INFO: Created: latency-svc-92t5q +Jun 18 11:39:14.080: INFO: Got endpoints: latency-svc-55g7b [212.014585ms] +Jun 18 11:39:14.086: INFO: Created: latency-svc-tl9wv +Jun 18 11:39:14.094: INFO: Created: latency-svc-bggb6 +Jun 18 11:39:14.105: INFO: Created: latency-svc-jpv6q +Jun 18 11:39:14.112: INFO: Created: latency-svc-msxd5 +Jun 18 11:39:14.120: INFO: Created: latency-svc-rwm7w +Jun 18 11:39:14.121: INFO: Got endpoints: latency-svc-92t5q [243.850053ms] +Jun 18 11:39:14.130: INFO: Created: latency-svc-7dv7k +Jun 18 11:39:14.139: INFO: Created: latency-svc-htwv5 +Jun 18 11:39:14.146: INFO: Created: latency-svc-dghzd +Jun 18 11:39:14.155: INFO: Created: latency-svc-fm2bd +Jun 18 11:39:14.164: INFO: Created: latency-svc-sjfvw +Jun 18 11:39:14.170: INFO: Got endpoints: latency-svc-tl9wv [283.95975ms] +Jun 18 11:39:14.173: INFO: Created: latency-svc-fz96b +Jun 18 11:39:14.181: INFO: Created: latency-svc-c678m +Jun 18 11:39:14.195: INFO: Created: latency-svc-p8ggr +Jun 18 11:39:14.205: INFO: Created: latency-svc-tzn7b +Jun 18 11:39:14.215: INFO: Created: latency-svc-nvq29 +Jun 18 11:39:14.220: INFO: Got endpoints: latency-svc-bggb6 [319.180469ms] +Jun 18 11:39:14.222: INFO: Created: latency-svc-4sslf +Jun 18 11:39:14.231: INFO: Created: latency-svc-dmn9d +Jun 18 11:39:14.271: INFO: Got endpoints: latency-svc-jpv6q [361.248325ms] +Jun 18 11:39:14.282: INFO: Created: latency-svc-r9hfh +Jun 18 11:39:14.322: INFO: Got endpoints: latency-svc-msxd5 [402.928562ms] +Jun 18 11:39:14.333: INFO: Created: latency-svc-g4hp6 +Jun 18 11:39:14.370: INFO: Got endpoints: latency-svc-rwm7w [443.465457ms] +Jun 18 11:39:14.382: INFO: Created: latency-svc-vk2vd +Jun 18 11:39:14.421: INFO: Got endpoints: latency-svc-7dv7k [479.829646ms] +Jun 18 11:39:14.432: INFO: Created: latency-svc-6z4vp +Jun 18 11:39:14.471: INFO: Got endpoints: latency-svc-htwv5 [452.349486ms] +Jun 18 11:39:14.482: INFO: Created: latency-svc-x8gbc +Jun 18 11:39:14.521: INFO: Got endpoints: latency-svc-dghzd [494.172974ms] +Jun 18 11:39:14.548: INFO: Created: latency-svc-8zzz5 +Jun 18 11:39:14.570: INFO: Got endpoints: latency-svc-fm2bd [531.870947ms] +Jun 18 11:39:14.582: INFO: Created: latency-svc-pbllg +Jun 18 11:39:14.620: INFO: Got endpoints: 
latency-svc-sjfvw [573.349792ms] +Jun 18 11:39:14.633: INFO: Created: latency-svc-pl2bs +Jun 18 11:39:14.672: INFO: Got endpoints: latency-svc-fz96b [615.593247ms] +Jun 18 11:39:14.688: INFO: Created: latency-svc-rlsgd +Jun 18 11:39:14.721: INFO: Got endpoints: latency-svc-c678m [656.942142ms] +Jun 18 11:39:14.731: INFO: Created: latency-svc-6g2vg +Jun 18 11:39:14.771: INFO: Got endpoints: latency-svc-p8ggr [699.38579ms] +Jun 18 11:39:14.782: INFO: Created: latency-svc-6nnz9 +Jun 18 11:39:14.820: INFO: Got endpoints: latency-svc-tzn7b [740.783781ms] +Jun 18 11:39:14.832: INFO: Created: latency-svc-kss9x +Jun 18 11:39:14.870: INFO: Got endpoints: latency-svc-nvq29 [749.754437ms] +Jun 18 11:39:14.881: INFO: Created: latency-svc-7x8vm +Jun 18 11:39:14.921: INFO: Got endpoints: latency-svc-4sslf [750.464628ms] +Jun 18 11:39:14.932: INFO: Created: latency-svc-chhmv +Jun 18 11:39:14.975: INFO: Got endpoints: latency-svc-dmn9d [755.044415ms] +Jun 18 11:39:14.987: INFO: Created: latency-svc-kpsws +Jun 18 11:39:15.020: INFO: Got endpoints: latency-svc-r9hfh [749.248625ms] +Jun 18 11:39:15.032: INFO: Created: latency-svc-gdv8g +Jun 18 11:39:15.070: INFO: Got endpoints: latency-svc-g4hp6 [748.595259ms] +Jun 18 11:39:15.082: INFO: Created: latency-svc-jt26q +Jun 18 11:39:15.120: INFO: Got endpoints: latency-svc-vk2vd [750.237924ms] +Jun 18 11:39:15.132: INFO: Created: latency-svc-4xhbw +Jun 18 11:39:15.171: INFO: Got endpoints: latency-svc-6z4vp [750.251511ms] +Jun 18 11:39:15.183: INFO: Created: latency-svc-ck947 +Jun 18 11:39:15.221: INFO: Got endpoints: latency-svc-x8gbc [749.873224ms] +Jun 18 11:39:15.232: INFO: Created: latency-svc-c48z9 +Jun 18 11:39:15.271: INFO: Got endpoints: latency-svc-8zzz5 [749.892941ms] +Jun 18 11:39:15.282: INFO: Created: latency-svc-9pdsz +Jun 18 11:39:15.329: INFO: Got endpoints: latency-svc-pbllg [758.988175ms] +Jun 18 11:39:15.341: INFO: Created: latency-svc-59n7f +Jun 18 11:39:15.370: INFO: Got endpoints: latency-svc-pl2bs [749.82352ms] +Jun 18 11:39:15.382: INFO: Created: latency-svc-pf78w +Jun 18 11:39:15.421: INFO: Got endpoints: latency-svc-rlsgd [748.517726ms] +Jun 18 11:39:15.432: INFO: Created: latency-svc-9ptth +Jun 18 11:39:15.470: INFO: Got endpoints: latency-svc-6g2vg [749.528705ms] +Jun 18 11:39:15.484: INFO: Created: latency-svc-2p4tn +Jun 18 11:39:15.520: INFO: Got endpoints: latency-svc-6nnz9 [749.556466ms] +Jun 18 11:39:15.532: INFO: Created: latency-svc-7bv9m +Jun 18 11:39:15.570: INFO: Got endpoints: latency-svc-kss9x [749.458664ms] +Jun 18 11:39:15.582: INFO: Created: latency-svc-vxg9x +Jun 18 11:39:15.620: INFO: Got endpoints: latency-svc-7x8vm [749.571272ms] +Jun 18 11:39:15.631: INFO: Created: latency-svc-b67x7 +Jun 18 11:39:15.670: INFO: Got endpoints: latency-svc-chhmv [749.403413ms] +Jun 18 11:39:15.682: INFO: Created: latency-svc-nkx2l +Jun 18 11:39:15.721: INFO: Got endpoints: latency-svc-kpsws [745.977828ms] +Jun 18 11:39:15.732: INFO: Created: latency-svc-gzvlq +Jun 18 11:39:15.771: INFO: Got endpoints: latency-svc-gdv8g [750.666529ms] +Jun 18 11:39:15.781: INFO: Created: latency-svc-kn67d +Jun 18 11:39:15.820: INFO: Got endpoints: latency-svc-jt26q [749.936358ms] +Jun 18 11:39:15.832: INFO: Created: latency-svc-8s4cp +Jun 18 11:39:15.870: INFO: Got endpoints: latency-svc-4xhbw [749.409935ms] +Jun 18 11:39:15.881: INFO: Created: latency-svc-vs8fl +Jun 18 11:39:15.920: INFO: Got endpoints: latency-svc-ck947 [749.465911ms] +Jun 18 11:39:15.933: INFO: Created: latency-svc-8pkc2 +Jun 18 11:39:15.970: INFO: Got endpoints: latency-svc-c48z9 
[749.174367ms] +Jun 18 11:39:15.981: INFO: Created: latency-svc-jbffp +Jun 18 11:39:16.021: INFO: Got endpoints: latency-svc-9pdsz [750.386184ms] +Jun 18 11:39:16.033: INFO: Created: latency-svc-nztkn +Jun 18 11:39:16.070: INFO: Got endpoints: latency-svc-59n7f [740.674709ms] +Jun 18 11:39:16.082: INFO: Created: latency-svc-z8dc7 +Jun 18 11:39:16.122: INFO: Got endpoints: latency-svc-pf78w [751.288535ms] +Jun 18 11:39:16.135: INFO: Created: latency-svc-44jxt +Jun 18 11:39:16.171: INFO: Got endpoints: latency-svc-9ptth [750.269272ms] +Jun 18 11:39:16.182: INFO: Created: latency-svc-wh848 +Jun 18 11:39:16.220: INFO: Got endpoints: latency-svc-2p4tn [750.041312ms] +Jun 18 11:39:16.241: INFO: Created: latency-svc-fwkvs +Jun 18 11:39:16.272: INFO: Got endpoints: latency-svc-7bv9m [751.81716ms] +Jun 18 11:39:16.284: INFO: Created: latency-svc-w9n84 +Jun 18 11:39:16.320: INFO: Got endpoints: latency-svc-vxg9x [750.380087ms] +Jun 18 11:39:16.332: INFO: Created: latency-svc-8cbzj +Jun 18 11:39:16.372: INFO: Got endpoints: latency-svc-b67x7 [751.623729ms] +Jun 18 11:39:16.383: INFO: Created: latency-svc-zpqq6 +Jun 18 11:39:16.423: INFO: Got endpoints: latency-svc-nkx2l [752.713478ms] +Jun 18 11:39:16.435: INFO: Created: latency-svc-tljgm +Jun 18 11:39:16.470: INFO: Got endpoints: latency-svc-gzvlq [749.393503ms] +Jun 18 11:39:16.482: INFO: Created: latency-svc-q884h +Jun 18 11:39:16.522: INFO: Got endpoints: latency-svc-kn67d [751.171266ms] +Jun 18 11:39:16.543: INFO: Created: latency-svc-hn8fw +Jun 18 11:39:16.570: INFO: Got endpoints: latency-svc-8s4cp [749.957871ms] +Jun 18 11:39:16.582: INFO: Created: latency-svc-mfrv9 +Jun 18 11:39:16.620: INFO: Got endpoints: latency-svc-vs8fl [750.514277ms] +Jun 18 11:39:16.632: INFO: Created: latency-svc-8qfjd +Jun 18 11:39:16.671: INFO: Got endpoints: latency-svc-8pkc2 [750.693945ms] +Jun 18 11:39:16.683: INFO: Created: latency-svc-z9lfw +Jun 18 11:39:16.720: INFO: Got endpoints: latency-svc-jbffp [750.460726ms] +Jun 18 11:39:16.733: INFO: Created: latency-svc-gkvzd +Jun 18 11:39:16.770: INFO: Got endpoints: latency-svc-nztkn [748.716382ms] +Jun 18 11:39:16.783: INFO: Created: latency-svc-6b6h9 +Jun 18 11:39:16.821: INFO: Got endpoints: latency-svc-z8dc7 [750.607985ms] +Jun 18 11:39:16.835: INFO: Created: latency-svc-qsgwp +Jun 18 11:39:16.870: INFO: Got endpoints: latency-svc-44jxt [748.164662ms] +Jun 18 11:39:16.882: INFO: Created: latency-svc-5j275 +Jun 18 11:39:16.920: INFO: Got endpoints: latency-svc-wh848 [749.498747ms] +Jun 18 11:39:16.931: INFO: Created: latency-svc-wn52s +Jun 18 11:39:16.971: INFO: Got endpoints: latency-svc-fwkvs [750.378681ms] +Jun 18 11:39:16.982: INFO: Created: latency-svc-zzndh +Jun 18 11:39:17.020: INFO: Got endpoints: latency-svc-w9n84 [748.23139ms] +Jun 18 11:39:17.033: INFO: Created: latency-svc-h79v5 +Jun 18 11:39:17.071: INFO: Got endpoints: latency-svc-8cbzj [750.595385ms] +Jun 18 11:39:17.083: INFO: Created: latency-svc-rpbff +Jun 18 11:39:17.124: INFO: Got endpoints: latency-svc-zpqq6 [752.240169ms] +Jun 18 11:39:17.135: INFO: Created: latency-svc-4svvm +Jun 18 11:39:17.170: INFO: Got endpoints: latency-svc-tljgm [747.151276ms] +Jun 18 11:39:17.181: INFO: Created: latency-svc-v8bwj +Jun 18 11:39:17.221: INFO: Got endpoints: latency-svc-q884h [750.726785ms] +Jun 18 11:39:17.233: INFO: Created: latency-svc-5g28t +Jun 18 11:39:17.270: INFO: Got endpoints: latency-svc-hn8fw [748.080629ms] +Jun 18 11:39:17.286: INFO: Created: latency-svc-dwtzc +Jun 18 11:39:17.320: INFO: Got endpoints: latency-svc-mfrv9 [749.631524ms] +Jun 
18 11:39:17.334: INFO: Created: latency-svc-wfh7m +Jun 18 11:39:17.371: INFO: Got endpoints: latency-svc-8qfjd [750.525242ms] +Jun 18 11:39:17.382: INFO: Created: latency-svc-xnl6w +Jun 18 11:39:17.421: INFO: Got endpoints: latency-svc-z9lfw [749.505343ms] +Jun 18 11:39:17.432: INFO: Created: latency-svc-9mbxn +Jun 18 11:39:17.470: INFO: Got endpoints: latency-svc-gkvzd [749.791899ms] +Jun 18 11:39:17.481: INFO: Created: latency-svc-sskk8 +Jun 18 11:39:17.527: INFO: Got endpoints: latency-svc-6b6h9 [756.986266ms] +Jun 18 11:39:17.540: INFO: Created: latency-svc-kk8bl +Jun 18 11:39:17.571: INFO: Got endpoints: latency-svc-qsgwp [750.442049ms] +Jun 18 11:39:17.583: INFO: Created: latency-svc-4kn4b +Jun 18 11:39:17.621: INFO: Got endpoints: latency-svc-5j275 [750.957486ms] +Jun 18 11:39:17.638: INFO: Created: latency-svc-rf29c +Jun 18 11:39:17.670: INFO: Got endpoints: latency-svc-wn52s [749.618473ms] +Jun 18 11:39:17.696: INFO: Created: latency-svc-dlv9t +Jun 18 11:39:17.720: INFO: Got endpoints: latency-svc-zzndh [749.738548ms] +Jun 18 11:39:17.732: INFO: Created: latency-svc-tv4vg +Jun 18 11:39:17.770: INFO: Got endpoints: latency-svc-h79v5 [749.912109ms] +Jun 18 11:39:17.783: INFO: Created: latency-svc-8rv94 +Jun 18 11:39:17.819: INFO: Got endpoints: latency-svc-rpbff [748.448308ms] +Jun 18 11:39:17.833: INFO: Created: latency-svc-s2n7f +Jun 18 11:39:17.870: INFO: Got endpoints: latency-svc-4svvm [746.382524ms] +Jun 18 11:39:17.881: INFO: Created: latency-svc-g2hjs +Jun 18 11:39:17.920: INFO: Got endpoints: latency-svc-v8bwj [750.265387ms] +Jun 18 11:39:17.931: INFO: Created: latency-svc-dznkj +Jun 18 11:39:17.975: INFO: Got endpoints: latency-svc-5g28t [753.897496ms] +Jun 18 11:39:17.990: INFO: Created: latency-svc-t6n6h +Jun 18 11:39:18.020: INFO: Got endpoints: latency-svc-dwtzc [750.127936ms] +Jun 18 11:39:18.032: INFO: Created: latency-svc-l24bs +Jun 18 11:39:18.070: INFO: Got endpoints: latency-svc-wfh7m [750.503928ms] +Jun 18 11:39:18.081: INFO: Created: latency-svc-tmnd4 +Jun 18 11:39:18.122: INFO: Got endpoints: latency-svc-xnl6w [751.067386ms] +Jun 18 11:39:18.135: INFO: Created: latency-svc-xkfms +Jun 18 11:39:18.170: INFO: Got endpoints: latency-svc-9mbxn [749.377869ms] +Jun 18 11:39:18.184: INFO: Created: latency-svc-xbd5r +Jun 18 11:39:18.220: INFO: Got endpoints: latency-svc-sskk8 [749.922595ms] +Jun 18 11:39:18.233: INFO: Created: latency-svc-rqhrs +Jun 18 11:39:18.270: INFO: Got endpoints: latency-svc-kk8bl [743.097601ms] +Jun 18 11:39:18.282: INFO: Created: latency-svc-8cj45 +Jun 18 11:39:18.320: INFO: Got endpoints: latency-svc-4kn4b [749.022833ms] +Jun 18 11:39:18.333: INFO: Created: latency-svc-lqpdv +Jun 18 11:39:18.370: INFO: Got endpoints: latency-svc-rf29c [749.096434ms] +Jun 18 11:39:18.382: INFO: Created: latency-svc-t58zl +Jun 18 11:39:18.420: INFO: Got endpoints: latency-svc-dlv9t [750.269508ms] +Jun 18 11:39:18.431: INFO: Created: latency-svc-22r5l +Jun 18 11:39:18.470: INFO: Got endpoints: latency-svc-tv4vg [749.941746ms] +Jun 18 11:39:18.482: INFO: Created: latency-svc-hq8ns +Jun 18 11:39:18.520: INFO: Got endpoints: latency-svc-8rv94 [750.130047ms] +Jun 18 11:39:18.532: INFO: Created: latency-svc-hzhbd +Jun 18 11:39:18.570: INFO: Got endpoints: latency-svc-s2n7f [750.832793ms] +Jun 18 11:39:18.582: INFO: Created: latency-svc-hs2l9 +Jun 18 11:39:18.620: INFO: Got endpoints: latency-svc-g2hjs [749.644917ms] +Jun 18 11:39:18.631: INFO: Created: latency-svc-jv8jz +Jun 18 11:39:18.670: INFO: Got endpoints: latency-svc-dznkj [750.045901ms] +Jun 18 11:39:18.682: 
INFO: Created: latency-svc-qs6dv +Jun 18 11:39:18.721: INFO: Got endpoints: latency-svc-t6n6h [745.939288ms] +Jun 18 11:39:18.732: INFO: Created: latency-svc-9d6g2 +Jun 18 11:39:18.771: INFO: Got endpoints: latency-svc-l24bs [750.33174ms] +Jun 18 11:39:18.782: INFO: Created: latency-svc-dxgxk +Jun 18 11:39:18.822: INFO: Got endpoints: latency-svc-tmnd4 [751.454944ms] +Jun 18 11:39:18.833: INFO: Created: latency-svc-cskgf +Jun 18 11:39:18.871: INFO: Got endpoints: latency-svc-xkfms [748.901199ms] +Jun 18 11:39:18.883: INFO: Created: latency-svc-tv57b +Jun 18 11:39:18.920: INFO: Got endpoints: latency-svc-xbd5r [750.145ms] +Jun 18 11:39:18.932: INFO: Created: latency-svc-bggmt +Jun 18 11:39:18.970: INFO: Got endpoints: latency-svc-rqhrs [750.228102ms] +Jun 18 11:39:18.982: INFO: Created: latency-svc-gg8x9 +Jun 18 11:39:19.020: INFO: Got endpoints: latency-svc-8cj45 [749.739167ms] +Jun 18 11:39:19.031: INFO: Created: latency-svc-55bqn +Jun 18 11:39:19.073: INFO: Got endpoints: latency-svc-lqpdv [752.732578ms] +Jun 18 11:39:19.084: INFO: Created: latency-svc-znmgg +Jun 18 11:39:19.120: INFO: Got endpoints: latency-svc-t58zl [749.855771ms] +Jun 18 11:39:19.132: INFO: Created: latency-svc-mgbdf +Jun 18 11:39:19.170: INFO: Got endpoints: latency-svc-22r5l [749.375133ms] +Jun 18 11:39:19.181: INFO: Created: latency-svc-mw8bn +Jun 18 11:39:19.221: INFO: Got endpoints: latency-svc-hq8ns [750.478439ms] +Jun 18 11:39:19.233: INFO: Created: latency-svc-2j5nw +Jun 18 11:39:19.270: INFO: Got endpoints: latency-svc-hzhbd [749.577577ms] +Jun 18 11:39:19.280: INFO: Created: latency-svc-qkmr6 +Jun 18 11:39:19.320: INFO: Got endpoints: latency-svc-hs2l9 [750.000896ms] +Jun 18 11:39:19.333: INFO: Created: latency-svc-dqdfp +Jun 18 11:39:19.371: INFO: Got endpoints: latency-svc-jv8jz [750.912196ms] +Jun 18 11:39:19.382: INFO: Created: latency-svc-djh8x +Jun 18 11:39:19.420: INFO: Got endpoints: latency-svc-qs6dv [749.728267ms] +Jun 18 11:39:19.436: INFO: Created: latency-svc-hzmj5 +Jun 18 11:39:19.470: INFO: Got endpoints: latency-svc-9d6g2 [749.09415ms] +Jun 18 11:39:19.481: INFO: Created: latency-svc-j9j87 +Jun 18 11:39:19.521: INFO: Got endpoints: latency-svc-dxgxk [749.760443ms] +Jun 18 11:39:19.531: INFO: Created: latency-svc-tsbks +Jun 18 11:39:19.570: INFO: Got endpoints: latency-svc-cskgf [748.33766ms] +Jun 18 11:39:19.582: INFO: Created: latency-svc-m77jr +Jun 18 11:39:19.620: INFO: Got endpoints: latency-svc-tv57b [748.82781ms] +Jun 18 11:39:19.631: INFO: Created: latency-svc-f5z7k +Jun 18 11:39:19.671: INFO: Got endpoints: latency-svc-bggmt [750.569621ms] +Jun 18 11:39:19.682: INFO: Created: latency-svc-gtw8z +Jun 18 11:39:19.720: INFO: Got endpoints: latency-svc-gg8x9 [749.489488ms] +Jun 18 11:39:19.731: INFO: Created: latency-svc-gxzl5 +Jun 18 11:39:19.770: INFO: Got endpoints: latency-svc-55bqn [750.218795ms] +Jun 18 11:39:19.782: INFO: Created: latency-svc-r8kjl +Jun 18 11:39:19.820: INFO: Got endpoints: latency-svc-znmgg [747.099881ms] +Jun 18 11:39:19.831: INFO: Created: latency-svc-llhtc +Jun 18 11:39:19.870: INFO: Got endpoints: latency-svc-mgbdf [750.365272ms] +Jun 18 11:39:19.881: INFO: Created: latency-svc-p97wr +Jun 18 11:39:19.920: INFO: Got endpoints: latency-svc-mw8bn [750.581164ms] +Jun 18 11:39:19.933: INFO: Created: latency-svc-hwhp6 +Jun 18 11:39:19.970: INFO: Got endpoints: latency-svc-2j5nw [749.367067ms] +Jun 18 11:39:19.983: INFO: Created: latency-svc-jfntr +Jun 18 11:39:20.020: INFO: Got endpoints: latency-svc-qkmr6 [749.98122ms] +Jun 18 11:39:20.032: INFO: Created: 
latency-svc-t8pks +Jun 18 11:39:20.070: INFO: Got endpoints: latency-svc-dqdfp [749.437689ms] +Jun 18 11:39:20.081: INFO: Created: latency-svc-nzfxf +Jun 18 11:39:20.121: INFO: Got endpoints: latency-svc-djh8x [749.732724ms] +Jun 18 11:39:20.132: INFO: Created: latency-svc-n9gn7 +Jun 18 11:39:20.174: INFO: Got endpoints: latency-svc-hzmj5 [753.773923ms] +Jun 18 11:39:20.185: INFO: Created: latency-svc-hxslb +Jun 18 11:39:20.221: INFO: Got endpoints: latency-svc-j9j87 [750.612061ms] +Jun 18 11:39:20.232: INFO: Created: latency-svc-kzh58 +Jun 18 11:39:20.270: INFO: Got endpoints: latency-svc-tsbks [749.415297ms] +Jun 18 11:39:20.281: INFO: Created: latency-svc-j7mmq +Jun 18 11:39:20.320: INFO: Got endpoints: latency-svc-m77jr [750.271775ms] +Jun 18 11:39:20.332: INFO: Created: latency-svc-qpl6h +Jun 18 11:39:20.371: INFO: Got endpoints: latency-svc-f5z7k [750.765579ms] +Jun 18 11:39:20.382: INFO: Created: latency-svc-97s57 +Jun 18 11:39:20.420: INFO: Got endpoints: latency-svc-gtw8z [748.861229ms] +Jun 18 11:39:20.432: INFO: Created: latency-svc-svhnq +Jun 18 11:39:20.470: INFO: Got endpoints: latency-svc-gxzl5 [750.216852ms] +Jun 18 11:39:20.482: INFO: Created: latency-svc-g7dd2 +Jun 18 11:39:20.521: INFO: Got endpoints: latency-svc-r8kjl [750.311994ms] +Jun 18 11:39:20.532: INFO: Created: latency-svc-h6cm7 +Jun 18 11:39:20.571: INFO: Got endpoints: latency-svc-llhtc [750.663254ms] +Jun 18 11:39:20.583: INFO: Created: latency-svc-2jkz7 +Jun 18 11:39:20.620: INFO: Got endpoints: latency-svc-p97wr [750.092059ms] +Jun 18 11:39:20.632: INFO: Created: latency-svc-8clvx +Jun 18 11:39:20.671: INFO: Got endpoints: latency-svc-hwhp6 [750.875659ms] +Jun 18 11:39:20.683: INFO: Created: latency-svc-rxz5b +Jun 18 11:39:20.721: INFO: Got endpoints: latency-svc-jfntr [750.647256ms] +Jun 18 11:39:20.732: INFO: Created: latency-svc-jsh6d +Jun 18 11:39:20.772: INFO: Got endpoints: latency-svc-t8pks [752.095634ms] +Jun 18 11:39:20.783: INFO: Created: latency-svc-lhzqs +Jun 18 11:39:20.821: INFO: Got endpoints: latency-svc-nzfxf [750.828467ms] +Jun 18 11:39:20.832: INFO: Created: latency-svc-xjfh2 +Jun 18 11:39:20.870: INFO: Got endpoints: latency-svc-n9gn7 [749.337527ms] +Jun 18 11:39:20.883: INFO: Created: latency-svc-zz6wp +Jun 18 11:39:20.921: INFO: Got endpoints: latency-svc-hxslb [747.224172ms] +Jun 18 11:39:20.933: INFO: Created: latency-svc-hmrwv +Jun 18 11:39:20.980: INFO: Got endpoints: latency-svc-kzh58 [759.607897ms] +Jun 18 11:39:20.992: INFO: Created: latency-svc-jbb4g +Jun 18 11:39:21.020: INFO: Got endpoints: latency-svc-j7mmq [750.173665ms] +Jun 18 11:39:21.032: INFO: Created: latency-svc-kcp8t +Jun 18 11:39:21.071: INFO: Got endpoints: latency-svc-qpl6h [750.068909ms] +Jun 18 11:39:21.084: INFO: Created: latency-svc-n557z +Jun 18 11:39:21.124: INFO: Got endpoints: latency-svc-97s57 [752.858785ms] +Jun 18 11:39:21.158: INFO: Created: latency-svc-mfdn4 +Jun 18 11:39:21.170: INFO: Got endpoints: latency-svc-svhnq [750.564548ms] +Jun 18 11:39:21.182: INFO: Created: latency-svc-ldr2v +Jun 18 11:39:21.220: INFO: Got endpoints: latency-svc-g7dd2 [750.100179ms] +Jun 18 11:39:21.233: INFO: Created: latency-svc-bxxqx +Jun 18 11:39:21.271: INFO: Got endpoints: latency-svc-h6cm7 [750.31208ms] +Jun 18 11:39:21.282: INFO: Created: latency-svc-v8cjf +Jun 18 11:39:21.322: INFO: Got endpoints: latency-svc-2jkz7 [751.062594ms] +Jun 18 11:39:21.334: INFO: Created: latency-svc-r66kj +Jun 18 11:39:21.370: INFO: Got endpoints: latency-svc-8clvx [749.616696ms] +Jun 18 11:39:21.382: INFO: Created: latency-svc-hjr7q 
+Jun 18 11:39:21.422: INFO: Got endpoints: latency-svc-rxz5b [750.452038ms] +Jun 18 11:39:21.433: INFO: Created: latency-svc-vxcv7 +Jun 18 11:39:21.470: INFO: Got endpoints: latency-svc-jsh6d [749.19897ms] +Jun 18 11:39:21.483: INFO: Created: latency-svc-cgdjl +Jun 18 11:39:21.520: INFO: Got endpoints: latency-svc-lhzqs [747.852674ms] +Jun 18 11:39:21.571: INFO: Got endpoints: latency-svc-xjfh2 [749.840585ms] +Jun 18 11:39:21.620: INFO: Got endpoints: latency-svc-zz6wp [750.151138ms] +Jun 18 11:39:21.671: INFO: Got endpoints: latency-svc-hmrwv [749.86209ms] +Jun 18 11:39:21.720: INFO: Got endpoints: latency-svc-jbb4g [739.288746ms] +Jun 18 11:39:21.770: INFO: Got endpoints: latency-svc-kcp8t [750.133917ms] +Jun 18 11:39:21.821: INFO: Got endpoints: latency-svc-n557z [749.927914ms] +Jun 18 11:39:21.877: INFO: Got endpoints: latency-svc-mfdn4 [753.58312ms] +Jun 18 11:39:21.920: INFO: Got endpoints: latency-svc-ldr2v [749.761395ms] +Jun 18 11:39:21.970: INFO: Got endpoints: latency-svc-bxxqx [749.318956ms] +Jun 18 11:39:22.020: INFO: Got endpoints: latency-svc-v8cjf [748.855833ms] +Jun 18 11:39:22.070: INFO: Got endpoints: latency-svc-r66kj [748.067589ms] +Jun 18 11:39:22.120: INFO: Got endpoints: latency-svc-hjr7q [749.99761ms] +Jun 18 11:39:22.171: INFO: Got endpoints: latency-svc-vxcv7 [749.660246ms] +Jun 18 11:39:22.222: INFO: Got endpoints: latency-svc-cgdjl [751.371739ms] +Jun 18 11:39:22.222: INFO: Latencies: [20.153492ms 31.693499ms 40.551205ms 47.743603ms 57.104075ms 65.871808ms 80.250755ms 81.926439ms 91.495549ms 96.128167ms 104.559556ms 114.544703ms 116.902633ms 122.199254ms 124.152089ms 124.519238ms 124.519319ms 125.097189ms 125.642382ms 125.734734ms 127.0236ms 127.976608ms 128.87845ms 133.5875ms 133.800867ms 133.919699ms 134.603411ms 136.643247ms 141.307235ms 208.514592ms 210.312256ms 210.48311ms 210.815831ms 211.141468ms 211.178623ms 211.316075ms 212.014585ms 243.850053ms 283.95975ms 319.180469ms 361.248325ms 402.928562ms 443.465457ms 452.349486ms 479.829646ms 494.172974ms 531.870947ms 573.349792ms 615.593247ms 656.942142ms 699.38579ms 739.288746ms 740.674709ms 740.783781ms 743.097601ms 745.939288ms 745.977828ms 746.382524ms 747.099881ms 747.151276ms 747.224172ms 747.852674ms 748.067589ms 748.080629ms 748.164662ms 748.23139ms 748.33766ms 748.448308ms 748.517726ms 748.595259ms 748.716382ms 748.82781ms 748.855833ms 748.861229ms 748.901199ms 749.022833ms 749.09415ms 749.096434ms 749.174367ms 749.19897ms 749.248625ms 749.318956ms 749.337527ms 749.367067ms 749.375133ms 749.377869ms 749.393503ms 749.403413ms 749.409935ms 749.415297ms 749.437689ms 749.458664ms 749.465911ms 749.489488ms 749.498747ms 749.505343ms 749.528705ms 749.556466ms 749.571272ms 749.577577ms 749.616696ms 749.618473ms 749.631524ms 749.644917ms 749.660246ms 749.728267ms 749.732724ms 749.738548ms 749.739167ms 749.754437ms 749.760443ms 749.761395ms 749.791899ms 749.82352ms 749.840585ms 749.855771ms 749.86209ms 749.873224ms 749.892941ms 749.912109ms 749.922595ms 749.927914ms 749.936358ms 749.941746ms 749.957871ms 749.98122ms 749.99761ms 750.000896ms 750.041312ms 750.045901ms 750.068909ms 750.092059ms 750.100179ms 750.127936ms 750.130047ms 750.133917ms 750.145ms 750.151138ms 750.173665ms 750.216852ms 750.218795ms 750.228102ms 750.237924ms 750.251511ms 750.265387ms 750.269272ms 750.269508ms 750.271775ms 750.311994ms 750.31208ms 750.33174ms 750.365272ms 750.378681ms 750.380087ms 750.386184ms 750.442049ms 750.452038ms 750.460726ms 750.464628ms 750.478439ms 750.503928ms 750.514277ms 750.525242ms 750.564548ms 750.569621ms 
750.581164ms 750.595385ms 750.607985ms 750.612061ms 750.647256ms 750.663254ms 750.666529ms 750.693945ms 750.726785ms 750.765579ms 750.828467ms 750.832793ms 750.875659ms 750.912196ms 750.957486ms 751.062594ms 751.067386ms 751.171266ms 751.288535ms 751.371739ms 751.454944ms 751.623729ms 751.81716ms 752.095634ms 752.240169ms 752.713478ms 752.732578ms 752.858785ms 753.58312ms 753.773923ms 753.897496ms 755.044415ms 756.986266ms 758.988175ms 759.607897ms] +Jun 18 11:39:22.222: INFO: 50 %ile: 749.616696ms +Jun 18 11:39:22.222: INFO: 90 %ile: 751.062594ms +Jun 18 11:39:22.222: INFO: 99 %ile: 758.988175ms +Jun 18 11:39:22.222: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:39:22.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svc-latency-2247" for this suite. +Jun 18 11:39:36.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:39:36.356: INFO: namespace svc-latency-2247 deletion completed in 14.129931319s + +• [SLOW TEST:24.898 seconds] +[sig-network] Service endpoints latency +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should not be very high [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[k8s.io] Docker Containers + should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:39:36.356: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test override command +Jun 18 11:39:36.404: INFO: Waiting up to 5m0s for pod "client-containers-bfc862f4-91bd-11e9-8aef-6ab77b36fff7" in namespace "containers-2772" to be "success or failure" +Jun 18 11:39:36.407: INFO: Pod "client-containers-bfc862f4-91bd-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.964905ms +Jun 18 11:39:38.411: INFO: Pod "client-containers-bfc862f4-91bd-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006976548s +STEP: Saw pod success +Jun 18 11:39:38.411: INFO: Pod "client-containers-bfc862f4-91bd-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:39:38.414: INFO: Trying to get logs from node ip-172-26-16-178 pod client-containers-bfc862f4-91bd-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 11:39:38.435: INFO: Waiting for pod client-containers-bfc862f4-91bd-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:39:38.437: INFO: Pod client-containers-bfc862f4-91bd-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:39:38.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-2772" for this suite. +Jun 18 11:39:44.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:39:44.560: INFO: namespace containers-2772 deletion completed in 6.119390931s + +• [SLOW TEST:8.204 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-cli] Kubectl client [k8s.io] Proxy server + should support --unix-socket=/path [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:39:44.560: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should support --unix-socket=/path [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Starting the proxy +Jun 18 11:39:44.596: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-675335780 proxy --unix-socket=/tmp/kubectl-proxy-unix476695155/test' +STEP: retrieving proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:39:44.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2816" for this suite. 
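The proxy test above drives `kubectl proxy --unix-socket` through the harness. A minimal way to reproduce the same check by hand, assuming a local kubeconfig and a scratch socket path of your choosing:

```sh
# Serve the API over a Unix socket instead of TCP, then query it with curl.
$ kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
$ curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
```

The check passes when the `/api/` listing comes back over the socket, mirroring the "retrieving proxy /api/ output" step.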
+Jun 18 11:39:50.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:39:50.776: INFO: namespace kubectl-2816 deletion completed in 6.12491577s + +• [SLOW TEST:6.216 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Proxy server + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should support --unix-socket=/path [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl patch + should add annotations for pods in rc [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:39:50.777: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should add annotations for pods in rc [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating Redis RC +Jun 18 11:39:50.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 create -f - --namespace=kubectl-4811' +Jun 18 11:39:51.008: INFO: stderr: "" +Jun 18 11:39:51.008: INFO: stdout: "replicationcontroller/redis-master created\n" +STEP: Waiting for Redis master to start. +Jun 18 11:39:52.012: INFO: Selector matched 1 pods for map[app:redis] +Jun 18 11:39:52.012: INFO: Found 0 / 1 +Jun 18 11:39:53.012: INFO: Selector matched 1 pods for map[app:redis] +Jun 18 11:39:53.012: INFO: Found 1 / 1 +Jun 18 11:39:53.012: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods +Jun 18 11:39:53.015: INFO: Selector matched 1 pods for map[app:redis] +Jun 18 11:39:53.015: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Jun 18 11:39:53.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 patch pod redis-master-wcq2b --namespace=kubectl-4811 -p {"metadata":{"annotations":{"x":"y"}}}' +Jun 18 11:39:53.083: INFO: stderr: "" +Jun 18 11:39:53.083: INFO: stdout: "pod/redis-master-wcq2b patched\n" +STEP: checking annotations +Jun 18 11:39:53.087: INFO: Selector matched 1 pods for map[app:redis] +Jun 18 11:39:53.087: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
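The patch command recorded above can be replayed directly; the pod name is whatever name the `redis-master` rc generated in your run:

```sh
# Annotate the rc-managed pod, then read the annotation back.
$ kubectl patch pod redis-master-wcq2b --namespace=kubectl-4811 \
    -p '{"metadata":{"annotations":{"x":"y"}}}'
$ kubectl get pod redis-master-wcq2b --namespace=kubectl-4811 \
    -o jsonpath='{.metadata.annotations.x}'
```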
+[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:39:53.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4811" for this suite. +Jun 18 11:40:15.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:40:15.218: INFO: namespace kubectl-4811 deletion completed in 22.127173319s + +• [SLOW TEST:24.441 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl patch + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should add annotations for pods in rc [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:40:15.218: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace statefulset-3330 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a new StatefulSet +Jun 18 11:40:15.270: INFO: Found 0 stateful pods, waiting for 3 +Jun 18 11:40:25.276: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Jun 18 11:40:25.276: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Jun 18 11:40:25.276: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Jun 18 11:40:25.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-3330 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 18 11:40:25.504: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 18 11:40:25.504: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 18 11:40:25.504: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || 
true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine +Jun 18 11:40:35.540: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Updating Pods in reverse ordinal order +Jun 18 11:40:45.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-3330 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 18 11:40:45.769: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" +Jun 18 11:40:45.769: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 18 11:40:45.769: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +STEP: Rolling back to a previous revision +Jun 18 11:41:05.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-3330 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 18 11:41:05.997: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 18 11:41:05.997: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 18 11:41:05.997: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 18 11:41:16.030: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order +Jun 18 11:41:26.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-3330 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 18 11:41:26.249: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" +Jun 18 11:41:26.249: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 18 11:41:26.249: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 18 11:41:46.269: INFO: Waiting for StatefulSet statefulset-3330/ss2 to complete update +Jun 18 11:41:46.269: INFO: Waiting for Pod statefulset-3330/ss2-0 to have revision ss2-787997d666 update revision ss2-c79899b9 +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Jun 18 11:41:56.277: INFO: Deleting all statefulset in ns statefulset-3330 +Jun 18 11:41:56.280: INFO: Scaling statefulset ss2 to 0 +Jun 18 11:42:26.295: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 18 11:42:26.299: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:42:26.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-3330" for this suite. 
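Outside the harness, the same template update and rollback can be sketched with `kubectl` alone. The container name `nginx` is an assumption here; the test itself modifies the template through the API rather than `kubectl set image`:

```sh
# Update the pod template (creates a new revision), wait, then roll back.
$ kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine \
    --namespace=statefulset-3330
$ kubectl rollout status statefulset/ss2 --namespace=statefulset-3330
$ kubectl rollout undo statefulset/ss2 --namespace=statefulset-3330
```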
+Jun 18 11:42:32.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:42:32.444: INFO: namespace statefulset-3330 deletion completed in 6.124147928s + +• [SLOW TEST:137.226 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:42:32.444: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0644 on node default medium +Jun 18 11:42:32.493: INFO: Waiting up to 5m0s for pod "pod-28bd8868-91be-11e9-8aef-6ab77b36fff7" in namespace "emptydir-2900" to be "success or failure" +Jun 18 11:42:32.496: INFO: Pod "pod-28bd8868-91be-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.913727ms +Jun 18 11:42:34.499: INFO: Pod "pod-28bd8868-91be-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006792131s +STEP: Saw pod success +Jun 18 11:42:34.499: INFO: Pod "pod-28bd8868-91be-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:42:34.502: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-28bd8868-91be-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 11:42:34.522: INFO: Waiting for pod pod-28bd8868-91be-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:42:34.524: INFO: Pod pod-28bd8868-91be-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:42:34.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2900" for this suite. 
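For reference, a minimal pod of the shape these emptyDir tests exercise; the pod name and command are illustrative, not taken from the run:

```sh
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -ln /mnt"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}          # the tmpfs variants set "medium: Memory" here
EOF
```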
+Jun 18 11:42:40.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:42:40.661: INFO: namespace emptydir-2900 deletion completed in 6.133093895s + +• [SLOW TEST:8.217 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:42:40.661: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 +[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +Jun 18 11:42:40.698: INFO: PodSpec: initContainers in spec.initContainers +Jun 18 11:43:25.106: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-2da31d59-91be-11e9-8aef-6ab77b36fff7", GenerateName:"", Namespace:"init-container-6540", SelfLink:"/api/v1/namespaces/init-container-6540/pods/pod-init-2da31d59-91be-11e9-8aef-6ab77b36fff7", UID:"2da3bd9e-91be-11e9-8d87-0a902858a792", ResourceVersion:"45295", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63696454960, loc:(*time.Location)(0x8a1a0e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"698920001"}, Annotations:map[string]string{"cni.projectcalico.org/podIP":"10.42.0.218/32"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-qd98r", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001c60000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), 
RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qd98r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qd98r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qd98r", 
ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0026a9d68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-172-26-16-178", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0024d6120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0026a9f10)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0026a9f30)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0026a9f38), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0026a9f3c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696454960, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696454960, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696454960, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696454960, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.26.16.178", PodIP:"10.42.0.218", StartTime:(*v1.Time)(0xc0012300a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000c4d030)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000c4d0a0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", 
ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://1eb46567e3537fad9e0909ec5aaedfb39d38061ff2e68797cf18f280ff260d5c"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001230100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0012300e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:43:25.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-6540" for this suite. +Jun 18 11:43:47.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:43:47.240: INFO: namespace init-container-6540 deletion completed in 22.129061445s + +• [SLOW TEST:66.579 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:43:47.240: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 11:44:09.299: INFO: 
Container started at 2019-06-18 11:43:48 +0000 UTC, pod became ready at 2019-06-18 11:44:08 +0000 UTC +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:44:09.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-3919" for this suite. +Jun 18 11:44:31.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:44:31.426: INFO: namespace container-probe-3919 deletion completed in 22.122507098s + +• [SLOW TEST:44.186 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:44:31.426: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-6fa804a7-91be-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume configMaps +Jun 18 11:44:31.474: INFO: Waiting up to 5m0s for pod "pod-configmaps-6fa902d5-91be-11e9-8aef-6ab77b36fff7" in namespace "configmap-9699" to be "success or failure" +Jun 18 11:44:31.481: INFO: Pod "pod-configmaps-6fa902d5-91be-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.324196ms +Jun 18 11:44:33.485: INFO: Pod "pod-configmaps-6fa902d5-91be-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010400328s +STEP: Saw pod success +Jun 18 11:44:33.485: INFO: Pod "pod-configmaps-6fa902d5-91be-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:44:33.488: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-configmaps-6fa902d5-91be-11e9-8aef-6ab77b36fff7 container configmap-volume-test: +STEP: delete the pod +Jun 18 11:44:33.510: INFO: Waiting for pod pod-configmaps-6fa902d5-91be-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:44:33.513: INFO: Pod pod-configmaps-6fa902d5-91be-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:44:33.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9699" for this suite. +Jun 18 11:44:39.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:44:39.637: INFO: namespace configmap-9699 deletion completed in 6.120554482s + +• [SLOW TEST:8.211 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:44:39.638: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Jun 18 11:44:39.743: INFO: Waiting up to 5m0s for pod "pod-74964cc3-91be-11e9-8aef-6ab77b36fff7" in namespace "emptydir-4196" to be "success or failure" +Jun 18 11:44:39.746: INFO: Pod "pod-74964cc3-91be-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.198321ms +Jun 18 11:44:41.751: INFO: Pod "pod-74964cc3-91be-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007545508s +Jun 18 11:44:43.755: INFO: Pod "pod-74964cc3-91be-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011664945s +STEP: Saw pod success +Jun 18 11:44:43.755: INFO: Pod "pod-74964cc3-91be-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:44:43.758: INFO: Trying to get logs from node ip-172-26-16-178 pod pod-74964cc3-91be-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 11:44:43.778: INFO: Waiting for pod pod-74964cc3-91be-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:44:43.782: INFO: Pod pod-74964cc3-91be-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:44:43.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4196" for this suite. +Jun 18 11:44:49.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:44:49.909: INFO: namespace emptydir-4196 deletion completed in 6.122653951s + +• [SLOW TEST:10.272 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] KubeletManagedEtcHosts + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:44:49.910: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Setting up the test +STEP: Creating hostNetwork=false pod +STEP: Creating hostNetwork=true pod +STEP: Running the test +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false +Jun 18 11:44:55.983: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5671 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:44:55.983: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:44:56.123: INFO: Exec stderr: "" +Jun 18 11:44:56.123: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5671 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:44:56.123: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:44:56.290: INFO: Exec stderr: "" +Jun 18 11:44:56.291: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-5671 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:44:56.291: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:44:56.442: INFO: Exec stderr: "" +Jun 18 11:44:56.442: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5671 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:44:56.442: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:44:56.591: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount +Jun 18 11:44:56.591: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5671 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:44:56.591: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:44:56.733: INFO: Exec stderr: "" +Jun 18 11:44:56.733: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5671 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:44:56.733: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:44:56.907: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true +Jun 18 11:44:56.907: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5671 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:44:56.907: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:44:57.053: INFO: Exec stderr: "" +Jun 18 11:44:57.053: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5671 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:44:57.053: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:44:57.203: INFO: Exec stderr: "" +Jun 18 11:44:57.203: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5671 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:44:57.203: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:44:57.345: INFO: Exec stderr: "" +Jun 18 11:44:57.345: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5671 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 18 11:44:57.345: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +Jun 18 11:44:57.493: INFO: Exec stderr: "" +[AfterEach] [k8s.io] KubeletManagedEtcHosts + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:44:57.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-kubelet-etc-hosts-5671" for this suite. 
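Each `ExecWithOptions` entry above corresponds to a plain `kubectl exec`; with the test pod still running, the same comparison can be made by hand:

```sh
# The kubelet-managed file versus the image's original copy.
$ kubectl exec test-pod -n e2e-kubelet-etc-hosts-5671 -c busybox-1 -- cat /etc/hosts
$ kubectl exec test-pod -n e2e-kubelet-etc-hosts-5671 -c busybox-1 -- cat /etc/hosts-original
```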
+Jun 18 11:45:39.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:45:39.621: INFO: namespace e2e-kubelet-etc-hosts-5671 deletion completed in 42.123260989s + +• [SLOW TEST:49.711 seconds] +[k8s.io] KubeletManagedEtcHosts +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:45:39.621: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan pods created by rc if delete options say so [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods +STEP: Gathering metrics +W0618 11:46:19.686706 14 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. +Jun 18 11:46:19.686: INFO: For apiserver_request_total: +For apiserver_request_latencies_summary: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:46:19.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-1789" for this suite. 
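The orphaning behaviour this test asserts maps onto the v1.14-era `--cascade=false` flag (`my-rc` is a hypothetical name):

```sh
# Delete only the rc; the garbage collector leaves its pods running.
$ kubectl delete rc my-rc --cascade=false
$ kubectl get pods        # the orphaned pods are still listed
```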
+Jun 18 11:46:25.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:46:25.809: INFO: namespace gc-1789 deletion completed in 6.119870929s + +• [SLOW TEST:46.189 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should orphan pods created by rc if delete options say so [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:46:25.810: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Jun 18 11:46:25.858: INFO: Waiting up to 5m0s for pod "pod-b3d60e53-91be-11e9-8aef-6ab77b36fff7" in namespace "emptydir-5410" to be "success or failure" +Jun 18 11:46:25.862: INFO: Pod "pod-b3d60e53-91be-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.794076ms +Jun 18 11:46:27.878: INFO: Pod "pod-b3d60e53-91be-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019603189s +STEP: Saw pod success +Jun 18 11:46:27.878: INFO: Pod "pod-b3d60e53-91be-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:46:27.881: INFO: Trying to get logs from node ip-172-26-30-38 pod pod-b3d60e53-91be-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 11:46:27.899: INFO: Waiting for pod pod-b3d60e53-91be-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:46:27.906: INFO: Pod pod-b3d60e53-91be-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:46:27.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5410" for this suite. 
+Jun 18 11:46:33.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:46:34.035: INFO: namespace emptydir-5410 deletion completed in 6.12601314s + +• [SLOW TEST:8.225 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:46:34.035: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name projected-configmap-test-volume-map-b8bd3db0-91be-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume configMaps +Jun 18 11:46:34.090: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b8be5168-91be-11e9-8aef-6ab77b36fff7" in namespace "projected-5872" to be "success or failure" +Jun 18 11:46:34.095: INFO: Pod "pod-projected-configmaps-b8be5168-91be-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.540288ms +Jun 18 11:46:36.099: INFO: Pod "pod-projected-configmaps-b8be5168-91be-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00969648s +STEP: Saw pod success +Jun 18 11:46:36.099: INFO: Pod "pod-projected-configmaps-b8be5168-91be-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:46:36.102: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-projected-configmaps-b8be5168-91be-11e9-8aef-6ab77b36fff7 container projected-configmap-volume-test: +STEP: delete the pod +Jun 18 11:46:36.122: INFO: Waiting for pod pod-projected-configmaps-b8be5168-91be-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:46:36.125: INFO: Pod pod-projected-configmaps-b8be5168-91be-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:46:36.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5872" for this suite. 
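A hand-rolled equivalent of the projected-configmap-with-mappings pod above, with hypothetical names (`demo-cm`, `projected-cm-demo`) standing in for the generated ones:

```sh
$ kubectl create configmap demo-cm --from-literal=data-1=value-1
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root, as the test requires
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: demo-cm
          items:
          - key: data-1          # the "mapping": remap the key to a nested path
            path: path/to/data-1
EOF
```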
+Jun 18 11:46:42.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:46:42.258: INFO: namespace projected-5872 deletion completed in 6.129237539s + +• [SLOW TEST:8.222 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:46:42.258: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-upd-bda52651-91be-11e9-8aef-6ab77b36fff7 +STEP: Creating the pod +STEP: Waiting for pod with text data +STEP: Waiting for pod with binary data +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:46:46.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6587" for this suite. 
+Jun 18 11:47:08.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:47:08.496: INFO: namespace configmap-6587 deletion completed in 22.137080569s + +• [SLOW TEST:26.238 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:47:08.496: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: Gathering metrics +W0618 11:47:14.567371 14 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. +Jun 18 11:47:14.567: INFO: For apiserver_request_total: +For apiserver_request_latencies_summary: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:47:14.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-6628" for this suite. 
+Jun 18 11:47:20.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:47:20.692: INFO: namespace gc-6628 deletion completed in 6.122040859s + +• [SLOW TEST:12.197 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:47:20.693: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 +Jun 18 11:47:20.728: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jun 18 11:47:20.735: INFO: Waiting for terminating namespaces to be deleted... 
+Jun 18 11:47:20.738: INFO: +Logging pods the kubelet thinks is on node ip-172-26-16-178 before test +Jun 18 11:47:20.744: INFO: rke-ingress-controller-deploy-job-697mh from kube-system started at 2019-06-18 08:30:32 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.744: INFO: Container rke-ingress-controller-pod ready: false, restart count 0 +Jun 18 11:47:20.744: INFO: cattle-node-agent-pk4wc from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.744: INFO: Container agent ready: true, restart count 0 +Jun 18 11:47:20.744: INFO: sonobuoy-e2e-job-10fdfd8dfec5439f from heptio-sonobuoy started at 2019-06-18 10:27:18 +0000 UTC (2 container statuses recorded) +Jun 18 11:47:20.744: INFO: Container e2e ready: true, restart count 0 +Jun 18 11:47:20.744: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 18 11:47:20.744: INFO: rke-coredns-addon-deploy-job-4b9ct from kube-system started at 2019-06-18 08:30:22 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.744: INFO: Container rke-coredns-addon-pod ready: false, restart count 0 +Jun 18 11:47:20.744: INFO: rke-metrics-addon-deploy-job-f4q28 from kube-system started at 2019-06-18 08:30:27 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.744: INFO: Container rke-metrics-addon-pod ready: false, restart count 0 +Jun 18 11:47:20.744: INFO: coredns-autoscaler-5d5d49b8ff-7v6zn from kube-system started at 2019-06-18 08:30:29 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.744: INFO: Container autoscaler ready: true, restart count 0 +Jun 18 11:47:20.744: INFO: rke-network-plugin-deploy-job-c76n7 from kube-system started at 2019-06-18 08:30:17 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.744: INFO: Container rke-network-plugin-pod ready: false, restart count 0 +Jun 18 11:47:20.744: INFO: canal-kwvpm from kube-system started at 2019-06-18 08:30:20 +0000 UTC (2 container statuses recorded) +Jun 18 11:47:20.744: INFO: Container calico-node ready: true, restart count 0 +Jun 18 11:47:20.744: INFO: Container kube-flannel ready: true, restart count 0 +Jun 18 11:47:20.744: INFO: coredns-86bc4b7c96-vms9l from kube-system started at 2019-06-18 08:30:28 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.744: INFO: Container coredns ready: true, restart count 0 +Jun 18 11:47:20.744: INFO: nginx-ingress-controller-x7drh from ingress-nginx started at 2019-06-18 08:30:34 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.744: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Jun 18 11:47:20.744: INFO: kube-api-auth-9nzcl from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.744: INFO: Container kube-api-auth ready: true, restart count 0 +Jun 18 11:47:20.744: INFO: sonobuoy-systemd-logs-daemon-set-29df6a374df24ffa-xvczp from heptio-sonobuoy started at 2019-06-18 10:27:18 +0000 UTC (2 container statuses recorded) +Jun 18 11:47:20.744: INFO: Container sonobuoy-worker ready: true, restart count 1 +Jun 18 11:47:20.744: INFO: Container systemd-logs ready: true, restart count 1 +Jun 18 11:47:20.744: INFO: +Logging pods the kubelet thinks is on node ip-172-26-17-1 before test +Jun 18 11:47:20.750: INFO: nginx-ingress-controller-98bp5 from ingress-nginx started at 2019-06-18 08:30:37 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.750: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Jun 18 11:47:20.750: INFO: sonobuoy from 
heptio-sonobuoy started at 2019-06-18 10:27:17 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.750: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jun 18 11:47:20.750: INFO: canal-9q452 from kube-system started at 2019-06-18 08:30:20 +0000 UTC (2 container statuses recorded) +Jun 18 11:47:20.750: INFO: Container calico-node ready: true, restart count 0 +Jun 18 11:47:20.750: INFO: Container kube-flannel ready: true, restart count 0 +Jun 18 11:47:20.750: INFO: kube-api-auth-6mld7 from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.750: INFO: Container kube-api-auth ready: true, restart count 0 +Jun 18 11:47:20.750: INFO: cattle-cluster-agent-6b589fd864-hhp9v from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.750: INFO: Container cluster-register ready: true, restart count 0 +Jun 18 11:47:20.750: INFO: cattle-node-agent-8k6f2 from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.750: INFO: Container agent ready: true, restart count 0 +Jun 18 11:47:20.750: INFO: sonobuoy-systemd-logs-daemon-set-29df6a374df24ffa-j5st8 from heptio-sonobuoy started at 2019-06-18 10:27:18 +0000 UTC (2 container statuses recorded) +Jun 18 11:47:20.750: INFO: Container sonobuoy-worker ready: true, restart count 1 +Jun 18 11:47:20.750: INFO: Container systemd-logs ready: true, restart count 1 +Jun 18 11:47:20.750: INFO: +Logging pods the kubelet thinks is on node ip-172-26-30-38 before test +Jun 18 11:47:20.755: INFO: default-http-backend-5954bd5d8c-t6btz from ingress-nginx started at 2019-06-18 08:30:34 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.755: INFO: Container default-http-backend ready: true, restart count 0 +Jun 18 11:47:20.755: INFO: cattle-node-agent-pk6fl from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.755: INFO: Container agent ready: true, restart count 0 +Jun 18 11:47:20.755: INFO: canal-wnpt7 from kube-system started at 2019-06-18 08:30:20 +0000 UTC (2 container statuses recorded) +Jun 18 11:47:20.755: INFO: Container calico-node ready: true, restart count 0 +Jun 18 11:47:20.755: INFO: Container kube-flannel ready: true, restart count 0 +Jun 18 11:47:20.755: INFO: metrics-server-7f6bd4c888-n4w2m from kube-system started at 2019-06-18 08:30:29 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.755: INFO: Container metrics-server ready: true, restart count 0 +Jun 18 11:47:20.755: INFO: nginx-ingress-controller-nqmmq from ingress-nginx started at 2019-06-18 08:30:34 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.755: INFO: Container nginx-ingress-controller ready: true, restart count 0 +Jun 18 11:47:20.755: INFO: kube-api-auth-j5sk5 from cattle-system started at 2019-06-18 08:30:47 +0000 UTC (1 container statuses recorded) +Jun 18 11:47:20.755: INFO: Container kube-api-auth ready: true, restart count 0 +Jun 18 11:47:20.755: INFO: sonobuoy-systemd-logs-daemon-set-29df6a374df24ffa-mmnqj from heptio-sonobuoy started at 2019-06-18 10:27:18 +0000 UTC (2 container statuses recorded) +Jun 18 11:47:20.755: INFO: Container sonobuoy-worker ready: true, restart count 1 +Jun 18 11:47:20.755: INFO: Container systemd-logs ready: true, restart count 1 +[It] validates resource limits of pods that are allowed to run [Conformance] + 
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: verifying the node has the label node ip-172-26-16-178 +STEP: verifying the node has the label node ip-172-26-17-1 +STEP: verifying the node has the label node ip-172-26-30-38 +Jun 18 11:47:20.801: INFO: Pod cattle-cluster-agent-6b589fd864-hhp9v requesting resource cpu=0m on Node ip-172-26-17-1 +Jun 18 11:47:20.801: INFO: Pod cattle-node-agent-8k6f2 requesting resource cpu=0m on Node ip-172-26-17-1 +Jun 18 11:47:20.801: INFO: Pod cattle-node-agent-pk4wc requesting resource cpu=0m on Node ip-172-26-16-178 +Jun 18 11:47:20.801: INFO: Pod cattle-node-agent-pk6fl requesting resource cpu=0m on Node ip-172-26-30-38 +Jun 18 11:47:20.801: INFO: Pod kube-api-auth-6mld7 requesting resource cpu=0m on Node ip-172-26-17-1 +Jun 18 11:47:20.801: INFO: Pod kube-api-auth-9nzcl requesting resource cpu=0m on Node ip-172-26-16-178 +Jun 18 11:47:20.801: INFO: Pod kube-api-auth-j5sk5 requesting resource cpu=0m on Node ip-172-26-30-38 +Jun 18 11:47:20.801: INFO: Pod sonobuoy requesting resource cpu=0m on Node ip-172-26-17-1 +Jun 18 11:47:20.801: INFO: Pod sonobuoy-e2e-job-10fdfd8dfec5439f requesting resource cpu=0m on Node ip-172-26-16-178 +Jun 18 11:47:20.801: INFO: Pod sonobuoy-systemd-logs-daemon-set-29df6a374df24ffa-j5st8 requesting resource cpu=0m on Node ip-172-26-17-1 +Jun 18 11:47:20.801: INFO: Pod sonobuoy-systemd-logs-daemon-set-29df6a374df24ffa-mmnqj requesting resource cpu=0m on Node ip-172-26-30-38 +Jun 18 11:47:20.801: INFO: Pod sonobuoy-systemd-logs-daemon-set-29df6a374df24ffa-xvczp requesting resource cpu=0m on Node ip-172-26-16-178 +Jun 18 11:47:20.801: INFO: Pod default-http-backend-5954bd5d8c-t6btz requesting resource cpu=10m on Node ip-172-26-30-38 +Jun 18 11:47:20.801: INFO: Pod nginx-ingress-controller-98bp5 requesting resource cpu=0m on Node ip-172-26-17-1 +Jun 18 11:47:20.801: INFO: Pod nginx-ingress-controller-nqmmq requesting resource cpu=0m on Node ip-172-26-30-38 +Jun 18 11:47:20.801: INFO: Pod nginx-ingress-controller-x7drh requesting resource cpu=0m on Node ip-172-26-16-178 +Jun 18 11:47:20.801: INFO: Pod canal-9q452 requesting resource cpu=250m on Node ip-172-26-17-1 +Jun 18 11:47:20.801: INFO: Pod canal-kwvpm requesting resource cpu=250m on Node ip-172-26-16-178 +Jun 18 11:47:20.801: INFO: Pod canal-wnpt7 requesting resource cpu=250m on Node ip-172-26-30-38 +Jun 18 11:47:20.801: INFO: Pod coredns-86bc4b7c96-vms9l requesting resource cpu=100m on Node ip-172-26-16-178 +Jun 18 11:47:20.801: INFO: Pod coredns-autoscaler-5d5d49b8ff-7v6zn requesting resource cpu=20m on Node ip-172-26-16-178 +Jun 18 11:47:20.801: INFO: Pod metrics-server-7f6bd4c888-n4w2m requesting resource cpu=0m on Node ip-172-26-30-38 +STEP: Starting Pods to consume most of the cluster CPU. +STEP: Creating another pod that requires unavailable amount of CPU. 
+STEP: Considering event: +Type = [Normal], Name = [filler-pod-d4975bba-91be-11e9-8aef-6ab77b36fff7.15a9487b6d6ee2bf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3817/filler-pod-d4975bba-91be-11e9-8aef-6ab77b36fff7 to ip-172-26-16-178] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-d4975bba-91be-11e9-8aef-6ab77b36fff7.15a9487b9dc166cf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-d4975bba-91be-11e9-8aef-6ab77b36fff7.15a9487ba2972608], Reason = [Created], Message = [Created container filler-pod-d4975bba-91be-11e9-8aef-6ab77b36fff7] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-d4975bba-91be-11e9-8aef-6ab77b36fff7.15a9487bb0298528], Reason = [Started], Message = [Started container filler-pod-d4975bba-91be-11e9-8aef-6ab77b36fff7] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-d498cdd4-91be-11e9-8aef-6ab77b36fff7.15a9487b6de5ba7d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3817/filler-pod-d498cdd4-91be-11e9-8aef-6ab77b36fff7 to ip-172-26-17-1] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-d498cdd4-91be-11e9-8aef-6ab77b36fff7.15a9487b9fb6c4a1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-d498cdd4-91be-11e9-8aef-6ab77b36fff7.15a9487ba4cdd41d], Reason = [Created], Message = [Created container filler-pod-d498cdd4-91be-11e9-8aef-6ab77b36fff7] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-d498cdd4-91be-11e9-8aef-6ab77b36fff7.15a9487bb24a2407], Reason = [Started], Message = [Started container filler-pod-d498cdd4-91be-11e9-8aef-6ab77b36fff7] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-d499ea6d-91be-11e9-8aef-6ab77b36fff7.15a9487b6e4b49ca], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3817/filler-pod-d499ea6d-91be-11e9-8aef-6ab77b36fff7 to ip-172-26-30-38] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-d499ea6d-91be-11e9-8aef-6ab77b36fff7.15a9487ba023285e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-d499ea6d-91be-11e9-8aef-6ab77b36fff7.15a9487ba4542ec1], Reason = [Created], Message = [Created container filler-pod-d499ea6d-91be-11e9-8aef-6ab77b36fff7] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-d499ea6d-91be-11e9-8aef-6ab77b36fff7.15a9487bb2008752], Reason = [Started], Message = [Started container filler-pod-d499ea6d-91be-11e9-8aef-6ab77b36fff7] +STEP: Considering event: +Type = [Warning], Name = [additional-pod.15a9487be6bd96b4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 Insufficient cpu.] 
+STEP: removing the label node off the node ip-172-26-16-178 +STEP: verifying the node doesn't have the label node +STEP: removing the label node off the node ip-172-26-17-1 +STEP: verifying the node doesn't have the label node +STEP: removing the label node off the node ip-172-26-30-38 +STEP: verifying the node doesn't have the label node +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:47:23.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-3817" for this suite. +Jun 18 11:47:29.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:47:30.035: INFO: namespace sched-pred-3817 deletion completed in 6.130741789s +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 + +• [SLOW TEST:9.343 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 + validates resource limits of pods that are allowed to run [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:47:30.036: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap configmap-839/configmap-test-da1dfcde-91be-11e9-8aef-6ab77b36fff7 +STEP: Creating a pod to test consume configMaps +Jun 18 11:47:30.086: INFO: Waiting up to 5m0s for pod "pod-configmaps-da1f0a7c-91be-11e9-8aef-6ab77b36fff7" in namespace "configmap-839" to be "success or failure" +Jun 18 11:47:30.090: INFO: Pod "pod-configmaps-da1f0a7c-91be-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.492704ms +Jun 18 11:47:32.094: INFO: Pod "pod-configmaps-da1f0a7c-91be-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00760812s +STEP: Saw pod success +Jun 18 11:47:32.094: INFO: Pod "pod-configmaps-da1f0a7c-91be-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:47:32.097: INFO: Trying to get logs from node ip-172-26-17-1 pod pod-configmaps-da1f0a7c-91be-11e9-8aef-6ab77b36fff7 container env-test: +STEP: delete the pod +Jun 18 11:47:32.117: INFO: Waiting for pod pod-configmaps-da1f0a7c-91be-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:47:32.120: INFO: Pod pod-configmaps-da1f0a7c-91be-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:47:32.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-839" for this suite. +Jun 18 11:47:38.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:47:38.247: INFO: namespace configmap-839 deletion completed in 6.122993926s + +• [SLOW TEST:8.211 seconds] +[sig-node] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32 + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:47:38.247: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward api env vars +Jun 18 11:47:38.292: INFO: Waiting up to 5m0s for pod "downward-api-df02d26b-91be-11e9-8aef-6ab77b36fff7" in namespace "downward-api-6483" to be "success or failure" +Jun 18 11:47:38.297: INFO: Pod "downward-api-df02d26b-91be-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.754911ms +Jun 18 11:47:40.301: INFO: Pod "downward-api-df02d26b-91be-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008922814s +STEP: Saw pod success +Jun 18 11:47:40.301: INFO: Pod "downward-api-df02d26b-91be-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:47:40.304: INFO: Trying to get logs from node ip-172-26-30-38 pod downward-api-df02d26b-91be-11e9-8aef-6ab77b36fff7 container dapi-container: +STEP: delete the pod +Jun 18 11:47:40.321: INFO: Waiting for pod downward-api-df02d26b-91be-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:47:40.325: INFO: Pod downward-api-df02d26b-91be-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:47:40.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6483" for this suite. +Jun 18 11:47:46.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:47:46.451: INFO: namespace downward-api-6483 deletion completed in 6.122283112s + +• [SLOW TEST:8.205 seconds] +[sig-node] Downward API +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:47:46.452: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir volume type on tmpfs +Jun 18 11:47:46.496: INFO: Waiting up to 5m0s for pod "pod-e3e6a68a-91be-11e9-8aef-6ab77b36fff7" in namespace "emptydir-8137" to be "success or failure" +Jun 18 11:47:46.501: INFO: Pod "pod-e3e6a68a-91be-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.163417ms +Jun 18 11:47:48.505: INFO: Pod "pod-e3e6a68a-91be-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009256491s +STEP: Saw pod success +Jun 18 11:47:48.505: INFO: Pod "pod-e3e6a68a-91be-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:47:48.508: INFO: Trying to get logs from node ip-172-26-30-38 pod pod-e3e6a68a-91be-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 11:47:48.527: INFO: Waiting for pod pod-e3e6a68a-91be-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:47:48.530: INFO: Pod pod-e3e6a68a-91be-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:47:48.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8137" for this suite. +Jun 18 11:47:54.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:47:54.659: INFO: namespace emptydir-8137 deletion completed in 6.124833354s + +• [SLOW TEST:8.207 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] DNS + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:47:54.659: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5139.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5139.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5139.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5139.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5139.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5139.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe /etc/hosts +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Jun 18 11:47:56.746: INFO: DNS probes using dns-5139/dns-test-e8cb61d8-91be-11e9-8aef-6ab77b36fff7 succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:47:56.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-5139" for this suite. +Jun 18 11:48:02.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:48:02.892: INFO: namespace dns-5139 deletion completed in 6.123502195s + +• [SLOW TEST:8.233 seconds] +[sig-network] DNS +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:48:02.892: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the rc1 +STEP: create the rc2 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well +STEP: delete the rc simpletest-rc-to-be-deleted +STEP: wait for the rc to be deleted +STEP: Gathering metrics +W0618 11:48:13.000332 14 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
+Jun 18 11:48:13.000: INFO: For apiserver_request_total: +For apiserver_request_latencies_summary: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:48:13.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-1998" for this suite. +Jun 18 11:48:19.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:48:19.128: INFO: namespace gc-1998 deletion completed in 6.124392877s + +• [SLOW TEST:16.235 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class + should be submitted and removed [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] [sig-node] Pods Extended + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:48:19.128: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods Set QOS Class + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:177 +[It] should be submitted and removed [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying QOS class is set on the pod +[AfterEach] [k8s.io] [sig-node] Pods Extended + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:48:19.176: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready +STEP: Destroying namespace "pods-875" for this suite. +Jun 18 11:48:41.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:48:41.308: INFO: namespace pods-875 deletion completed in 22.127726931s + +• [SLOW TEST:22.180 seconds] +[k8s.io] [sig-node] Pods Extended +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + [k8s.io] Pods Set QOS Class + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be submitted and removed [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete RS created by deployment when not orphaning [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:48:41.308: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete RS created by deployment when not orphaning [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for all rs to be garbage collected +STEP: expected 0 rs, got 1 rs +STEP: expected 0 pods, got 2 pods +STEP: Gathering metrics +W0618 11:48:42.397646 14 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
+Jun 18 11:48:42.397: INFO: For apiserver_request_total: +For apiserver_request_latencies_summary: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:48:42.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-478" for this suite. +Jun 18 11:48:48.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:48:48.526: INFO: namespace gc-478 deletion completed in 6.125379613s + +• [SLOW TEST:7.218 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should delete RS created by deployment when not orphaning [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:48:48.526: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +Jun 18 11:48:48.559: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + 
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:48:51.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-9741" for this suite. +Jun 18 11:48:57.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:48:57.635: INFO: namespace init-container-9741 deletion completed in 6.123658955s + +• [SLOW TEST:9.109 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for the cluster [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] DNS + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:48:57.635: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for the cluster [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8040.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8040.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Jun 18 11:49:01.723: INFO: DNS probes using dns-8040/dns-test-0e544b8a-91bf-11e9-8aef-6ab77b36fff7 succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:49:01.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-8040" for this suite. +Jun 18 11:49:07.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:49:07.865: INFO: namespace dns-8040 deletion completed in 6.126650958s + +• [SLOW TEST:10.231 seconds] +[sig-network] DNS +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should provide DNS for the cluster [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:49:07.866: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod liveness-exec in namespace container-probe-7975 +Jun 18 11:49:09.923: INFO: Started pod liveness-exec in namespace container-probe-7975 +STEP: checking the pod's current state and verifying that restartCount is present +Jun 18 11:49:09.926: INFO: Initial restart count of pod liveness-exec is 0 +Jun 18 11:50:00.035: INFO: Restart count of pod container-probe-7975/liveness-exec is now 1 (50.108852962s elapsed) +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:50:00.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
+STEP: Destroying namespace "container-probe-7975" for this suite. +Jun 18 11:50:06.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:50:06.183: INFO: namespace container-probe-7975 deletion completed in 6.127307641s + +• [SLOW TEST:58.318 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:50:06.185: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace statefulset-5202 +[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Initializing watcher for selector baz=blah,foo=bar +STEP: Creating stateful set ss in namespace statefulset-5202 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5202 +Jun 18 11:50:06.242: INFO: Found 0 stateful pods, waiting for 1 +Jun 18 11:50:16.246: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod +Jun 18 11:50:16.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-5202 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 18 11:50:16.496: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 18 11:50:16.496: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 18 11:50:16.496: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 18 11:50:16.499: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Jun 18 11:50:26.504: INFO: Waiting for pod ss-0 to enter 
Running - Ready=false, currently Running - Ready=false +Jun 18 11:50:26.504: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 18 11:50:26.519: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999035s +Jun 18 11:50:27.524: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996840818s +Jun 18 11:50:28.528: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.992606481s +Jun 18 11:50:29.532: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.988175164s +Jun 18 11:50:30.536: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.984048018s +Jun 18 11:50:31.541: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.979845163s +Jun 18 11:50:32.545: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.975540195s +Jun 18 11:50:33.549: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.971522105s +Jun 18 11:50:34.553: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.966765674s +Jun 18 11:50:35.558: INFO: Verifying statefulset ss doesn't scale past 1 for another 962.695171ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5202 +Jun 18 11:50:36.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-5202 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 18 11:50:36.771: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" +Jun 18 11:50:36.771: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 18 11:50:36.771: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 18 11:50:36.775: INFO: Found 1 stateful pods, waiting for 3 +Jun 18 11:50:46.780: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Jun 18 11:50:46.780: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Jun 18 11:50:46.780: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Verifying that stateful set ss was scaled up in order +STEP: Scale down will halt with unhealthy stateful pod +Jun 18 11:50:46.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-5202 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 18 11:50:46.996: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 18 11:50:46.996: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 18 11:50:46.996: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 18 11:50:46.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-5202 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 18 11:50:47.212: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 18 11:50:47.212: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 18 11:50:47.212: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 18 11:50:47.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-5202 ss-2 
-- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 18 11:50:47.434: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 18 11:50:47.434: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 18 11:50:47.434: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 18 11:50:47.434: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 18 11:50:47.438: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 +Jun 18 11:50:57.446: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jun 18 11:50:57.446: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Jun 18 11:50:57.446: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Jun 18 11:50:57.457: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999068s +Jun 18 11:50:58.461: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996672812s +Jun 18 11:50:59.465: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992504429s +Jun 18 11:51:00.470: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988162605s +Jun 18 11:51:01.474: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.98387617s +Jun 18 11:51:02.478: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.979803901s +Jun 18 11:51:03.482: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.975753148s +Jun 18 11:51:04.487: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.971327434s +Jun 18 11:51:05.491: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.967110655s +Jun 18 11:51:06.495: INFO: Verifying statefulset ss doesn't scale past 3 for another 962.99701ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5202 +Jun 18 11:51:07.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-5202 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 18 11:51:07.709: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" +Jun 18 11:51:07.709: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 18 11:51:07.709: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 18 11:51:07.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-5202 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 18 11:51:07.919: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" +Jun 18 11:51:07.919: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 18 11:51:07.919: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 18 11:51:07.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 exec --namespace=statefulset-5202 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 18 11:51:08.148: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" +Jun 18 11:51:08.148: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 18 11:51:08.148: 
INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 18 11:51:08.148: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Jun 18 11:51:28.164: INFO: Deleting all statefulset in ns statefulset-5202 +Jun 18 11:51:28.167: INFO: Scaling statefulset ss to 0 +Jun 18 11:51:28.176: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 18 11:51:28.178: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:51:28.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-5202" for this suite. +Jun 18 11:51:34.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:51:34.323: INFO: namespace statefulset-5202 deletion completed in 6.12883873s + +• [SLOW TEST:88.139 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] ReplicationController + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:51:34.323: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Given a Pod with a 'name' label pod-adoption is created +STEP: When a replication controller with a matching selector is created +STEP: Then the orphan pod is adopted +[AfterEach] [sig-apps] ReplicationController + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:51:37.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-8905" for this suite. 
+Jun 18 11:51:59.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:51:59.520: INFO: namespace replication-controller-8905 deletion completed in 22.123576649s + +• [SLOW TEST:25.197 seconds] +[sig-apps] ReplicationController +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should adopt matching pods on creation [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:51:59.520: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 11:51:59.559: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Jun 18 11:51:59.569: INFO: Pod name sample-pod: Found 0 pods out of 1 +Jun 18 11:52:04.573: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Jun 18 11:52:04.573: INFO: Creating deployment "test-rolling-update-deployment" +Jun 18 11:52:04.578: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Jun 18 11:52:04.583: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created +Jun 18 11:52:06.590: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Jun 18 11:52:06.593: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Jun 18 11:52:06.602: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-7632,SelfLink:/apis/apps/v1/namespaces/deployment-7632/deployments/test-rolling-update-deployment,UID:7dbb24af-91bf-11e9-8d87-0a902858a792,ResourceVersion:47887,Generation:1,CreationTimestamp:2019-06-18 11:52:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-06-18 11:52:04 +0000 UTC 2019-06-18 11:52:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-06-18 11:52:06 +0000 UTC 2019-06-18 11:52:04 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-67599b4d9" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} + +Jun 18 11:52:06.605: INFO: New ReplicaSet "test-rolling-update-deployment-67599b4d9" of Deployment "test-rolling-update-deployment": +&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-67599b4d9,GenerateName:,Namespace:deployment-7632,SelfLink:/apis/apps/v1/namespaces/deployment-7632/replicasets/test-rolling-update-deployment-67599b4d9,UID:7dbdbf4f-91bf-11e9-8999-0a07e7e61ed8,ResourceVersion:47876,Generation:1,CreationTimestamp:2019-06-18 11:52:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 67599b4d9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7dbb24af-91bf-11e9-8d87-0a902858a792 0xc003465920 0xc003465921}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 
67599b4d9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 67599b4d9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} +Jun 18 11:52:06.605: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Jun 18 11:52:06.605: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-7632,SelfLink:/apis/apps/v1/namespaces/deployment-7632/replicasets/test-rolling-update-controller,UID:7abe0aa8-91bf-11e9-8d87-0a902858a792,ResourceVersion:47885,Generation:2,CreationTimestamp:2019-06-18 11:51:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7dbb24af-91bf-11e9-8d87-0a902858a792 0xc003465857 0xc003465858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 18 11:52:06.608: INFO: Pod "test-rolling-update-deployment-67599b4d9-k8q27" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-67599b4d9-k8q27,GenerateName:test-rolling-update-deployment-67599b4d9-,Namespace:deployment-7632,SelfLink:/api/v1/namespaces/deployment-7632/pods/test-rolling-update-deployment-67599b4d9-k8q27,UID:7dbe88a1-91bf-11e9-8999-0a07e7e61ed8,ResourceVersion:47875,Generation:0,CreationTimestamp:2019-06-18 11:52:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 67599b4d9,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.42.0.232/32,},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-67599b4d9 7dbdbf4f-91bf-11e9-8999-0a07e7e61ed8 0xc002af01b0 0xc002af01b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4nmtr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4nmtr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-4nmtr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-26-16-178,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002af0230} {node.kubernetes.io/unreachable Exists NoExecute 0xc002af0250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:52:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:52:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:52:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-18 11:52:04 +0000 UTC }],Message:,Reason:,HostIP:172.26.16.178,PodIP:10.42.0.232,StartTime:2019-06-18 11:52:04 
+0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-06-18 11:52:05 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d59c569a12e7f1638984ed471ef3e2a1401e1832b027ac673f347aa2bdd697a9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:52:06.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-7632" for this suite. +Jun 18 11:52:12.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:52:12.737: INFO: namespace deployment-7632 deletion completed in 6.124968132s + +• [SLOW TEST:13.217 seconds] +[sig-apps] Deployment +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:52:12.737: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0666 on node default medium +Jun 18 11:52:12.786: INFO: Waiting up to 5m0s for pod "pod-829f3829-91bf-11e9-8aef-6ab77b36fff7" in namespace "emptydir-7280" to be "success or failure" +Jun 18 11:52:12.790: INFO: Pod "pod-829f3829-91bf-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084448ms +Jun 18 11:52:14.794: INFO: Pod "pod-829f3829-91bf-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008440557s +STEP: Saw pod success +Jun 18 11:52:14.794: INFO: Pod "pod-829f3829-91bf-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:52:14.797: INFO: Trying to get logs from node ip-172-26-30-38 pod pod-829f3829-91bf-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 11:52:14.820: INFO: Waiting for pod pod-829f3829-91bf-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:52:14.823: INFO: Pod pod-829f3829-91bf-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:52:14.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7280" for this suite. +Jun 18 11:52:20.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:52:20.952: INFO: namespace emptydir-7280 deletion completed in 6.12282583s + +• [SLOW TEST:8.215 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run job + should create a job from an image when restart is OnFailure [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:52:20.952: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl run job + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1510 +[It] should create a job from an image when restart is OnFailure [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 18 11:52:20.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2772' +Jun 18 11:52:21.252: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Jun 18 11:52:21.252: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" +STEP: verifying the job e2e-test-nginx-job was created +[AfterEach] [k8s.io] Kubectl run job + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1515 +Jun 18 11:52:21.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-675335780 delete jobs e2e-test-nginx-job --namespace=kubectl-2772' +Jun 18 11:52:21.338: INFO: stderr: "" +Jun 18 11:52:21.338: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:52:21.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2772" for this suite. +Jun 18 11:52:43.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:52:43.471: INFO: namespace kubectl-2772 deletion completed in 22.129078771s + +• [SLOW TEST:22.519 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl run job + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create a job from an image when restart is OnFailure [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:52:43.471: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename aggregator +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:69 +[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Registering the sample API server. 
+Jun 18 11:52:44.180: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set +Jun 18 11:52:46.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696455564, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696455564, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696455564, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696455564, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-65db6755fc\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 18 11:52:48.950: INFO: Waited 720.169846ms for the sample-apiserver to be ready to handle requests. +[AfterEach] [sig-api-machinery] Aggregator + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:60 +[AfterEach] [sig-api-machinery] Aggregator + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:52:49.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "aggregator-2125" for this suite. +Jun 18 11:52:55.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:52:55.799: INFO: namespace aggregator-2125 deletion completed in 6.210861768s + +• [SLOW TEST:12.329 seconds] +[sig-api-machinery] Aggregator +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:52:55.800: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with downward pod [LinuxOnly] [Conformance] + 
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod pod-subpath-test-downwardapi-ccb6 +STEP: Creating a pod to test atomic-volume-subpath +Jun 18 11:52:55.855: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ccb6" in namespace "subpath-883" to be "success or failure" +Jun 18 11:52:55.861: INFO: Pod "pod-subpath-test-downwardapi-ccb6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046432ms +Jun 18 11:52:57.865: INFO: Pod "pod-subpath-test-downwardapi-ccb6": Phase="Running", Reason="", readiness=true. Elapsed: 2.00992767s +Jun 18 11:52:59.869: INFO: Pod "pod-subpath-test-downwardapi-ccb6": Phase="Running", Reason="", readiness=true. Elapsed: 4.014208905s +Jun 18 11:53:01.874: INFO: Pod "pod-subpath-test-downwardapi-ccb6": Phase="Running", Reason="", readiness=true. Elapsed: 6.01868431s +Jun 18 11:53:03.878: INFO: Pod "pod-subpath-test-downwardapi-ccb6": Phase="Running", Reason="", readiness=true. Elapsed: 8.023036236s +Jun 18 11:53:05.882: INFO: Pod "pod-subpath-test-downwardapi-ccb6": Phase="Running", Reason="", readiness=true. Elapsed: 10.027243583s +Jun 18 11:53:07.886: INFO: Pod "pod-subpath-test-downwardapi-ccb6": Phase="Running", Reason="", readiness=true. Elapsed: 12.031272573s +Jun 18 11:53:09.891: INFO: Pod "pod-subpath-test-downwardapi-ccb6": Phase="Running", Reason="", readiness=true. Elapsed: 14.035603805s +Jun 18 11:53:11.895: INFO: Pod "pod-subpath-test-downwardapi-ccb6": Phase="Running", Reason="", readiness=true. Elapsed: 16.040159632s +Jun 18 11:53:13.899: INFO: Pod "pod-subpath-test-downwardapi-ccb6": Phase="Running", Reason="", readiness=true. Elapsed: 18.044236445s +Jun 18 11:53:15.903: INFO: Pod "pod-subpath-test-downwardapi-ccb6": Phase="Running", Reason="", readiness=true. Elapsed: 20.048240665s +Jun 18 11:53:17.907: INFO: Pod "pod-subpath-test-downwardapi-ccb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.052321622s +STEP: Saw pod success +Jun 18 11:53:17.907: INFO: Pod "pod-subpath-test-downwardapi-ccb6" satisfied condition "success or failure" +Jun 18 11:53:17.910: INFO: Trying to get logs from node ip-172-26-30-38 pod pod-subpath-test-downwardapi-ccb6 container test-container-subpath-downwardapi-ccb6: +STEP: delete the pod +Jun 18 11:53:17.931: INFO: Waiting for pod pod-subpath-test-downwardapi-ccb6 to disappear +Jun 18 11:53:17.935: INFO: Pod pod-subpath-test-downwardapi-ccb6 no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-ccb6 +Jun 18 11:53:17.935: INFO: Deleting pod "pod-subpath-test-downwardapi-ccb6" in namespace "subpath-883" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:53:17.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-883" for this suite. 
+Jun 18 11:53:23.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:53:24.063: INFO: namespace subpath-883 deletion completed in 6.121324772s + +• [SLOW TEST:28.263 seconds] +[sig-storage] Subpath +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:53:24.063: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 11:53:24.119: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. +Jun 18 11:53:24.133: INFO: Number of nodes with available pods: 0 +Jun 18 11:53:24.133: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 11:53:25.140: INFO: Number of nodes with available pods: 0 +Jun 18 11:53:25.140: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 11:53:26.140: INFO: Number of nodes with available pods: 3 +Jun 18 11:53:26.141: INFO: Number of running nodes: 3, number of available pods: 3 +STEP: Update daemon pods image. +STEP: Check that daemon pods images are updated. +Jun 18 11:53:26.166: INFO: Wrong image for pod: daemon-set-5rq47. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:26.166: INFO: Wrong image for pod: daemon-set-7hxxg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:26.166: INFO: Wrong image for pod: daemon-set-bhsr4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:27.175: INFO: Wrong image for pod: daemon-set-5rq47. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:27.175: INFO: Wrong image for pod: daemon-set-7hxxg. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:27.175: INFO: Wrong image for pod: daemon-set-bhsr4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:28.176: INFO: Wrong image for pod: daemon-set-5rq47. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:28.176: INFO: Wrong image for pod: daemon-set-7hxxg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:28.176: INFO: Wrong image for pod: daemon-set-bhsr4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:29.176: INFO: Wrong image for pod: daemon-set-5rq47. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:29.176: INFO: Pod daemon-set-5rq47 is not available +Jun 18 11:53:29.176: INFO: Wrong image for pod: daemon-set-7hxxg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:29.176: INFO: Wrong image for pod: daemon-set-bhsr4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:30.175: INFO: Wrong image for pod: daemon-set-7hxxg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:30.175: INFO: Wrong image for pod: daemon-set-bhsr4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:30.175: INFO: Pod daemon-set-p98gw is not available +Jun 18 11:53:31.176: INFO: Wrong image for pod: daemon-set-7hxxg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:31.176: INFO: Wrong image for pod: daemon-set-bhsr4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:31.176: INFO: Pod daemon-set-p98gw is not available +Jun 18 11:53:32.176: INFO: Wrong image for pod: daemon-set-7hxxg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:32.176: INFO: Wrong image for pod: daemon-set-bhsr4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:33.176: INFO: Wrong image for pod: daemon-set-7hxxg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:33.176: INFO: Pod daemon-set-7hxxg is not available +Jun 18 11:53:33.176: INFO: Wrong image for pod: daemon-set-bhsr4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:34.176: INFO: Wrong image for pod: daemon-set-bhsr4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:34.176: INFO: Pod daemon-set-jzm7k is not available +Jun 18 11:53:35.176: INFO: Wrong image for pod: daemon-set-bhsr4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:35.176: INFO: Pod daemon-set-jzm7k is not available +Jun 18 11:53:36.175: INFO: Wrong image for pod: daemon-set-bhsr4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:37.176: INFO: Wrong image for pod: daemon-set-bhsr4. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 18 11:53:37.176: INFO: Pod daemon-set-bhsr4 is not available +Jun 18 11:53:38.175: INFO: Pod daemon-set-bzt7t is not available +STEP: Check that daemon pods are still running on every node of the cluster. +Jun 18 11:53:38.186: INFO: Number of nodes with available pods: 2 +Jun 18 11:53:38.186: INFO: Node ip-172-26-17-1 is running more than one daemon pod +Jun 18 11:53:39.194: INFO: Number of nodes with available pods: 2 +Jun 18 11:53:39.194: INFO: Node ip-172-26-17-1 is running more than one daemon pod +Jun 18 11:53:40.194: INFO: Number of nodes with available pods: 2 +Jun 18 11:53:40.194: INFO: Node ip-172-26-17-1 is running more than one daemon pod +Jun 18 11:53:41.194: INFO: Number of nodes with available pods: 3 +Jun 18 11:53:41.194: INFO: Number of running nodes: 3, number of available pods: 3 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9581, will wait for the garbage collector to delete the pods +Jun 18 11:53:41.270: INFO: Deleting DaemonSet.extensions daemon-set took: 8.224598ms +Jun 18 11:53:41.771: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.273137ms +Jun 18 11:53:47.379: INFO: Number of nodes with available pods: 0 +Jun 18 11:53:47.379: INFO: Number of running nodes: 0, number of available pods: 0 +Jun 18 11:53:47.382: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9581/daemonsets","resourceVersion":"48393"},"items":null} + +Jun 18 11:53:47.384: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9581/pods","resourceVersion":"48393"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:53:47.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-9581" for this suite. 
+Jun 18 11:53:53.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:53:53.528: INFO: namespace daemonsets-9581 deletion completed in 6.127874375s + +• [SLOW TEST:29.465 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:53:53.528: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating the pod +Jun 18 11:53:56.101: INFO: Successfully updated pod "labelsupdatebeb1e9f1-91bf-11e9-8aef-6ab77b36fff7" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:53:58.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3079" for this suite. 
+Jun 18 11:54:20.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:54:20.247: INFO: namespace projected-3079 deletion completed in 22.125484656s + +• [SLOW TEST:26.719 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:54:20.247: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Jun 18 11:54:20.309: INFO: Waiting up to 5m0s for pod "pod-cea1a659-91bf-11e9-8aef-6ab77b36fff7" in namespace "emptydir-2759" to be "success or failure" +Jun 18 11:54:20.313: INFO: Pod "pod-cea1a659-91bf-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.523489ms +Jun 18 11:54:22.317: INFO: Pod "pod-cea1a659-91bf-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008572027s +STEP: Saw pod success +Jun 18 11:54:22.317: INFO: Pod "pod-cea1a659-91bf-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:54:22.320: INFO: Trying to get logs from node ip-172-26-16-178 pod pod-cea1a659-91bf-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 11:54:22.341: INFO: Waiting for pod pod-cea1a659-91bf-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:54:22.345: INFO: Pod pod-cea1a659-91bf-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:54:22.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2759" for this suite. 
+Jun 18 11:54:28.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:54:28.471: INFO: namespace emptydir-2759 deletion completed in 6.121875489s + +• [SLOW TEST:8.224 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSS +------------------------------ +[k8s.io] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:54:28.471: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test env composition +Jun 18 11:54:28.518: INFO: Waiting up to 5m0s for pod "var-expansion-d3864c7c-91bf-11e9-8aef-6ab77b36fff7" in namespace "var-expansion-3629" to be "success or failure" +Jun 18 11:54:28.522: INFO: Pod "var-expansion-d3864c7c-91bf-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280533ms +Jun 18 11:54:30.527: INFO: Pod "var-expansion-d3864c7c-91bf-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008469037s +Jun 18 11:54:32.530: INFO: Pod "var-expansion-d3864c7c-91bf-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01231292s +STEP: Saw pod success +Jun 18 11:54:32.530: INFO: Pod "var-expansion-d3864c7c-91bf-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:54:32.533: INFO: Trying to get logs from node ip-172-26-30-38 pod var-expansion-d3864c7c-91bf-11e9-8aef-6ab77b36fff7 container dapi-container: +STEP: delete the pod +Jun 18 11:54:32.554: INFO: Waiting for pod var-expansion-d3864c7c-91bf-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:54:32.557: INFO: Pod var-expansion-d3864c7c-91bf-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:54:32.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-3629" for this suite. 
+Jun 18 11:54:38.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:54:38.688: INFO: namespace var-expansion-3629 deletion completed in 6.126498008s + +• [SLOW TEST:10.217 seconds] +[k8s.io] Variable Expansion +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-network] Services + should serve multiport endpoints from pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:54:38.688: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:86 +[It] should serve multiport endpoints from pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating service multi-endpoint-test in namespace services-2221 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2221 to expose endpoints map[] +Jun 18 11:54:38.738: INFO: Get endpoints failed (2.710925ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found +Jun 18 11:54:39.742: INFO: successfully validated that service multi-endpoint-test in namespace services-2221 exposes endpoints map[] (1.006367467s elapsed) +STEP: Creating pod pod1 in namespace services-2221 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2221 to expose endpoints map[pod1:[100]] +Jun 18 11:54:42.779: INFO: successfully validated that service multi-endpoint-test in namespace services-2221 exposes endpoints map[pod1:[100]] (3.028795009s elapsed) +STEP: Creating pod pod2 in namespace services-2221 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2221 to expose endpoints map[pod1:[100] pod2:[101]] +Jun 18 11:54:45.825: INFO: successfully validated that service multi-endpoint-test in namespace services-2221 exposes endpoints map[pod1:[100] pod2:[101]] (3.040828613s elapsed) +STEP: Deleting pod pod1 in namespace services-2221 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2221 to expose endpoints map[pod2:[101]] +Jun 18 11:54:45.841: INFO: successfully validated that service multi-endpoint-test in namespace services-2221 exposes endpoints map[pod2:[101]] (7.971715ms elapsed) +STEP: Deleting pod pod2 in namespace services-2221 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2221 to expose endpoints map[] +Jun 18 11:54:46.854: INFO: successfully validated that service 
multi-endpoint-test in namespace services-2221 exposes endpoints map[] (1.006902893s elapsed) +[AfterEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:54:46.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2221" for this suite. +Jun 18 11:55:08.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:55:09.032: INFO: namespace services-2221 deletion completed in 22.149393616s +[AfterEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 + +• [SLOW TEST:30.344 seconds] +[sig-network] Services +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should serve multiport endpoints from pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:55:09.032: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0666 on node default medium +Jun 18 11:55:09.084: INFO: Waiting up to 5m0s for pod "pod-ebb42982-91bf-11e9-8aef-6ab77b36fff7" in namespace "emptydir-9732" to be "success or failure" +Jun 18 11:55:09.088: INFO: Pod "pod-ebb42982-91bf-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.435039ms +Jun 18 11:55:11.092: INFO: Pod "pod-ebb42982-91bf-11e9-8aef-6ab77b36fff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007572583s +Jun 18 11:55:13.096: INFO: Pod "pod-ebb42982-91bf-11e9-8aef-6ab77b36fff7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011552078s +STEP: Saw pod success +Jun 18 11:55:13.096: INFO: Pod "pod-ebb42982-91bf-11e9-8aef-6ab77b36fff7" satisfied condition "success or failure" +Jun 18 11:55:13.100: INFO: Trying to get logs from node ip-172-26-30-38 pod pod-ebb42982-91bf-11e9-8aef-6ab77b36fff7 container test-container: +STEP: delete the pod +Jun 18 11:55:13.121: INFO: Waiting for pod pod-ebb42982-91bf-11e9-8aef-6ab77b36fff7 to disappear +Jun 18 11:55:13.127: INFO: Pod pod-ebb42982-91bf-11e9-8aef-6ab77b36fff7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:55:13.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9732" for this suite. +Jun 18 11:55:19.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:55:19.252: INFO: namespace emptydir-9732 deletion completed in 6.120866008s + +• [SLOW TEST:10.220 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 18 11:55:19.252: INFO: >>> kubeConfig: /tmp/kubeconfig-675335780 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should run and stop complex daemon [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 18 11:55:19.310: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. +Jun 18 11:55:19.320: INFO: Number of nodes with available pods: 0 +Jun 18 11:55:19.320: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Change node label to blue, check that daemon pod is launched. 
+Jun 18 11:55:19.341: INFO: Number of nodes with available pods: 0 +Jun 18 11:55:19.341: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 11:55:20.347: INFO: Number of nodes with available pods: 0 +Jun 18 11:55:20.347: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 11:55:21.345: INFO: Number of nodes with available pods: 1 +Jun 18 11:55:21.345: INFO: Number of running nodes: 1, number of available pods: 1 +STEP: Update the node label to green, and wait for daemons to be unscheduled +Jun 18 11:55:21.362: INFO: Number of nodes with available pods: 1 +Jun 18 11:55:21.362: INFO: Number of running nodes: 0, number of available pods: 1 +Jun 18 11:55:22.366: INFO: Number of nodes with available pods: 0 +Jun 18 11:55:22.366: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate +Jun 18 11:55:22.374: INFO: Number of nodes with available pods: 0 +Jun 18 11:55:22.374: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 11:55:23.378: INFO: Number of nodes with available pods: 0 +Jun 18 11:55:23.378: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 11:55:24.381: INFO: Number of nodes with available pods: 0 +Jun 18 11:55:24.381: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 11:55:25.378: INFO: Number of nodes with available pods: 0 +Jun 18 11:55:25.378: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 11:55:26.378: INFO: Number of nodes with available pods: 0 +Jun 18 11:55:26.378: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 11:55:27.379: INFO: Number of nodes with available pods: 0 +Jun 18 11:55:27.379: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 11:55:28.378: INFO: Number of nodes with available pods: 0 +Jun 18 11:55:28.378: INFO: Node ip-172-26-16-178 is running more than one daemon pod +Jun 18 11:55:29.378: INFO: Number of nodes with available pods: 1 +Jun 18 11:55:29.378: INFO: Number of running nodes: 1, number of available pods: 1 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5522, will wait for the garbage collector to delete the pods +Jun 18 11:55:29.445: INFO: Deleting DaemonSet.extensions daemon-set took: 7.997096ms +Jun 18 11:55:29.946: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.212683ms +Jun 18 11:55:37.249: INFO: Number of nodes with available pods: 0 +Jun 18 11:55:37.249: INFO: Number of running nodes: 0, number of available pods: 0 +Jun 18 11:55:37.252: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5522/daemonsets","resourceVersion":"48876"},"items":null} + +Jun 18 11:55:37.254: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5522/pods","resourceVersion":"48876"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 18 11:55:37.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace
"daemonsets-5522" for this suite. +Jun 18 11:55:43.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 18 11:55:43.401: INFO: namespace daemonsets-5522 deletion completed in 6.123912653s + +• [SLOW TEST:24.149 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should run and stop complex daemon [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSJun 18 11:55:43.401: INFO: Running AfterSuite actions on all nodes +Jun 18 11:55:43.401: INFO: Running AfterSuite actions on node 1 +Jun 18 11:55:43.401: INFO: Skipping dumping logs from cluster + +Ran 204 of 3585 Specs in 5302.349 seconds +SUCCESS! -- 204 Passed | 0 Failed | 0 Pending | 3381 Skipped PASS + +Ginkgo ran 1 suite in 1h28m23.426994787s +Test Suite Passed diff --git a/v1.14/rancher/junit_01.xml b/v1.14/rancher/junit_01.xml new file mode 100644 index 0000000000..5e3cba1b1a --- /dev/null +++ b/v1.14/rancher/junit_01.xml @@ -0,0 +1,10350 @@
[junit_01.xml: 10,350 added lines of JUnit XML test results; the XML markup was not preserved in this extraction]
\ No newline at end of file diff --git a/v1.14/rancher/sonobuoy.tar.gz b/v1.14/rancher/sonobuoy.tar.gz new file mode 100644 index 0000000000..a6a228ccb8 Binary files /dev/null and b/v1.14/rancher/sonobuoy.tar.gz differ