+STEP: delete the pod
+Jun 18 08:08:57.702: INFO: Waiting for pod downwardapi-volume-515383c8-91a0-11e9-bbf5-0e74dabf3615 to disappear
+Jun 18 08:08:57.704: INFO: Pod downwardapi-volume-515383c8-91a0-11e9-bbf5-0e74dabf3615 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:08:57.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-9k6h2" for this suite.
+Jun 18 08:09:05.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:09:06.536: INFO: namespace: e2e-tests-projected-9k6h2, resource: bindings, ignored listing per whitelist
+Jun 18 08:09:06.632: INFO: namespace e2e-tests-projected-9k6h2 deletion completed in 8.925422858s
+
+• [SLOW TEST:11.500 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+ should provide podname only [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
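The test above verifies that a projected downwardAPI volume can expose a pod's own name as a file the container reads back. As a rough sketch of what it exercises (not the e2e framework's exact code; pod name, namespace, and image are illustrative, and a recent client-go is assumed):

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Kubeconfig path as reported in the log; adjust for your cluster.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-656024001")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // illustrative name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"}, // prints this pod's own name
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path:     "podname", // file name the container reads
                                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                                }},
                            },
                        }},
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}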
+SS
+------------------------------
+[k8s.io] Pods
+ should be updated [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Pods
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:09:06.633: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename pods
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pods-ssqnx
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
+[It] should be updated [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+STEP: verifying the pod is in kubernetes
+STEP: updating the pod
+Jun 18 08:09:10.526: INFO: Successfully updated pod "pod-update-586482cd-91a0-11e9-bbf5-0e74dabf3615"
+STEP: verifying the updated pod is in kubernetes
+Jun 18 08:09:10.533: INFO: Pod update OK
+[AfterEach] [k8s.io] Pods
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:09:10.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-pods-ssqnx" for this suite.
+Jun 18 08:09:34.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:09:34.636: INFO: namespace: e2e-tests-pods-ssqnx, resource: bindings, ignored listing per whitelist
+Jun 18 08:09:35.526: INFO: namespace e2e-tests-pods-ssqnx deletion completed in 24.984320651s
+
+• [SLOW TEST:28.893 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+ should be updated [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
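This test creates a pod, mutates it in place, and confirms the update is visible. A minimal client-go sketch of the same get-modify-update round trip, wrapped in RetryOnConflict because pod writes can race with the kubelet's own status updates (pod name and namespace are illustrative):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/retry"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-656024001")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pods := cs.CoreV1().Pods("default") // illustrative namespace

    err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
        pod, err := pods.Get(context.TODO(), "pod-update-demo", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if pod.Labels == nil {
            pod.Labels = map[string]string{}
        }
        pod.Labels["time"] = "updated" // mutate a label, much as the e2e test does
        _, err = pods.Update(context.TODO(), pod, metav1.UpdateOptions{})
        return err // a Conflict here triggers a fresh Get and a retry
    })
    if err != nil {
        panic(err)
    }
    fmt.Println("Pod update OK") // mirrors the log line above
}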
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes
+ should support subpaths with configmap pod [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Subpath
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:09:35.526: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename subpath
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-subpath-8xss4
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
+STEP: Setting up data
+[It] should support subpaths with configmap pod [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating pod pod-subpath-test-configmap-h9fv
+STEP: Creating a pod to test atomic-volume-subpath
+Jun 18 08:09:35.739: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-h9fv" in namespace "e2e-tests-subpath-8xss4" to be "success or failure"
+Jun 18 08:09:35.742: INFO: Pod "pod-subpath-test-configmap-h9fv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.628331ms
+Jun 18 08:09:38.511: INFO: Pod "pod-subpath-test-configmap-h9fv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.771477878s
+Jun 18 08:09:40.516: INFO: Pod "pod-subpath-test-configmap-h9fv": Phase="Running", Reason="", readiness=false. Elapsed: 4.776416732s
+Jun 18 08:09:42.519: INFO: Pod "pod-subpath-test-configmap-h9fv": Phase="Running", Reason="", readiness=false. Elapsed: 6.779239417s
+Jun 18 08:09:44.522: INFO: Pod "pod-subpath-test-configmap-h9fv": Phase="Running", Reason="", readiness=false. Elapsed: 8.782665759s
+Jun 18 08:09:46.526: INFO: Pod "pod-subpath-test-configmap-h9fv": Phase="Running", Reason="", readiness=false. Elapsed: 10.786145441s
+Jun 18 08:09:48.533: INFO: Pod "pod-subpath-test-configmap-h9fv": Phase="Running", Reason="", readiness=false. Elapsed: 12.793421249s
+Jun 18 08:09:50.536: INFO: Pod "pod-subpath-test-configmap-h9fv": Phase="Running", Reason="", readiness=false. Elapsed: 14.796922743s
+Jun 18 08:09:52.539: INFO: Pod "pod-subpath-test-configmap-h9fv": Phase="Running", Reason="", readiness=false. Elapsed: 16.799952097s
+Jun 18 08:09:54.542: INFO: Pod "pod-subpath-test-configmap-h9fv": Phase="Running", Reason="", readiness=false. Elapsed: 18.80300584s
+Jun 18 08:09:56.546: INFO: Pod "pod-subpath-test-configmap-h9fv": Phase="Running", Reason="", readiness=false. Elapsed: 20.806050503s
+Jun 18 08:09:58.549: INFO: Pod "pod-subpath-test-configmap-h9fv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.809227901s
+STEP: Saw pod success
+Jun 18 08:09:58.549: INFO: Pod "pod-subpath-test-configmap-h9fv" satisfied condition "success or failure"
+Jun 18 08:09:58.551: INFO: Trying to get logs from node node5 pod pod-subpath-test-configmap-h9fv container test-container-subpath-configmap-h9fv:
+STEP: delete the pod
+Jun 18 08:09:58.568: INFO: Waiting for pod pod-subpath-test-configmap-h9fv to disappear
+Jun 18 08:09:58.571: INFO: Pod pod-subpath-test-configmap-h9fv no longer exists
+STEP: Deleting pod pod-subpath-test-configmap-h9fv
+Jun 18 08:09:58.571: INFO: Deleting pod "pod-subpath-test-configmap-h9fv" in namespace "e2e-tests-subpath-8xss4"
+[AfterEach] [sig-storage] Subpath
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:09:58.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-subpath-8xss4" for this suite.
+Jun 18 08:10:06.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:10:06.648: INFO: namespace: e2e-tests-subpath-8xss4, resource: bindings, ignored listing per whitelist
+Jun 18 08:10:07.544: INFO: namespace e2e-tests-subpath-8xss4 deletion completed in 8.961514281s
+
+• [SLOW TEST:32.018 seconds]
+[sig-storage] Subpath
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+ Atomic writer volumes
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
+ should support subpaths with configmap pod [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
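The subpath test mounts a single key of a ConfigMap at a file path via volumeMounts[].subPath and has the container read it back. A sketch under the same assumptions as above (illustrative names, recent client-go):

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-656024001")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ns := "default" // illustrative; the run above used a generated e2e-tests-subpath-* namespace

    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
        Data:       map[string]string{"mypath": "from-configmap\n"},
    }
    if _, err := cs.CoreV1().ConfigMaps(ns).Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /test-volume/mypath"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "config",
                    MountPath: "/test-volume/mypath",
                    SubPath:   "mypath", // mount one key's file, not the whole volume
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "config",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-demo"},
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}

One caveat the "Atomic writer volumes" grouping hints at: a subPath mount binds to one materialized file, so unlike a whole-volume mount it does not pick up later updates to the ConfigMap.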
+SSSSSSSSSS
+------------------------------
+[k8s.io] [sig-node] PreStop
+ should call prestop when killing a pod [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] [sig-node] PreStop
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:10:07.544: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename prestop
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-prestop-lxbch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should call prestop when killing a pod [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating server pod server in namespace e2e-tests-prestop-lxbch
+STEP: Waiting for pods to come up.
+STEP: Creating tester pod tester in namespace e2e-tests-prestop-lxbch
+STEP: Deleting pre-stop pod
+Jun 18 08:10:23.559: INFO: Saw: {
+ "Hostname": "server",
+ "Sent": null,
+ "Received": {
+ "prestop": 1
+ },
+ "Errors": null,
+ "Log": [
+ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
+ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
+ ],
+ "StillContactingPeers": true
+}
+STEP: Deleting the server pod
+[AfterEach] [k8s.io] [sig-node] PreStop
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:10:23.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-prestop-lxbch" for this suite.
+Jun 18 08:11:07.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:11:08.567: INFO: namespace: e2e-tests-prestop-lxbch, resource: bindings, ignored listing per whitelist
+Jun 18 08:11:09.646: INFO: namespace e2e-tests-prestop-lxbch deletion completed in 46.073943403s
+
+• [SLOW TEST:62.102 seconds]
+[k8s.io] [sig-node] PreStop
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+ should call prestop when killing a pod [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
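The PreStop test runs a server pod that counts hook deliveries and a tester pod whose container carries a preStop hook; deleting the tester must deliver the hook, which is the "prestop": 1 in the Saw: block above. A sketch of wiring up an HTTP preStop hook (the path, port, and host below are illustrative placeholders, not the test's actual hook; a recent client-go is assumed, where the handler type is corev1.LifecycleHandler):

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-656024001")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "tester"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "tester",
                Image:   "busybox",
                Command: []string{"sleep", "600"},
                Lifecycle: &corev1.Lifecycle{
                    // The kubelet performs this GET before sending SIGTERM to the container.
                    PreStop: &corev1.LifecycleHandler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Path: "/prestop",           // illustrative endpoint on a server pod
                            Port: intstr.FromInt(8080), // illustrative port
                            Host: "10.0.0.1",           // illustrative server pod IP
                        },
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
    // Deleting the pod now triggers the hook before the container is killed.
    _ = cs.CoreV1().Pods("default").Delete(context.TODO(), "tester", metav1.DeleteOptions{})
}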
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Proxy version v1
+ should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] version v1
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:11:09.646: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename proxy
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-proxy-t89sv
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 18 08:11:12.510: INFO: (0) /api/v1/nodes/node1:10250/proxy/logs/:
+anaconda/
+audit/
+boot.log
+[identical anaconda/ audit/ boot.log listing repeated for the remaining proxy requests]
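Each request above goes through the apiserver's node proxy subresource to the kubelet's log directory, which is why the response is a directory listing. A sketch of issuing one such request with client-go's REST client (kubeconfig path and node name taken from the log):

package main

import (
    "context"
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-656024001")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // GET /api/v1/nodes/node1:10250/proxy/logs/ -- the URL shown in the log line above.
    data, err := cs.CoreV1().RESTClient().Get().
        Resource("nodes").
        Name("node1:10250"). // node name with the kubelet port made explicit
        SubResource("proxy").
        Suffix("logs/").
        DoRaw(context.TODO())
    if err != nil {
        panic(err)
    }
    fmt.Printf("%s", data) // prints the anaconda/ audit/ boot.log listing
}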
+[log truncated here: the remainder of the proxy test, its summary block, and the start of the next test are missing]
+------------------------------
+[sig-apps] Daemon set [Serial]
+  should rollback without unnecessary restarts [Conformance]
+  /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+>>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename daemonsets
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-daemonsets-9n6zh
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
+[It] should rollback without unnecessary restarts [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 18 08:11:26.552: INFO: Requires at least 2 nodes (not -1)
+[AfterEach] [sig-apps] Daemon set [Serial]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
+Jun 18 08:11:26.561: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-9n6zh/daemonsets","resourceVersion":"13546360"},"items":null}
+
+Jun 18 08:11:26.563: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-9n6zh/pods","resourceVersion":"13546360"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:11:26.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-daemonsets-9n6zh" for this suite.
+Jun 18 08:11:38.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:11:40.579: INFO: namespace: e2e-tests-daemonsets-9n6zh, resource: bindings, ignored listing per whitelist
+Jun 18 08:11:40.643: INFO: namespace e2e-tests-daemonsets-9n6zh deletion completed in 14.059719541s
+
+S [SKIPPING] [15.066 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+ should rollback without unnecessary restarts [Conformance] [It]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+
+ Jun 18 08:11:26.552: Requires at least 2 nodes (not -1)
+
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
+------------------------------
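The rollback test was skipped: the framework computed -1 schedulable nodes where it needs at least 2, so the scenario never ran. For reference, a sketch of the kind of DaemonSet it would exercise; with the RollingUpdate strategy, `kubectl rollout undo daemonset/<name>` restores the previous template, and the test asserts that pods already matching the restored template are not needlessly restarted (all names here are illustrative):

package main

import (
    "context"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-656024001")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    labels := map[string]string{"app": "daemon-set-demo"}
    ds := &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set-demo"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                // RollingUpdate is what makes an in-place rollback possible.
                Type: appsv1.RollingUpdateDaemonSetStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:    "app",
                        Image:   "busybox",
                        Command: []string{"sleep", "3600"},
                    }},
                },
            },
        },
    }
    if _, err := cs.AppsV1().DaemonSets("default").Create(context.TODO(), ds, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}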
+SSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap
+ should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected configMap
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:11:40.643: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-fpn9m
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name projected-configmap-test-volume-map-b43fc292-91a0-11e9-bbf5-0e74dabf3615
+STEP: Creating a pod to test consume configMaps
+Jun 18 08:11:42.602: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b4c9f1e0-91a0-11e9-bbf5-0e74dabf3615" in namespace "e2e-tests-projected-fpn9m" to be "success or failure"
+Jun 18 08:11:42.609: INFO: Pod "pod-projected-configmaps-b4c9f1e0-91a0-11e9-bbf5-0e74dabf3615": Phase="Pending", Reason="", readiness=false. Elapsed: 6.918594ms
+Jun 18 08:11:44.720: INFO: Pod "pod-projected-configmaps-b4c9f1e0-91a0-11e9-bbf5-0e74dabf3615": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11807929s
+Jun 18 08:11:47.546: INFO: Pod "pod-projected-configmaps-b4c9f1e0-91a0-11e9-bbf5-0e74dabf3615": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.943771642s
+STEP: Saw pod success
+Jun 18 08:11:47.546: INFO: Pod "pod-projected-configmaps-b4c9f1e0-91a0-11e9-bbf5-0e74dabf3615" satisfied condition "success or failure"
+Jun 18 08:11:47.567: INFO: Trying to get logs from node node5 pod pod-projected-configmaps-b4c9f1e0-91a0-11e9-bbf5-0e74dabf3615 container projected-configmap-volume-test:
+STEP: delete the pod
+Jun 18 08:11:48.550: INFO: Waiting for pod pod-projected-configmaps-b4c9f1e0-91a0-11e9-bbf5-0e74dabf3615 to disappear
+Jun 18 08:11:48.559: INFO: Pod pod-projected-configmaps-b4c9f1e0-91a0-11e9-bbf5-0e74dabf3615 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:11:48.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-fpn9m" for this suite.
+Jun 18 08:11:56.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:11:56.724: INFO: namespace: e2e-tests-projected-fpn9m, resource: bindings, ignored listing per whitelist
+Jun 18 08:11:56.934: INFO: namespace e2e-tests-projected-fpn9m deletion completed in 8.349422853s
+
+• [SLOW TEST:16.291 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
+ should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
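"With mappings as non-root" means the projected ConfigMap volume remaps a key to a custom path via items, and the pod runs under a non-root UID. A sketch of both pieces (the UID, key, and path are illustrative):

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-656024001")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    uid := int64(1000) // illustrative non-root UID
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
        Spec: corev1.PodSpec{
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            RestartPolicy:   corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "projected-configmap-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "cat /etc/projected/path/to/data"},
                VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/projected"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "cfg",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-demo"},
                                // The "mapping": key data-1 appears at path/to/data instead of data-1.
                                Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data"}},
                            },
                        }},
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}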
+SSSSSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial]
+ validates resource limits of pods that are allowed to run [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:11:56.934: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename sched-pred
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-sched-pred-zhl9d
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
+Jun 18 08:11:57.511: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Jun 18 08:11:57.518: INFO: Waiting for terminating namespaces to be deleted...
+Jun 18 08:11:57.520: INFO:
+Logging pods the kubelet thinks is on node node1 before test
+Jun 18 08:11:57.542: INFO: qce-postgres-stolon-sentinel-b6bcb4448-gch5x from qce started at 2019-06-04 11:39:26 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container stolon ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: prometheus-operator-prometheus-node-exporter-jd657 from kube-system started at 2019-05-16 08:39:36 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container node-exporter ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: qce-etcd-5665b647b-cjlnd from qce started at 2019-06-04 11:39:26 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container qce-etcd-etcd ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: mongorsdata-operator-54b67c6cc5-fh4r4 from qiniu-mongors started at 2019-06-04 11:39:26 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container mongors-operator ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: csi-rbd-ceph-csi-rbd-nodeplugin-r97x2 from default started at 2019-05-14 08:47:33 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container csi-rbdplugin ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: Container driver-registrar ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: qce-authzhook-deploy-75cbd8bc4b-wd28x from qce started at 2019-05-14 10:16:10 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container qce-authzhook ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: prometheus-prometheus-operator-prometheus-0 from kube-system started at 2019-06-15 09:23:36 +0000 UTC (3 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container prometheus ready: true, restart count 1
+Jun 18 08:11:57.542: INFO: Container prometheus-config-reloader ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: Container rules-configmap-reloader ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: csirbd-demo-pod from default started at 2019-05-14 08:50:23 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container web-server ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: redisdata-operator-cdd96dd96-mxcw6 from qiniu-redis started at 2019-06-04 11:39:27 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container redis-operator ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: qce-mongo-deploy-65f555f54f-2td5v from qce started at 2019-06-04 11:39:26 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container qce-mongo ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: kube-proxy-4kq5g from kube-system started at 2019-05-14 05:39:01 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container kube-proxy ready: true, restart count 2
+Jun 18 08:11:57.542: INFO: calico-node-87wc8 from kube-system started at 2019-05-14 06:16:49 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container calico-node ready: true, restart count 2
+Jun 18 08:11:57.542: INFO: Container install-cni ready: true, restart count 2
+Jun 18 08:11:57.542: INFO: csi-cephfs-ceph-csi-cephfs-provisioner-0 from default started at 2019-05-14 08:47:42 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container csi-cephfsplugin ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: Container csi-provisioner ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: qce-postgres-stolon-keeper-1 from qce started at 2019-05-14 09:40:52 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container stolon ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: alert-dispatcher-58d448f9c9-t5npr from kube-system started at 2019-06-04 11:39:26 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container alert-dispatcher ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: csi-cephfs-ceph-csi-cephfs-nodeplugin-2smn4 from default started at 2019-05-14 08:47:42 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container csi-cephfsplugin ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: Container driver-registrar ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: logkit-poc-dk8x2 from kube-system started at 2019-05-17 03:17:51 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container logkit-poc ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: alertmanager-prometheus-operator-alertmanager-1 from kube-system started at 2019-06-15 05:36:24 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.542: INFO: Container alertmanager ready: true, restart count 0
+Jun 18 08:11:57.542: INFO: Container config-reloader ready: true, restart count 0
+Jun 18 08:11:57.542: INFO:
+Logging pods the kubelet thinks is on node node2 before test
+Jun 18 08:11:57.557: INFO: logkit-poc-cgpj8 from kube-system started at 2019-05-17 03:17:51 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.557: INFO: Container logkit-poc ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: kube-proxy-hm6bg from kube-system started at 2019-05-14 05:39:31 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.557: INFO: Container kube-proxy ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: alert-controller-568fb6794d-f9vhm from kube-system started at 2019-06-14 01:20:22 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.557: INFO: Container alert-controller ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: qce-postgres-stolon-keeper-0 from qce started at 2019-06-14 23:07:51 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.557: INFO: Container stolon ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: redis-operator-b7597fc6c-fhsq9 from qiniu-redis started at 2019-06-06 05:55:00 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.557: INFO: Container redis-operator ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: kibana-58f596b5d4-gprzs from kube-system started at 2019-06-09 10:42:30 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.557: INFO: Container kibana ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: csi-rbd-ceph-csi-rbd-provisioner-0 from default started at 2019-06-15 04:42:56 +0000 UTC (3 container statuses recorded)
+Jun 18 08:11:57.557: INFO: Container csi-provisioner ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: Container csi-rbdplugin ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: Container csi-snapshotter ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: prometheus-operator-prometheus-node-exporter-ctlvb from kube-system started at 2019-05-16 08:39:36 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.557: INFO: Container node-exporter ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: calico-kube-controllers-5ffbcb76cf-km64s from kube-system started at 2019-06-06 06:34:55 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.557: INFO: Container calico-kube-controllers ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: qce-clair-6f69f7554d-2hpxb from qce started at 2019-06-08 07:24:41 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.557: INFO: Container clair ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: csi-cephfs-ceph-csi-cephfs-attacher-0 from default started at 2019-05-14 08:47:42 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.557: INFO: Container csi-cephfsplugin-attacher ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: rabbitmq-operator-845b85b447-qx5nm from qiniu-rabbitmq started at 2019-06-15 05:32:51 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.557: INFO: Container rabbitmq-operator ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: calico-node-vfj4h from kube-system started at 2019-05-14 06:16:49 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.557: INFO: Container calico-node ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: Container install-cni ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: csi-cephfs-ceph-csi-cephfs-nodeplugin-c2hjw from default started at 2019-05-14 08:47:42 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.557: INFO: Container csi-cephfsplugin ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: Container driver-registrar ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: alert-apiserver-5f887ff458-dcdcn from kube-system started at 2019-06-13 06:50:57 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.557: INFO: Container alert-apiserver ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: csi-rbd-ceph-csi-rbd-nodeplugin-mncbd from default started at 2019-05-14 08:47:33 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.557: INFO: Container csi-rbdplugin ready: true, restart count 0
+Jun 18 08:11:57.557: INFO: Container driver-registrar ready: true, restart count 0
+Jun 18 08:11:57.557: INFO:
+Logging pods the kubelet thinks is on node node3 before test
+Jun 18 08:11:57.569: INFO: prometheus-operator-prometheus-node-exporter-84pmd from kube-system started at 2019-05-16 08:39:36 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.569: INFO: Container node-exporter ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: qce-jenkins-0 from qce started at 2019-06-16 18:40:16 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.569: INFO: Container qce-jenkins ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: csi-rbd-ceph-csi-rbd-nodeplugin-gxvpm from default started at 2019-05-14 08:47:33 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.569: INFO: Container csi-rbdplugin ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: Container driver-registrar ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: logkit-poc-znzg2 from kube-system started at 2019-06-18 06:27:20 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.569: INFO: Container logkit-poc ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: kube-proxy-tc77p from kube-system started at 2019-05-14 05:38:50 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.569: INFO: Container kube-proxy ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: calico-node-mzvzv from kube-system started at 2019-05-14 06:16:49 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.569: INFO: Container calico-node ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: Container install-cni ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: mongors-operator-65df599b-wjs4w from qiniu-mongors started at 2019-06-04 11:39:27 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.569: INFO: Container mongors-operator ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: qce-portal-deploy-6d799f79df-5lsgc from qce started at 2019-06-17 04:26:28 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.569: INFO: Container qce-portal ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: tiller-deploy-555696dfc8-gvznf from kube-system started at 2019-05-14 08:33:12 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.569: INFO: Container tiller ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: csi-cephfs-ceph-csi-cephfs-nodeplugin-tnz48 from default started at 2019-05-14 08:47:42 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.569: INFO: Container csi-cephfsplugin ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: Container driver-registrar ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: qce-postgres-stolon-sentinel-b6bcb4448-c4nmj from qce started at 2019-05-14 09:40:16 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.569: INFO: Container stolon ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: qce-postgres-stolon-proxy-78b9bc58d8-pg92h from qce started at 2019-05-14 09:40:16 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.569: INFO: Container stolon ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: prometheus-operator-prometheus-blackbox-exporter-5d4cbbf54vzmk6 from kube-system started at 2019-05-16 08:39:36 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.569: INFO: Container blackbox-exporter ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: Container configmap-reload ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: prometheus-operator-kube-state-metrics-969f69894-p5bbm from kube-system started at 2019-05-16 08:39:36 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.569: INFO: Container kube-state-metrics ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: prometheus-operator-grafana-86b99c77dd-cmbdv from kube-system started at 2019-05-16 08:39:36 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.569: INFO: Container grafana ready: true, restart count 0
+Jun 18 08:11:57.569: INFO: Container grafana-sc-dashboard ready: true, restart count 39
+Jun 18 08:11:57.569: INFO: alert-apiserver-etcd-6d744f7648-llfwf from kube-system started at 2019-06-13 06:49:42 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.569: INFO: Container alert-apiserver-etcd ready: true, restart count 0
+Jun 18 08:11:57.569: INFO:
+Logging pods the kubelet thinks is on node node4 before test
+Jun 18 08:11:57.578: INFO: csi-rbd-ceph-csi-rbd-nodeplugin-q2jtp from default started at 2019-06-16 19:51:06 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.578: INFO: Container csi-rbdplugin ready: true, restart count 0
+Jun 18 08:11:57.578: INFO: Container driver-registrar ready: true, restart count 0
+Jun 18 08:11:57.578: INFO: kirk-apiserver-doc-6b5f8c7dd8-lm2pv from qce started at 2019-06-18 05:42:55 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.578: INFO: Container kirk-apiserver-doc ready: true, restart count 0
+Jun 18 08:11:57.578: INFO: logkit-poc-7shgm from kube-system started at 2019-06-16 19:36:14 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.578: INFO: Container logkit-poc ready: true, restart count 0
+Jun 18 08:11:57.578: INFO: csi-cephfs-ceph-csi-cephfs-nodeplugin-7cg42 from default started at 2019-06-16 19:50:32 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.578: INFO: Container csi-cephfsplugin ready: true, restart count 0
+Jun 18 08:11:57.578: INFO: Container driver-registrar ready: true, restart count 0
+Jun 18 08:11:57.578: INFO: mysqldata-operator-6f447687b6-qdkt8 from qiniu-mysql started at 2019-06-18 03:17:07 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.578: INFO: Container mysql-operator ready: true, restart count 0
+Jun 18 08:11:57.578: INFO: prometheus-operator-prometheus-node-exporter-f2zgm from kube-system started at 2019-06-16 19:39:12 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.578: INFO: Container node-exporter ready: true, restart count 0
+Jun 18 08:11:57.578: INFO: kube-proxy-2vsgc from kube-system started at 2019-06-16 19:50:32 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.578: INFO: Container kube-proxy ready: true, restart count 0
+Jun 18 08:11:57.578: INFO: mysql-operator-v2-645fcc7f6c-l9dtm from qiniu-mysql started at 2019-06-18 03:19:36 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.578: INFO: Container mysql-operator ready: true, restart count 0
+Jun 18 08:11:57.578: INFO: elasticsearch-c5cc84d5f-ctdmq from kube-system started at 2019-06-18 06:26:40 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.578: INFO: Container elasticsearch ready: true, restart count 0
+Jun 18 08:11:57.578: INFO: Container es-rotate ready: true, restart count 0
+Jun 18 08:11:57.578: INFO: calico-node-fhsvk from kube-system started at 2019-06-16 19:53:03 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.578: INFO: Container calico-node ready: true, restart count 0
+Jun 18 08:11:57.578: INFO: Container install-cni ready: true, restart count 0
+Jun 18 08:11:57.578: INFO:
+Logging pods the kubelet thinks is on node node5 before test
+Jun 18 08:11:57.587: INFO: calico-node-fmzrt from kube-system started at 2019-05-14 06:16:49 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.587: INFO: Container calico-node ready: true, restart count 0
+Jun 18 08:11:57.587: INFO: Container install-cni ready: true, restart count 0
+Jun 18 08:11:57.587: INFO: csi-cephfs-ceph-csi-cephfs-nodeplugin-jfmbb from default started at 2019-05-14 08:47:42 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.587: INFO: Container csi-cephfsplugin ready: true, restart count 0
+Jun 18 08:11:57.587: INFO: Container driver-registrar ready: true, restart count 0
+Jun 18 08:11:57.587: INFO: qce-postgres-stolon-proxy-78b9bc58d8-8pp2x from qce started at 2019-05-14 09:40:16 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.587: INFO: Container stolon ready: true, restart count 0
+Jun 18 08:11:57.587: INFO: onetimeurl-controller-745fc87d5d-g58jg from qce started at 2019-05-14 10:16:10 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.587: INFO: Container onetimeurl-controller ready: true, restart count 0
+Jun 18 08:11:57.587: INFO: logkit-poc-5z5cm from kube-system started at 2019-05-17 03:17:51 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.587: INFO: Container logkit-poc ready: true, restart count 0
+Jun 18 08:11:57.587: INFO: csi-rbd-ceph-csi-rbd-nodeplugin-42fl8 from default started at 2019-05-14 08:47:33 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.587: INFO: Container csi-rbdplugin ready: true, restart count 0
+Jun 18 08:11:57.587: INFO: Container driver-registrar ready: true, restart count 0
+Jun 18 08:11:57.587: INFO: csi-rbd-ceph-csi-rbd-attacher-0 from default started at 2019-05-14 08:47:33 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.587: INFO: Container csi-rbdplugin-attacher ready: true, restart count 0
+Jun 18 08:11:57.587: INFO: sonobuoy from heptio-sonobuoy started at 2019-06-18 07:13:06 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.587: INFO: Container kube-sonobuoy ready: true, restart count 0
+Jun 18 08:11:57.587: INFO: alertmanager-prometheus-operator-alertmanager-0 from kube-system started at 2019-05-16 08:39:44 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.587: INFO: Container alertmanager ready: true, restart count 0
+Jun 18 08:11:57.587: INFO: Container config-reloader ready: true, restart count 0
+Jun 18 08:11:57.587: INFO: sonobuoy-e2e-job-2b96015867f64622 from heptio-sonobuoy started at 2019-06-18 07:13:12 +0000 UTC (2 container statuses recorded)
+Jun 18 08:11:57.587: INFO: Container e2e ready: true, restart count 0
+Jun 18 08:11:57.587: INFO: Container sonobuoy-worker ready: true, restart count 0
+Jun 18 08:11:57.587: INFO: qce-user-manual-deploy-867778f667-dcl87 from qce started at 2019-05-27 12:26:46 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.587: INFO: Container qce-user-manual ready: true, restart count 0
+Jun 18 08:11:57.587: INFO: prometheus-operator-prometheus-node-exporter-9g6lb from kube-system started at 2019-05-16 08:39:36 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.587: INFO: Container node-exporter ready: true, restart count 0
+Jun 18 08:11:57.588: INFO: prometheus-prometheus-operator-prometheus-1 from kube-system started at 2019-06-13 11:42:12 +0000 UTC (3 container statuses recorded)
+Jun 18 08:11:57.588: INFO: Container prometheus ready: true, restart count 0
+Jun 18 08:11:57.588: INFO: Container prometheus-config-reloader ready: true, restart count 0
+Jun 18 08:11:57.588: INFO: Container rules-configmap-reloader ready: true, restart count 0
+Jun 18 08:11:57.588: INFO: alert-dispatcher-58d448f9c9-4mxgj from kube-system started at 2019-06-15 12:19:08 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.588: INFO: Container alert-dispatcher ready: true, restart count 0
+Jun 18 08:11:57.588: INFO: kube-proxy-lqpj7 from kube-system started at 2019-05-14 05:38:48 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.588: INFO: Container kube-proxy ready: true, restart count 0
+Jun 18 08:11:57.588: INFO: qce-postgres-stolon-sentinel-b6bcb4448-jbrkl from qce started at 2019-05-14 09:40:16 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.588: INFO: Container stolon ready: true, restart count 0
+Jun 18 08:11:57.588: INFO: prometheus-operator-operator-654b9d4648-lflhd from kube-system started at 2019-05-16 08:39:36 +0000 UTC (1 container statuses recorded)
+Jun 18 08:11:57.588: INFO: Container prometheus-operator ready: true, restart count 0
+[It] validates resource limits of pods that are allowed to run [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: verifying the node has the label node node1
+STEP: verifying the node has the label node node2
+STEP: verifying the node has the label node node3
+STEP: verifying the node has the label node node4
+STEP: verifying the node has the label node node5
+Jun 18 08:11:57.652: INFO: Pod csi-cephfs-ceph-csi-cephfs-attacher-0 requesting resource cpu=0m on Node node2
+Jun 18 08:11:57.652: INFO: Pod csi-cephfs-ceph-csi-cephfs-nodeplugin-2smn4 requesting resource cpu=0m on Node node1
+Jun 18 08:11:57.652: INFO: Pod csi-cephfs-ceph-csi-cephfs-nodeplugin-7cg42 requesting resource cpu=0m on Node node4
+Jun 18 08:11:57.652: INFO: Pod csi-cephfs-ceph-csi-cephfs-nodeplugin-c2hjw requesting resource cpu=0m on Node node2
+Jun 18 08:11:57.652: INFO: Pod csi-cephfs-ceph-csi-cephfs-nodeplugin-jfmbb requesting resource cpu=0m on Node node5
+Jun 18 08:11:57.652: INFO: Pod csi-cephfs-ceph-csi-cephfs-nodeplugin-tnz48 requesting resource cpu=0m on Node node3
+Jun 18 08:11:57.652: INFO: Pod csi-cephfs-ceph-csi-cephfs-provisioner-0 requesting resource cpu=0m on Node node1
+Jun 18 08:11:57.652: INFO: Pod csi-rbd-ceph-csi-rbd-attacher-0 requesting resource cpu=0m on Node node5
+Jun 18 08:11:57.652: INFO: Pod csi-rbd-ceph-csi-rbd-nodeplugin-42fl8 requesting resource cpu=0m on Node node5
+Jun 18 08:11:57.652: INFO: Pod csi-rbd-ceph-csi-rbd-nodeplugin-gxvpm requesting resource cpu=0m on Node node3
+Jun 18 08:11:57.652: INFO: Pod csi-rbd-ceph-csi-rbd-nodeplugin-mncbd requesting resource cpu=0m on Node node2
+Jun 18 08:11:57.652: INFO: Pod csi-rbd-ceph-csi-rbd-nodeplugin-q2jtp requesting resource cpu=0m on Node node4
+Jun 18 08:11:57.652: INFO: Pod csi-rbd-ceph-csi-rbd-nodeplugin-r97x2 requesting resource cpu=0m on Node node1
+Jun 18 08:11:57.652: INFO: Pod csi-rbd-ceph-csi-rbd-provisioner-0 requesting resource cpu=0m on Node node2
+Jun 18 08:11:57.652: INFO: Pod csirbd-demo-pod requesting resource cpu=0m on Node node1
+Jun 18 08:11:57.652: INFO: Pod sonobuoy requesting resource cpu=0m on Node node5
+Jun 18 08:11:57.652: INFO: Pod sonobuoy-e2e-job-2b96015867f64622 requesting resource cpu=0m on Node node5
+Jun 18 08:11:57.652: INFO: Pod alert-apiserver-5f887ff458-dcdcn requesting resource cpu=0m on Node node2
+Jun 18 08:11:57.652: INFO: Pod alert-apiserver-etcd-6d744f7648-llfwf requesting resource cpu=0m on Node node3
+Jun 18 08:11:57.652: INFO: Pod alert-controller-568fb6794d-f9vhm requesting resource cpu=200m on Node node2
+Jun 18 08:11:57.652: INFO: Pod alert-dispatcher-58d448f9c9-4mxgj requesting resource cpu=200m on Node node5
+Jun 18 08:11:57.652: INFO: Pod alert-dispatcher-58d448f9c9-t5npr requesting resource cpu=200m on Node node1
+Jun 18 08:11:57.652: INFO: Pod alertmanager-prometheus-operator-alertmanager-0 requesting resource cpu=5m on Node node5
+Jun 18 08:11:57.652: INFO: Pod alertmanager-prometheus-operator-alertmanager-1 requesting resource cpu=5m on Node node1
+Jun 18 08:11:57.652: INFO: Pod calico-kube-controllers-5ffbcb76cf-km64s requesting resource cpu=0m on Node node2
+Jun 18 08:11:57.652: INFO: Pod calico-node-87wc8 requesting resource cpu=250m on Node node1
+Jun 18 08:11:57.652: INFO: Pod calico-node-fhsvk requesting resource cpu=250m on Node node4
+Jun 18 08:11:57.652: INFO: Pod calico-node-fmzrt requesting resource cpu=250m on Node node5
+Jun 18 08:11:57.652: INFO: Pod calico-node-mzvzv requesting resource cpu=250m on Node node3
+Jun 18 08:11:57.652: INFO: Pod calico-node-vfj4h requesting resource cpu=250m on Node node2
+Jun 18 08:11:57.652: INFO: Pod elasticsearch-c5cc84d5f-ctdmq requesting resource cpu=2000m on Node node4
+Jun 18 08:11:57.652: INFO: Pod kibana-58f596b5d4-gprzs requesting resource cpu=100m on Node node2
+Jun 18 08:11:57.652: INFO: Pod kube-proxy-2vsgc requesting resource cpu=0m on Node node4
+Jun 18 08:11:57.652: INFO: Pod kube-proxy-4kq5g requesting resource cpu=0m on Node node1
+Jun 18 08:11:57.652: INFO: Pod kube-proxy-hm6bg requesting resource cpu=0m on Node node2
+Jun 18 08:11:57.652: INFO: Pod kube-proxy-lqpj7 requesting resource cpu=0m on Node node5
+Jun 18 08:11:57.652: INFO: Pod kube-proxy-tc77p requesting resource cpu=0m on Node node3
+Jun 18 08:11:57.652: INFO: Pod logkit-poc-5z5cm requesting resource cpu=512m on Node node5
+Jun 18 08:11:57.652: INFO: Pod logkit-poc-7shgm requesting resource cpu=512m on Node node4
+Jun 18 08:11:57.652: INFO: Pod logkit-poc-cgpj8 requesting resource cpu=512m on Node node2
+Jun 18 08:11:57.652: INFO: Pod logkit-poc-dk8x2 requesting resource cpu=512m on Node node1
+Jun 18 08:11:57.652: INFO: Pod logkit-poc-znzg2 requesting resource cpu=512m on Node node3
+Jun 18 08:11:57.652: INFO: Pod prometheus-operator-grafana-86b99c77dd-cmbdv requesting resource cpu=1050m on Node node3
+Jun 18 08:11:57.652: INFO: Pod prometheus-operator-kube-state-metrics-969f69894-p5bbm requesting resource cpu=100m on Node node3
+Jun 18 08:11:57.652: INFO: Pod prometheus-operator-operator-654b9d4648-lflhd requesting resource cpu=100m on Node node5
+Jun 18 08:11:57.652: INFO: Pod prometheus-operator-prometheus-blackbox-exporter-5d4cbbf54vzmk6 requesting resource cpu=200m on Node node3
+Jun 18 08:11:57.652: INFO: Pod prometheus-operator-prometheus-node-exporter-84pmd requesting resource cpu=100m on Node node3
+Jun 18 08:11:57.652: INFO: Pod prometheus-operator-prometheus-node-exporter-9g6lb requesting resource cpu=100m on Node node5
+Jun 18 08:11:57.652: INFO: Pod prometheus-operator-prometheus-node-exporter-ctlvb requesting resource cpu=100m on Node node2
+Jun 18 08:11:57.652: INFO: Pod prometheus-operator-prometheus-node-exporter-f2zgm requesting resource cpu=100m on Node node4
+Jun 18 08:11:57.652: INFO: Pod prometheus-operator-prometheus-node-exporter-jd657 requesting resource cpu=100m on Node node1
+Jun 18 08:11:57.652: INFO: Pod prometheus-prometheus-operator-prometheus-0 requesting resource cpu=20m on Node node1
+Jun 18 08:11:57.652: INFO: Pod prometheus-prometheus-operator-prometheus-1 requesting resource cpu=20m on Node node5
+Jun 18 08:11:57.652: INFO: Pod tiller-deploy-555696dfc8-gvznf requesting resource cpu=0m on Node node3
+Jun 18 08:11:57.652: INFO: Pod kirk-apiserver-doc-6b5f8c7dd8-lm2pv requesting resource cpu=0m on Node node4
+Jun 18 08:11:57.652: INFO: Pod onetimeurl-controller-745fc87d5d-g58jg requesting resource cpu=0m on Node node5
+Jun 18 08:11:57.652: INFO: Pod qce-authzhook-deploy-75cbd8bc4b-wd28x requesting resource cpu=500m on Node node1
+Jun 18 08:11:57.652: INFO: Pod qce-clair-6f69f7554d-2hpxb requesting resource cpu=100m on Node node2
+Jun 18 08:11:57.652: INFO: Pod qce-etcd-5665b647b-cjlnd requesting resource cpu=0m on Node node1
+Jun 18 08:11:57.652: INFO: Pod qce-jenkins-0 requesting resource cpu=0m on Node node3
+Jun 18 08:11:57.652: INFO: Pod qce-mongo-deploy-65f555f54f-2td5v requesting resource cpu=0m on Node node1
+Jun 18 08:11:57.652: INFO: Pod qce-portal-deploy-6d799f79df-5lsgc requesting resource cpu=0m on Node node3
+Jun 18 08:11:57.652: INFO: Pod qce-postgres-stolon-keeper-0 requesting resource cpu=0m on Node node2
+Jun 18 08:11:57.652: INFO: Pod qce-postgres-stolon-keeper-1 requesting resource cpu=0m on Node node1
+Jun 18 08:11:57.652: INFO: Pod qce-postgres-stolon-proxy-78b9bc58d8-8pp2x requesting resource cpu=0m on Node node5
+Jun 18 08:11:57.652: INFO: Pod qce-postgres-stolon-proxy-78b9bc58d8-pg92h requesting resource cpu=0m on Node node3
+Jun 18 08:11:57.652: INFO: Pod qce-postgres-stolon-sentinel-b6bcb4448-c4nmj requesting resource cpu=0m on Node node3
+Jun 18 08:11:57.652: INFO: Pod qce-postgres-stolon-sentinel-b6bcb4448-gch5x requesting resource cpu=0m on Node node1
+Jun 18 08:11:57.652: INFO: Pod qce-postgres-stolon-sentinel-b6bcb4448-jbrkl requesting resource cpu=0m on Node node5
+Jun 18 08:11:57.652: INFO: Pod qce-user-manual-deploy-867778f667-dcl87 requesting resource cpu=0m on Node node5
+Jun 18 08:11:57.652: INFO: Pod mongors-operator-65df599b-wjs4w requesting resource cpu=300m on Node node3
+Jun 18 08:11:57.652: INFO: Pod mongorsdata-operator-54b67c6cc5-fh4r4 requesting resource cpu=300m on Node node1
+Jun 18 08:11:57.652: INFO: Pod mysql-operator-v2-645fcc7f6c-l9dtm requesting resource cpu=300m on Node node4
+Jun 18 08:11:57.652: INFO: Pod mysqldata-operator-6f447687b6-qdkt8 requesting resource cpu=300m on Node node4
+Jun 18 08:11:57.652: INFO: Pod rabbitmq-operator-845b85b447-qx5nm requesting resource cpu=300m on Node node2
+Jun 18 08:11:57.652: INFO: Pod redis-operator-b7597fc6c-fhsq9 requesting resource cpu=300m on Node node2
+Jun 18 08:11:57.652: INFO: Pod redisdata-operator-cdd96dd96-mxcw6 requesting resource cpu=300m on Node node1
+STEP: Starting Pods to consume most of the cluster CPU.
+STEP: Creating another pod that requires unavailable amount of CPU.
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcb6021-91a0-11e9-bbf5-0e74dabf3615.15a93cba8548ee21], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-zhl9d/filler-pod-bdcb6021-91a0-11e9-bbf5-0e74dabf3615 to node1]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcb6021-91a0-11e9-bbf5-0e74dabf3615.15a93cbac1673828], Reason = [Pulling], Message = [pulling image "reg.kpaas.io/pause:3.1"]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcb6021-91a0-11e9-bbf5-0e74dabf3615.15a93cbaffc7758e], Reason = [Pulled], Message = [Successfully pulled image "reg.kpaas.io/pause:3.1"]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcb6021-91a0-11e9-bbf5-0e74dabf3615.15a93cbb00cff87f], Reason = [Created], Message = [Created container]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcb6021-91a0-11e9-bbf5-0e74dabf3615.15a93cbb05753027], Reason = [Started], Message = [Started container]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcc229d-91a0-11e9-bbf5-0e74dabf3615.15a93cba85c2a670], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-zhl9d/filler-pod-bdcc229d-91a0-11e9-bbf5-0e74dabf3615 to node2]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcc229d-91a0-11e9-bbf5-0e74dabf3615.15a93cbac2206d11], Reason = [Pulling], Message = [pulling image "reg.kpaas.io/pause:3.1"]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcc229d-91a0-11e9-bbf5-0e74dabf3615.15a93cbb061929d3], Reason = [Pulled], Message = [Successfully pulled image "reg.kpaas.io/pause:3.1"]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcc229d-91a0-11e9-bbf5-0e74dabf3615.15a93cbb075f6f09], Reason = [Created], Message = [Created container]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcc229d-91a0-11e9-bbf5-0e74dabf3615.15a93cbb0b35e7f5], Reason = [Started], Message = [Started container]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcc9808-91a0-11e9-bbf5-0e74dabf3615.15a93cba85c178d4], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-zhl9d/filler-pod-bdcc9808-91a0-11e9-bbf5-0e74dabf3615 to node3]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcc9808-91a0-11e9-bbf5-0e74dabf3615.15a93cbabf630f53], Reason = [Pulling], Message = [pulling image "reg.kpaas.io/pause:3.1"]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcc9808-91a0-11e9-bbf5-0e74dabf3615.15a93cbb796c6856], Reason = [Pulled], Message = [Successfully pulled image "reg.kpaas.io/pause:3.1"]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcc9808-91a0-11e9-bbf5-0e74dabf3615.15a93cbb7aa02bcd], Reason = [Created], Message = [Created container]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcc9808-91a0-11e9-bbf5-0e74dabf3615.15a93cbb7f57ca3a], Reason = [Started], Message = [Started container]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcd0b57-91a0-11e9-bbf5-0e74dabf3615.15a93cba85e49833], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-zhl9d/filler-pod-bdcd0b57-91a0-11e9-bbf5-0e74dabf3615 to node4]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcd0b57-91a0-11e9-bbf5-0e74dabf3615.15a93cbabfa9f19e], Reason = [Pulling], Message = [pulling image "reg.kpaas.io/pause:3.1"]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcd0b57-91a0-11e9-bbf5-0e74dabf3615.15a93cbb743b4ac0], Reason = [Pulled], Message = [Successfully pulled image "reg.kpaas.io/pause:3.1"]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcd0b57-91a0-11e9-bbf5-0e74dabf3615.15a93cbb756d3060], Reason = [Created], Message = [Created container]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcd0b57-91a0-11e9-bbf5-0e74dabf3615.15a93cbb79e8e93d], Reason = [Started], Message = [Started container]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcdde02-91a0-11e9-bbf5-0e74dabf3615.15a93cba86536680], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-zhl9d/filler-pod-bdcdde02-91a0-11e9-bbf5-0e74dabf3615 to node5]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcdde02-91a0-11e9-bbf5-0e74dabf3615.15a93cbabfbd7f10], Reason = [Pulled], Message = [Container image "reg.kpaas.io/pause:3.1" already present on machine]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcdde02-91a0-11e9-bbf5-0e74dabf3615.15a93cbac1966bfb], Reason = [Created], Message = [Created container]
+STEP: Considering event:
+Type = [Normal], Name = [filler-pod-bdcdde02-91a0-11e9-bbf5-0e74dabf3615.15a93cbac603fa38], Reason = [Started], Message = [Started container]
+STEP: Considering event:
+Type = [Warning], Name = [additional-pod.15a93cbbedab88b4], Reason = [FailedScheduling], Message = [0/6 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 5 Insufficient cpu.]
+STEP: removing the label node off the node node3
+STEP: verifying the node doesn't have the label node
+STEP: removing the label node off the node node4
+STEP: verifying the node doesn't have the label node
+STEP: removing the label node off the node node5
+STEP: verifying the node doesn't have the label node
+STEP: removing the label node off the node node1
+STEP: verifying the node doesn't have the label node
+STEP: removing the label node off the node node2
+STEP: verifying the node doesn't have the label node
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:12:05.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-sched-pred-zhl9d" for this suite.
+Jun 18 08:12:13.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:12:14.662: INFO: namespace: e2e-tests-sched-pred-zhl9d, resource: bindings, ignored listing per whitelist
+Jun 18 08:12:14.670: INFO: namespace e2e-tests-sched-pred-zhl9d deletion completed in 9.015770992s
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
+
+• [SLOW TEST:17.736 seconds]
+[sig-scheduling] SchedulerPredicates [Serial]
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
+ validates resource limits of pods that are allowed to run [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
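+The spec above saturates each node's free CPU with pause-image filler pods, then confirms one more pod is rejected with the Insufficient cpu Warning seen in the events. A minimal sketch of the same assertion with the Python kubernetes client; the 600m request and the event-polling loop are illustrative, not lifted from the test source:
+
+    from kubernetes import client, config
+
+    config.load_kube_config()  # e.g. /tmp/kubeconfig-656024001
+    v1 = client.CoreV1Api()
+    ns = "e2e-tests-sched-pred-zhl9d"
+
+    # A pod requesting more CPU than any node has free should stay Pending.
+    pod = client.V1Pod(
+        metadata=client.V1ObjectMeta(name="additional-pod"),
+        spec=client.V1PodSpec(containers=[client.V1Container(
+            name="pause",
+            image="reg.kpaas.io/pause:3.1",
+            resources=client.V1ResourceRequirements(requests={"cpu": "600m"}),  # illustrative
+        )]),
+    )
+    v1.create_namespaced_pod(ns, pod)
+
+    # The scheduler records the failure as a Warning event on the pod.
+    for ev in v1.list_namespaced_event(ns).items:
+        if ev.involved_object.name == "additional-pod" and ev.reason == "FailedScheduling":
+            print(ev.message)  # e.g. "0/6 nodes are available: ... Insufficient cpu."
+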
+SSSSSSSSS
+------------------------------
+[k8s.io] KubeletManagedEtcHosts
+ should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] KubeletManagedEtcHosts
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:12:14.670: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-e2e-kubelet-etc-hosts-tlk8c
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Setting up the test
+STEP: Creating hostNetwork=false pod
+STEP: Creating hostNetwork=true pod
+STEP: Running the test
+STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
+Jun 18 08:12:19.553: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tlk8c PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 18 08:12:19.553: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+Jun 18 08:12:19.654: INFO: Exec stderr: ""
+Jun 18 08:12:19.654: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tlk8c PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 18 08:12:19.654: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+Jun 18 08:12:20.511: INFO: Exec stderr: ""
+Jun 18 08:12:20.511: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tlk8c PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 18 08:12:20.511: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+Jun 18 08:12:20.590: INFO: Exec stderr: ""
+Jun 18 08:12:20.590: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tlk8c PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 18 08:12:20.590: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+Jun 18 08:12:21.509: INFO: Exec stderr: ""
+STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
+Jun 18 08:12:21.509: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tlk8c PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 18 08:12:21.509: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+Jun 18 08:12:21.591: INFO: Exec stderr: ""
+Jun 18 08:12:21.591: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tlk8c PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 18 08:12:21.591: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+Jun 18 08:12:21.665: INFO: Exec stderr: ""
+STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
+Jun 18 08:12:21.665: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tlk8c PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 18 08:12:21.665: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+Jun 18 08:12:21.748: INFO: Exec stderr: ""
+Jun 18 08:12:21.748: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tlk8c PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 18 08:12:21.748: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+Jun 18 08:12:21.819: INFO: Exec stderr: ""
+Jun 18 08:12:21.820: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tlk8c PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 18 08:12:21.820: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+Jun 18 08:12:22.511: INFO: Exec stderr: ""
+Jun 18 08:12:22.511: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tlk8c PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 18 08:12:22.511: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+Jun 18 08:12:22.592: INFO: Exec stderr: ""
+[AfterEach] [k8s.io] KubeletManagedEtcHosts
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:12:22.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-tlk8c" for this suite.
+Jun 18 08:13:12.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:13:12.730: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-tlk8c, resource: bindings, ignored listing per whitelist
+Jun 18 08:13:12.918: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-tlk8c deletion completed in 50.321948327s
+
+• [SLOW TEST:58.248 seconds]
+[k8s.io] KubeletManagedEtcHosts
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+ should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
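+The ExecWithOptions lines above compare /etc/hosts against /etc/hosts-original in each container. Roughly the same check can be driven through the Python client's exec stream; pod and container names follow the log, while the marker-string check is an assumption about the header comment the kubelet writes into the file it manages:
+
+    from kubernetes import client, config
+    from kubernetes.stream import stream
+
+    config.load_kube_config()
+    v1 = client.CoreV1Api()
+
+    # Read /etc/hosts inside a running container, mirroring ExecWithOptions.
+    hosts = stream(
+        v1.connect_get_namespaced_pod_exec,
+        "test-pod", "e2e-tests-e2e-kubelet-etc-hosts-tlk8c",
+        container="busybox-1",
+        command=["cat", "/etc/hosts"],
+        stdin=False, stdout=True, stderr=True, tty=False,
+    )
+    # A hostNetwork=true pod (test-host-network-pod) shows the node's own
+    # /etc/hosts instead, so this marker is absent there.
+    print("kubelet-managed?", "Kubernetes-managed hosts file" in hosts)
+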
+SSS
+------------------------------
+[sig-api-machinery] Garbage collector
+ should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-api-machinery] Garbage collector
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:13:12.918: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename gc
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-gc-8dfpd
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: create the deployment
+STEP: Wait for the Deployment to create new ReplicaSet
+STEP: delete the deployment
+STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
+STEP: Gathering metrics
+W0618 08:13:44.557667 16 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
+Jun 18 08:13:44.557: INFO: For apiserver_request_count:
+For apiserver_request_latencies_summary:
+For etcd_helper_cache_entry_count:
+For etcd_helper_cache_hit_count:
+For etcd_helper_cache_miss_count:
+For etcd_request_cache_add_latencies_summary:
+For etcd_request_cache_get_latencies_summary:
+For etcd_request_latencies_summary:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:13:44.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-gc-8dfpd" for this suite.
+Jun 18 08:13:52.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:13:52.623: INFO: namespace: e2e-tests-gc-8dfpd, resource: bindings, ignored listing per whitelist
+Jun 18 08:13:53.518: INFO: namespace e2e-tests-gc-8dfpd deletion completed in 8.9576611s
+
+• [SLOW TEST:40.600 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+ should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
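+The behavior under test is the deleteOptions.PropagationPolicy field: deleting the Deployment with Orphan must leave its ReplicaSet behind, which is why the spec waits 30 seconds and then checks that the garbage collector did not cascade. A sketch with the Python client; the deployment name is illustrative:
+
+    from kubernetes import client, config
+
+    config.load_kube_config()
+    apps = client.AppsV1Api()
+    ns = "e2e-tests-gc-8dfpd"
+
+    # An orphaning delete removes the Deployment but keeps its dependents.
+    apps.delete_namespaced_deployment(
+        "simpletest-deployment", ns,  # name is illustrative
+        body=client.V1DeleteOptions(propagation_policy="Orphan"),
+    )
+    rs_list = apps.list_namespaced_replica_set(ns)
+    assert rs_list.items, "ReplicaSet should survive an orphaning delete"
+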
+SSSSSS
+------------------------------
+[sig-storage] ConfigMap
+ updates should be reflected in volume [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] ConfigMap
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:13:53.519: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename configmap
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-configmap-5gb2j
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] updates should be reflected in volume [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name configmap-test-upd-02f7354c-91a1-11e9-bbf5-0e74dabf3615
+STEP: Creating the pod
+STEP: Updating configmap configmap-test-upd-02f7354c-91a1-11e9-bbf5-0e74dabf3615
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] ConfigMap
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:15:16.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-configmap-5gb2j" for this suite.
+Jun 18 08:15:42.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:15:43.664: INFO: namespace: e2e-tests-configmap-5gb2j, resource: bindings, ignored listing per whitelist
+Jun 18 08:15:45.589: INFO: namespace e2e-tests-configmap-5gb2j deletion completed in 28.766089894s
+
+• [SLOW TEST:112.071 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
+ updates should be reflected in volume [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
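+Most of this spec's 112 seconds is the "waiting to observe update in volume" step: configMap volumes are refreshed by the kubelet on its sync period, so an update to the object only eventually shows up in the mounted file. The update half of the test reduces to a single patch; key and value here are illustrative:
+
+    from kubernetes import client, config
+
+    config.load_kube_config()
+    v1 = client.CoreV1Api()
+
+    # Patch the configMap; the pod's mounted file changes on the kubelet's
+    # next volume sync rather than immediately.
+    v1.patch_namespaced_config_map(
+        "configmap-test-upd", "e2e-tests-configmap-5gb2j",  # name is illustrative
+        body={"data": {"data-1": "value-2"}},
+    )
+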
+SS
+------------------------------
+[sig-storage] EmptyDir wrapper volumes
+ should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] EmptyDir wrapper volumes
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:15:45.590: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename emptydir-wrapper
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-emptydir-wrapper-k9q29
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating 50 configmaps
+STEP: Creating RC which spawns configmap-volume pods
+Jun 18 08:15:47.601: INFO: Pod name wrapped-volume-race-46d38815-91a1-11e9-bbf5-0e74dabf3615: Found 0 pods out of 5
+Jun 18 08:15:52.606: INFO: Pod name wrapped-volume-race-46d38815-91a1-11e9-bbf5-0e74dabf3615: Found 5 pods out of 5
+STEP: Ensuring each pod is running
+STEP: deleting ReplicationController wrapped-volume-race-46d38815-91a1-11e9-bbf5-0e74dabf3615 in namespace e2e-tests-emptydir-wrapper-k9q29, will wait for the garbage collector to delete the pods
+Jun 18 08:17:48.694: INFO: Deleting ReplicationController wrapped-volume-race-46d38815-91a1-11e9-bbf5-0e74dabf3615 took: 7.098196ms
+Jun 18 08:17:48.795: INFO: Terminating ReplicationController wrapped-volume-race-46d38815-91a1-11e9-bbf5-0e74dabf3615 pods took: 100.254057ms
+STEP: Creating RC which spawns configmap-volume pods
+Jun 18 08:18:28.907: INFO: Pod name wrapped-volume-race-a6fe3c80-91a1-11e9-bbf5-0e74dabf3615: Found 0 pods out of 5
+Jun 18 08:18:34.525: INFO: Pod name wrapped-volume-race-a6fe3c80-91a1-11e9-bbf5-0e74dabf3615: Found 5 pods out of 5
+STEP: Ensuring each pod is running
+STEP: deleting ReplicationController wrapped-volume-race-a6fe3c80-91a1-11e9-bbf5-0e74dabf3615 in namespace e2e-tests-emptydir-wrapper-k9q29, will wait for the garbage collector to delete the pods
+Jun 18 08:21:04.643: INFO: Deleting ReplicationController wrapped-volume-race-a6fe3c80-91a1-11e9-bbf5-0e74dabf3615 took: 6.412831ms
+Jun 18 08:21:05.649: INFO: Terminating ReplicationController wrapped-volume-race-a6fe3c80-91a1-11e9-bbf5-0e74dabf3615 pods took: 1.005807481s
+STEP: Creating RC which spawns configmap-volume pods
+Jun 18 08:21:52.863: INFO: Pod name wrapped-volume-race-208f3d2e-91a2-11e9-bbf5-0e74dabf3615: Found 0 pods out of 5
+Jun 18 08:21:57.868: INFO: Pod name wrapped-volume-race-208f3d2e-91a2-11e9-bbf5-0e74dabf3615: Found 5 pods out of 5
+STEP: Ensuring each pod is running
+STEP: deleting ReplicationController wrapped-volume-race-208f3d2e-91a2-11e9-bbf5-0e74dabf3615 in namespace e2e-tests-emptydir-wrapper-k9q29, will wait for the garbage collector to delete the pods
+Jun 18 08:24:11.967: INFO: Deleting ReplicationController wrapped-volume-race-208f3d2e-91a2-11e9-bbf5-0e74dabf3615 took: 5.100789ms
+Jun 18 08:24:12.567: INFO: Terminating ReplicationController wrapped-volume-race-208f3d2e-91a2-11e9-bbf5-0e74dabf3615 pods took: 600.298722ms
+STEP: Cleaning up the configMaps
+[AfterEach] [sig-storage] EmptyDir wrapper volumes
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:25:11.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-emptydir-wrapper-k9q29" for this suite.
+Jun 18 08:25:21.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:25:21.800: INFO: namespace: e2e-tests-emptydir-wrapper-k9q29, resource: bindings, ignored listing per whitelist
+Jun 18 08:25:21.980: INFO: namespace e2e-tests-emptydir-wrapper-k9q29 deletion completed in 10.339565097s
+
+• [SLOW TEST:576.391 seconds]
+[sig-storage] EmptyDir wrapper volumes
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+ should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
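+Each RC here spawns pods that mount a large number of configMap volumes at once, which historically raced inside the emptyDir wrapper backing atomic-writer volumes. A sketch of building such a pod spec with the Python client; the volume count and names are illustrative:
+
+    from kubernetes import client
+
+    # Many configMap volumes on one pod, the shape that used to trigger
+    # the wrapper-volume race this spec guards against.
+    volumes, mounts = [], []
+    for i in range(50):
+        volumes.append(client.V1Volume(
+            name=f"racey-configmap-{i}",
+            config_map=client.V1ConfigMapVolumeSource(name=f"racey-configmap-{i}"),
+        ))
+        mounts.append(client.V1VolumeMount(
+            name=f"racey-configmap-{i}", mount_path=f"/etc/config-{i}"))
+
+    pod_spec = client.V1PodSpec(
+        containers=[client.V1Container(
+            name="test-container", image="reg.kpaas.io/pause:3.1",
+            volume_mounts=mounts)],
+        volumes=volumes,
+    )
+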
+SSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume
+ should provide container's memory limit [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Downward API volume
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:25:21.980: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename downward-api
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-downward-api-7hmlg
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide container's memory limit [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 18 08:25:22.663: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d9d6290-91a2-11e9-bbf5-0e74dabf3615" in namespace "e2e-tests-downward-api-7hmlg" to be "success or failure"
+Jun 18 08:25:22.665: INFO: Pod "downwardapi-volume-9d9d6290-91a2-11e9-bbf5-0e74dabf3615": Phase="Pending", Reason="", readiness=false. Elapsed: 2.396111ms
+Jun 18 08:25:24.670: INFO: Pod "downwardapi-volume-9d9d6290-91a2-11e9-bbf5-0e74dabf3615": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007477968s
+Jun 18 08:25:26.675: INFO: Pod "downwardapi-volume-9d9d6290-91a2-11e9-bbf5-0e74dabf3615": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012255303s
+STEP: Saw pod success
+Jun 18 08:25:26.675: INFO: Pod "downwardapi-volume-9d9d6290-91a2-11e9-bbf5-0e74dabf3615" satisfied condition "success or failure"
+Jun 18 08:25:26.678: INFO: Trying to get logs from node node5 pod downwardapi-volume-9d9d6290-91a2-11e9-bbf5-0e74dabf3615 container client-container:
+STEP: delete the pod
+Jun 18 08:25:26.694: INFO: Waiting for pod downwardapi-volume-9d9d6290-91a2-11e9-bbf5-0e74dabf3615 to disappear
+Jun 18 08:25:26.695: INFO: Pod downwardapi-volume-9d9d6290-91a2-11e9-bbf5-0e74dabf3615 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:25:26.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-downward-api-7hmlg" for this suite.
+Jun 18 08:25:35.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:25:35.569: INFO: namespace: e2e-tests-downward-api-7hmlg, resource: bindings, ignored listing per whitelist
+Jun 18 08:25:36.524: INFO: namespace e2e-tests-downward-api-7hmlg deletion completed in 9.825513427s
+
+• [SLOW TEST:14.543 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+ should provide container's memory limit [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
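+The downwardapi-volume pod succeeds because the file it mounts is populated from the container's own memory limit via resourceFieldRef. A sketch of the relevant container and volume objects; the image, paths, and 64Mi limit are illustrative:
+
+    from kubernetes import client
+
+    container = client.V1Container(
+        name="client-container",
+        image="busybox",  # assumed; the log does not show the test image
+        command=["sh", "-c", "cat /etc/podinfo/mem_limit"],
+        resources=client.V1ResourceRequirements(limits={"memory": "64Mi"}),
+        volume_mounts=[client.V1VolumeMount(name="podinfo", mount_path="/etc/podinfo")],
+    )
+    volume = client.V1Volume(
+        name="podinfo",
+        downward_api=client.V1DownwardAPIVolumeSource(items=[
+            client.V1DownwardAPIVolumeFile(
+                path="mem_limit",
+                resource_field_ref=client.V1ResourceFieldRef(
+                    container_name="client-container", resource="limits.memory"),
+            ),
+        ]),
+    )
+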
+SSSSSSSSSSS
+------------------------------
+[sig-storage] Projected secret
+ should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected secret
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:25:36.524: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-rmhqp
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating projection with secret that has name projected-secret-test-a60116f3-91a2-11e9-bbf5-0e74dabf3615
+STEP: Creating a pod to test consume secrets
+Jun 18 08:25:36.741: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a6018609-91a2-11e9-bbf5-0e74dabf3615" in namespace "e2e-tests-projected-rmhqp" to be "success or failure"
+Jun 18 08:25:36.744: INFO: Pod "pod-projected-secrets-a6018609-91a2-11e9-bbf5-0e74dabf3615": Phase="Pending", Reason="", readiness=false. Elapsed: 2.806115ms
+Jun 18 08:25:38.746: INFO: Pod "pod-projected-secrets-a6018609-91a2-11e9-bbf5-0e74dabf3615": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005090453s
+Jun 18 08:25:40.749: INFO: Pod "pod-projected-secrets-a6018609-91a2-11e9-bbf5-0e74dabf3615": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007667552s
+STEP: Saw pod success
+Jun 18 08:25:40.749: INFO: Pod "pod-projected-secrets-a6018609-91a2-11e9-bbf5-0e74dabf3615" satisfied condition "success or failure"
+Jun 18 08:25:40.751: INFO: Trying to get logs from node node5 pod pod-projected-secrets-a6018609-91a2-11e9-bbf5-0e74dabf3615 container projected-secret-volume-test:
+STEP: delete the pod
+Jun 18 08:25:40.771: INFO: Waiting for pod pod-projected-secrets-a6018609-91a2-11e9-bbf5-0e74dabf3615 to disappear
+Jun 18 08:25:40.777: INFO: Pod pod-projected-secrets-a6018609-91a2-11e9-bbf5-0e74dabf3615 no longer exists
+[AfterEach] [sig-storage] Projected secret
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:25:40.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-rmhqp" for this suite.
+Jun 18 08:25:51.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:25:51.569: INFO: namespace: e2e-tests-projected-rmhqp, resource: bindings, ignored listing per whitelist
+Jun 18 08:25:52.516: INFO: namespace e2e-tests-projected-rmhqp deletion completed in 11.733463413s
+
+• [SLOW TEST:15.992 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
+ should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
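+defaultMode on a projected volume sets the permission bits on every projected file, which is what the consuming container verifies here. The volume under test looks roughly like this; the 0400 mode and secret name are illustrative:
+
+    from kubernetes import client
+
+    volume = client.V1Volume(
+        name="projected-secret-volume",
+        projected=client.V1ProjectedVolumeSource(
+            default_mode=0o400,  # illustrative
+            sources=[client.V1VolumeProjection(
+                secret=client.V1SecretProjection(name="projected-secret-test"),
+            )],
+        ),
+    )
+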
+[k8s.io] Pods
+ should support remote command execution over websockets [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Pods
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:25:52.516: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename pods
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-pods-zfc64
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
+[It] should support remote command execution over websockets [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 18 08:25:53.510: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+[AfterEach] [k8s.io] Pods
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:25:59.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-pods-zfc64" for this suite.
+Jun 18 08:26:43.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:26:44.560: INFO: namespace: e2e-tests-pods-zfc64, resource: bindings, ignored listing per whitelist
+Jun 18 08:26:44.691: INFO: namespace e2e-tests-pods-zfc64 deletion completed in 45.163851506s
+
+• [SLOW TEST:52.175 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+ should support remote command execution over websockets [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
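+Exec over websockets is the same exec subresource used by ExecWithOptions, spoken over the channel-multiplexed WebSocket protocol instead of SPDY. The Python client's stream() helper can expose that connection directly; the pod name and command are illustrative, and _preload_content=False returning a live WSClient is an assumption about the client library:
+
+    from kubernetes import client, config
+    from kubernetes.stream import stream
+
+    config.load_kube_config()
+    v1 = client.CoreV1Api()
+
+    ws = stream(
+        v1.connect_get_namespaced_pod_exec,
+        "pod-exec-websocket", "e2e-tests-pods-zfc64",  # pod name is illustrative
+        command=["echo", "remote execution test"],
+        stdin=False, stdout=True, stderr=True, tty=False,
+        _preload_content=False,  # keep the raw websocket connection
+    )
+    ws.run_forever(timeout=10)  # pump the socket until the command exits
+    print(ws.read_all())
+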
+SSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Probing container
+ should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Probing container
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:26:44.691: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename container-probe
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-container-probe-5tp54
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
+[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-5tp54
+Jun 18 08:26:49.694: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-5tp54
+STEP: checking the pod's current state and verifying that restartCount is present
+Jun 18 08:26:49.698: INFO: Initial restart count of pod liveness-http is 0
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:30:51.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-container-probe-5tp54" for this suite.
+Jun 18 08:30:59.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:30:59.631: INFO: namespace: e2e-tests-container-probe-5tp54, resource: bindings, ignored listing per whitelist
+Jun 18 08:31:00.575: INFO: namespace e2e-tests-container-probe-5tp54 deletion completed in 9.007439236s
+
+• [SLOW TEST:255.884 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+ should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
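+The liveness-http pod serves a /healthz endpoint that keeps answering 200, so the kubelet must never restart it; the four-minute gap in the log is the test repeatedly re-reading restartCount. A sketch of such a probe; the port, thresholds, and image are illustrative:
+
+    from kubernetes import client
+
+    probe = client.V1Probe(
+        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
+        initial_delay_seconds=15,
+        timeout_seconds=5,
+        failure_threshold=3,  # values are illustrative
+    )
+    container = client.V1Container(
+        name="liveness",
+        image="k8s.gcr.io/liveness",  # assumed; the log does not show the image
+        liveness_probe=probe,
+    )
+    # The assertion then amounts to:
+    #   pod = v1.read_namespaced_pod("liveness-http", ns)
+    #   assert pod.status.container_statuses[0].restart_count == 0
+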
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected secret
+ should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected secret
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:31:00.575: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-q2tlf
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating projection with secret that has name projected-secret-test-map-6844418b-91a3-11e9-bbf5-0e74dabf3615
+STEP: Creating a pod to test consume secrets
+Jun 18 08:31:02.671: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-68468b7d-91a3-11e9-bbf5-0e74dabf3615" in namespace "e2e-tests-projected-q2tlf" to be "success or failure"
+Jun 18 08:31:02.679: INFO: Pod "pod-projected-secrets-68468b7d-91a3-11e9-bbf5-0e74dabf3615": Phase="Pending", Reason="", readiness=false. Elapsed: 8.152342ms
+Jun 18 08:31:04.682: INFO: Pod "pod-projected-secrets-68468b7d-91a3-11e9-bbf5-0e74dabf3615": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011386029s
+Jun 18 08:31:07.594: INFO: Pod "pod-projected-secrets-68468b7d-91a3-11e9-bbf5-0e74dabf3615": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.923076645s
+STEP: Saw pod success
+Jun 18 08:31:07.594: INFO: Pod "pod-projected-secrets-68468b7d-91a3-11e9-bbf5-0e74dabf3615" satisfied condition "success or failure"
+Jun 18 08:31:07.603: INFO: Trying to get logs from node node5 pod pod-projected-secrets-68468b7d-91a3-11e9-bbf5-0e74dabf3615 container projected-secret-volume-test:
+STEP: delete the pod
+Jun 18 08:31:07.626: INFO: Waiting for pod pod-projected-secrets-68468b7d-91a3-11e9-bbf5-0e74dabf3615 to disappear
+Jun 18 08:31:07.628: INFO: Pod pod-projected-secrets-68468b7d-91a3-11e9-bbf5-0e74dabf3615 no longer exists
+[AfterEach] [sig-storage] Projected secret
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:31:07.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-q2tlf" for this suite.
+Jun 18 08:31:17.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:31:19.612: INFO: namespace: e2e-tests-projected-q2tlf, resource: bindings, ignored listing per whitelist
+Jun 18 08:31:20.595: INFO: namespace e2e-tests-projected-q2tlf deletion completed in 12.959895093s
+
+• [SLOW TEST:20.020 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
+ should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
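+"with mappings" means the secret's keys are remapped to different file paths through items/KeyToPath instead of being written under their own names. The projected volume then looks roughly like this; the key and path names are illustrative:
+
+    from kubernetes import client
+
+    volume = client.V1Volume(
+        name="projected-secret-volume",
+        projected=client.V1ProjectedVolumeSource(sources=[
+            client.V1VolumeProjection(secret=client.V1SecretProjection(
+                name="projected-secret-test-map",  # illustrative
+                items=[client.V1KeyToPath(key="data-1", path="new-path-data-1")],
+            )),
+        ]),
+    )
+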
+SSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI
+ should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected downwardAPI
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:31:20.595: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename projected
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-projected-n895h
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 18 08:31:21.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73959401-91a3-11e9-bbf5-0e74dabf3615" in namespace "e2e-tests-projected-n895h" to be "success or failure"
+Jun 18 08:31:21.652: INFO: Pod "downwardapi-volume-73959401-91a3-11e9-bbf5-0e74dabf3615": Phase="Pending", Reason="", readiness=false. Elapsed: 8.363958ms
+Jun 18 08:31:24.567: INFO: Pod "downwardapi-volume-73959401-91a3-11e9-bbf5-0e74dabf3615": Phase="Pending", Reason="", readiness=false. Elapsed: 2.923456008s
+Jun 18 08:31:26.572: INFO: Pod "downwardapi-volume-73959401-91a3-11e9-bbf5-0e74dabf3615": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.928525539s
+STEP: Saw pod success
+Jun 18 08:31:26.572: INFO: Pod "downwardapi-volume-73959401-91a3-11e9-bbf5-0e74dabf3615" satisfied condition "success or failure"
+Jun 18 08:31:26.576: INFO: Trying to get logs from node node5 pod downwardapi-volume-73959401-91a3-11e9-bbf5-0e74dabf3615 container client-container:
+STEP: delete the pod
+Jun 18 08:31:26.598: INFO: Waiting for pod downwardapi-volume-73959401-91a3-11e9-bbf5-0e74dabf3615 to disappear
+Jun 18 08:31:26.600: INFO: Pod downwardapi-volume-73959401-91a3-11e9-bbf5-0e74dabf3615 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 18 08:31:26.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-n895h" for this suite.
+Jun 18 08:31:36.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 18 08:31:37.519: INFO: namespace: e2e-tests-projected-n895h, resource: bindings, ignored listing per whitelist
+Jun 18 08:31:37.543: INFO: namespace e2e-tests-projected-n895h deletion completed in 10.938451159s
+
+• [SLOW TEST:16.948 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+ should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
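+With no memory limit set on the container, the downward API substitutes the node's allocatable memory, which the test compares against the mounted file. Reading the same value directly; node5 is where the log shows the pod landed:
+
+    from kubernetes import client, config
+
+    config.load_kube_config()
+    v1 = client.CoreV1Api()
+
+    # Node allocatable memory is the fallback value for limits.memory when
+    # the container itself declares no limit.
+    node = v1.read_node("node5")
+    print(node.status.allocatable["memory"])
+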
+SS
+------------------------------
+[sig-storage] Secrets
+ should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Secrets
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 18 08:31:37.543: INFO: >>> kubeConfig: /tmp/kubeconfig-656024001
+STEP: Building a namespace api object, basename secrets
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-tests-secrets-cpdkm
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+ /workspace/anago-v1.13.5-beta.0.54+2166946f41b36d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating secret with name secret-test-7e525e31-91a3-11e9-bbf5-0e74dabf3615
+STEP: Creating a pod to test consume secrets
+Jun 18 08:31:40.532: INFO: Waiting up to 5m0s for pod "pod-secrets-7e52ebfd-91a3-11e9-bbf5-0e74dabf3615" in namespace "e2e-tests-secrets-cpdkm" to be "success or failure"
+Jun 18 08:31:40.539: INFO: Pod "pod-secrets-7e52ebfd-91a3-11e9-bbf5-0e74dabf3615": Phase="Pending", Reason="", readiness=false. Elapsed: 6.89804ms
+Jun 18 08:31:42.542: INFO: Pod "pod-secrets-7e52ebfd-91a3-11e9-bbf5-0e74dabf3615": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009706389s
+Jun 18 08:31:44.553: INFO: Pod "pod-secrets-7e52ebfd-91a3-11e9-bbf5-0e74dabf3615": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020223526s
+STEP: Saw pod success
+Jun 18 08:31:44.553: INFO: Pod "pod-secrets-7e52ebfd-91a3-11e9-bbf5-0e74dabf3615" satisfied condition "success or failure"
+Jun 18 08:31:44.557: INFO: Trying to get logs from node node5 pod pod-secrets-7e52ebfd-91a3-11e9-bbf5-0e74dabf3615 container secret-volume-test: