diff --git a/v1.14/ovh/201906061539_sonobuoy_0f7ddfd4-9f8d-416b-a20d-5af2742ea35c.tar.gz b/v1.14/ovh/201906061539_sonobuoy_0f7ddfd4-9f8d-416b-a20d-5af2742ea35c.tar.gz
new file mode 100644
index 0000000000..ab0f04b868
Binary files /dev/null and b/v1.14/ovh/201906061539_sonobuoy_0f7ddfd4-9f8d-416b-a20d-5af2742ea35c.tar.gz differ
diff --git a/v1.14/ovh/PRODUCT.yaml b/v1.14/ovh/PRODUCT.yaml
new file mode 100644
index 0000000000..30cc691de6
--- /dev/null
+++ b/v1.14/ovh/PRODUCT.yaml
@@ -0,0 +1,8 @@
+vendor: OVH
+name: OVH Managed Kubernetes Service
+version: 1.0
+website_url: https://www.ovh.ie/kubernetes/
+documentation_url: https://docs.ovh.com/gb/en/kubernetes/
+product_logo_url: https://www.ovh.com/fr/images/logo/logo-ovh-long-baseline-blue.ai
+type: hosted platform
+description: 'Benefit from a free HA managed Kubernetes service by hosting your nodes and services on OVH Public Cloud'
\ No newline at end of file
diff --git a/v1.14/ovh/README.md b/v1.14/ovh/README.md
new file mode 100644
index 0000000000..f04cbb2c56
--- /dev/null
+++ b/v1.14/ovh/README.md
@@ -0,0 +1,63 @@
+# How to reproduce
+
+## 1. Create an account
+Create a European OVH Account on [http://www.ovh.ie/auth/signup/#/?ovhCompany=ovh&ovhSubsidiary=IE](http://www.ovh.ie/auth/signup/#/?ovhCompany=ovh&ovhSubsidiary=IE).
+
+## 2. Order a new free cluster
+Click on the "Get started for free" button on [https://www.ovh.ie/kubernetes/](https://www.ovh.ie/kubernetes/).
+If you don't have a Public Cloud project yet, you will need to create one first.
+Once you have created or chosen your project, in the left menu, under the "Orchestration / Industrialization" category, click on "Managed Kubernetes Service".
+Then click on "Create a cluster" and choose the appropriate version.
+
+## 3. Wait
+Wait approximately 2 minutes; once your cluster is ready, you will be redirected to your Kubernetes cluster list.
+
+## 4. 
Get credentials and add some nodes to your cluster
+From the previous UI, click on the cluster you just created and download the kubeconfig file from the bottom of the 'Service' tab.
+Then add some nodes from the 'Nodes' tab. We personally tested with 2 'B2-7' instances.
+
+## 5. Run the tests
+Download a [binary release](https://github.com/heptio/sonobuoy/releases) of the CLI, or build it yourself by running:
+
+```
+$ go get -u -v github.com/heptio/sonobuoy
+```
+
+Deploy a Sonobuoy pod to your cluster with:
+
+```
+$ sonobuoy run
+```
+
+View actively running pods:
+
+```
+$ sonobuoy status
+```
+
+To inspect the logs:
+
+```
+$ sonobuoy logs
+```
+
+Once `sonobuoy status` shows the run as `completed`, copy the output directory from the main Sonobuoy pod to
+a local directory:
+
+```
+$ sonobuoy retrieve .
+```
+
+This copies a single `.tar.gz` snapshot from the Sonobuoy pod into your local `.` directory. Extract the contents into `./results` with:
+
+```
+mkdir ./results; tar xzf *.tar.gz -C ./results
+```
+
+To clean up Kubernetes objects created by Sonobuoy, run:
+
+```
+sonobuoy delete
+```
+
+Have fun testing!
\ No newline at end of file diff --git a/v1.14/ovh/e2e.log b/v1.14/ovh/e2e.log new file mode 100644 index 0000000000..8e4638e00b --- /dev/null +++ b/v1.14/ovh/e2e.log @@ -0,0 +1,10752 @@ +I0606 15:39:31.756261 15 test_context.go:405] Using a temporary kubeconfig file from in-cluster config : /tmp/kubeconfig-489975799 +I0606 15:39:31.756507 15 e2e.go:240] Starting e2e run "46805cbb-8871-11e9-b3bf-0e7bbe1a64f6" on Ginkgo node 1 +Running Suite: Kubernetes e2e suite +=================================== +Random Seed: 1559835570 - Will randomize all specs +Will run 204 of 3585 specs + +Jun 6 15:39:31.920: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 15:39:31.922: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable +Jun 6 15:39:32.001: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready +Jun 6 15:39:32.063: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) +Jun 6 15:39:32.063: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. 
+Jun 6 15:39:32.063: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Jun 6 15:39:32.076: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'canal' (0 seconds elapsed) +Jun 6 15:39:32.076: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) +Jun 6 15:39:32.076: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'wormhole' (0 seconds elapsed) +Jun 6 15:39:32.076: INFO: e2e test version: v1.14.2 +Jun 6 15:39:32.079: INFO: kube-apiserver version: v1.14.2 +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:39:32.079: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename deployment +Jun 6 15:39:32.188: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
+STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] deployment should delete old replica sets [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 6 15:39:32.219: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Jun 6 15:39:37.229: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Jun 6 15:39:37.229: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Jun 6 15:39:37.285: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2589,SelfLink:/apis/apps/v1/namespaces/deployment-2589/deployments/test-cleanup-deployment,UID:4a66dc2b-8871-11e9-9995-4ad9032ea524,ResourceVersion:3958918791,Generation:1,CreationTimestamp:2019-06-06 15:39:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 
00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} + +Jun 6 15:39:37.300: INFO: New ReplicaSet "test-cleanup-deployment-55cbfbc8f5" of Deployment "test-cleanup-deployment": +&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55cbfbc8f5,GenerateName:,Namespace:deployment-2589,SelfLink:/apis/apps/v1/namespaces/deployment-2589/replicasets/test-cleanup-deployment-55cbfbc8f5,UID:4a6c7c38-8871-11e9-9995-4ad9032ea524,ResourceVersion:3958918854,Generation:1,CreationTimestamp:2019-06-06 
15:39:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55cbfbc8f5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 4a66dc2b-8871-11e9-9995-4ad9032ea524 0xc001c26f47 0xc001c26f48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55cbfbc8f5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55cbfbc8f5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 6 15:39:37.300: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": +Jun 6 15:39:37.300: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-2589,SelfLink:/apis/apps/v1/namespaces/deployment-2589/replicasets/test-cleanup-controller,UID:4764ae46-8871-11e9-9995-4ad9032ea524,ResourceVersion:3958918824,Generation:1,CreationTimestamp:2019-06-06 15:39:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 4a66dc2b-8871-11e9-9995-4ad9032ea524 0xc001c26e77 0xc001c26e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} +Jun 6 15:39:37.313: INFO: Pod "test-cleanup-controller-fwqvq" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-fwqvq,GenerateName:test-cleanup-controller-,Namespace:deployment-2589,SelfLink:/api/v1/namespaces/deployment-2589/pods/test-cleanup-controller-fwqvq,UID:4767ba58-8871-11e9-9995-4ad9032ea524,ResourceVersion:3958918722,Generation:0,CreationTimestamp:2019-06-06 15:39:32 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.1.29/32,},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 4764ae46-8871-11e9-9995-4ad9032ea524 0xc001c278f7 0xc001c278f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7bjxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7bjxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7bjxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c27960} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c27980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 15:39:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 15:39:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 15:39:35 +0000 UTC } {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2019-06-06 15:39:32 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:10.2.1.29,StartTime:2019-06-06 15:39:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-06 15:39:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://7825502b90c184a3b3e800988f3f3e22f13606a35b1d8100d8bf860dc3ebc076}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 15:39:37.314: INFO: Pod "test-cleanup-deployment-55cbfbc8f5-fbjpp" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55cbfbc8f5-fbjpp,GenerateName:test-cleanup-deployment-55cbfbc8f5-,Namespace:deployment-2589,SelfLink:/api/v1/namespaces/deployment-2589/pods/test-cleanup-deployment-55cbfbc8f5-fbjpp,UID:4a6e5a71-8871-11e9-9995-4ad9032ea524,ResourceVersion:3958918857,Generation:0,CreationTimestamp:2019-06-06 15:39:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55cbfbc8f5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55cbfbc8f5 4a6c7c38-8871-11e9-9995-4ad9032ea524 0xc001c27a67 0xc001c27a68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7bjxc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7bjxc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-7bjxc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c27ac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c27ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:39:37.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-2589" for this suite. 
+Jun 6 15:39:43.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:39:43.604: INFO: namespace deployment-2589 deletion completed in 6.278168284s + +• [SLOW TEST:11.525 seconds] +[sig-apps] Deployment +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + deployment should delete old replica sets [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:39:43.604: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 6 15:39:43.738: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-4e42ba34-8871-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-3028" to be "success or failure" +Jun 6 15:39:43.752: INFO: Pod "downwardapi-volume-4e42ba34-8871-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.797738ms +Jun 6 15:39:45.762: INFO: Pod "downwardapi-volume-4e42ba34-8871-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023275928s +Jun 6 15:39:47.770: INFO: Pod "downwardapi-volume-4e42ba34-8871-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032103449s +STEP: Saw pod success +Jun 6 15:39:47.771: INFO: Pod "downwardapi-volume-4e42ba34-8871-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 15:39:47.777: INFO: Trying to get logs from node cncf-2 pod downwardapi-volume-4e42ba34-8871-11e9-b3bf-0e7bbe1a64f6 container client-container: +STEP: delete the pod +Jun 6 15:39:47.823: INFO: Waiting for pod downwardapi-volume-4e42ba34-8871-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 15:39:47.830: INFO: Pod downwardapi-volume-4e42ba34-8871-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:39:47.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3028" for this suite. 
+Jun 6 15:39:53.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:39:54.184: INFO: namespace projected-3028 deletion completed in 6.340864679s + +• [SLOW TEST:10.580 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a read only busybox container + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:39:54.185: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Kubelet + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:39:58.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-6973" for this suite. +Jun 6 15:40:42.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:40:42.650: INFO: namespace kubelet-test-6973 deletion completed in 44.294060339s + +• [SLOW TEST:48.465 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when scheduling a read only busybox container + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:40:42.651: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] 
Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 6 15:40:42.766: INFO: Waiting up to 5m0s for pod "downwardapi-volume-717253ce-8871-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-4042" to be "success or failure" +Jun 6 15:40:42.772: INFO: Pod "downwardapi-volume-717253ce-8871-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.944682ms +Jun 6 15:40:44.780: INFO: Pod "downwardapi-volume-717253ce-8871-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014074577s +Jun 6 15:40:46.789: INFO: Pod "downwardapi-volume-717253ce-8871-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023069218s +STEP: Saw pod success +Jun 6 15:40:46.789: INFO: Pod "downwardapi-volume-717253ce-8871-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 15:40:46.802: INFO: Trying to get logs from node cncf-2 pod downwardapi-volume-717253ce-8871-11e9-b3bf-0e7bbe1a64f6 container client-container: +STEP: delete the pod +Jun 6 15:40:46.848: INFO: Waiting for pod downwardapi-volume-717253ce-8871-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 15:40:46.854: INFO: Pod downwardapi-volume-717253ce-8871-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:40:46.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4042" for this suite. +Jun 6 15:40:52.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:40:53.154: INFO: namespace projected-4042 deletion completed in 6.292943814s + +• [SLOW TEST:10.503 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:40:53.154: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 +[It] should invoke init containers on a RestartAlways pod [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +Jun 6 15:40:53.259: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:41:00.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-1903" for this suite. 
+Jun 6 15:41:20.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:41:20.763: INFO: namespace init-container-1903 deletion completed in 20.286247459s + +• [SLOW TEST:27.609 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:41:20.764: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward api env vars +Jun 6 15:41:20.891: INFO: Waiting up to 5m0s for pod "downward-api-882b0d86-8871-11e9-b3bf-0e7bbe1a64f6" in namespace "downward-api-1734" to be "success or failure" +Jun 6 15:41:20.898: INFO: Pod "downward-api-882b0d86-8871-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.444434ms +Jun 6 15:41:22.906: INFO: Pod "downward-api-882b0d86-8871-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014694533s +Jun 6 15:41:24.914: INFO: Pod "downward-api-882b0d86-8871-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022980983s +STEP: Saw pod success +Jun 6 15:41:24.914: INFO: Pod "downward-api-882b0d86-8871-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 15:41:24.921: INFO: Trying to get logs from node cncf-2 pod downward-api-882b0d86-8871-11e9-b3bf-0e7bbe1a64f6 container dapi-container: +STEP: delete the pod +Jun 6 15:41:24.957: INFO: Waiting for pod downward-api-882b0d86-8871-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 15:41:24.962: INFO: Pod downward-api-882b0d86-8871-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:41:24.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1734" for this suite. 
+Jun 6 15:41:30.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:41:31.236: INFO: namespace downward-api-1734 deletion completed in 6.266524069s + +• [SLOW TEST:10.472 seconds] +[sig-node] Downward API +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:41:31.236: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 +[It] should have an terminated reason [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:41:35.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-355" for this suite. +Jun 6 15:41:41.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:41:41.703: INFO: namespace kubelet-test-355 deletion completed in 6.296526289s + +• [SLOW TEST:10.467 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when scheduling a busybox command that always fails in a pod + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 + should have an terminated reason [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Container Runtime + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:41:41.704: INFO: >>> kubeConfig: 
/tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run with the expected status [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] +[AfterEach] [k8s.io] Container Runtime + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:42:13.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-925" for this suite. 
+Jun 6 15:42:19.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:42:20.006: INFO: namespace container-runtime-925 deletion completed in 6.284864542s + +• [SLOW TEST:38.302 seconds] +[k8s.io] Container Runtime +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + blackbox test + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 + when starting a container that exits + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 + should run with the expected status [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:42:20.007: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name secret-test-ab7caa08-8871-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume secrets +Jun 6 15:42:20.159: INFO: Waiting up to 5m0s for pod "pod-secrets-ab7eb55a-8871-11e9-b3bf-0e7bbe1a64f6" in namespace "secrets-2287" to be "success or failure" +Jun 6 15:42:20.169: INFO: Pod "pod-secrets-ab7eb55a-8871-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.284011ms +Jun 6 15:42:22.177: INFO: Pod "pod-secrets-ab7eb55a-8871-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017511365s +Jun 6 15:42:24.187: INFO: Pod "pod-secrets-ab7eb55a-8871-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027692182s +STEP: Saw pod success +Jun 6 15:42:24.187: INFO: Pod "pod-secrets-ab7eb55a-8871-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 15:42:24.195: INFO: Trying to get logs from node cncf-2 pod pod-secrets-ab7eb55a-8871-11e9-b3bf-0e7bbe1a64f6 container secret-volume-test: +STEP: delete the pod +Jun 6 15:42:24.231: INFO: Waiting for pod pod-secrets-ab7eb55a-8871-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 15:42:24.241: INFO: Pod pod-secrets-ab7eb55a-8871-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:42:24.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2287" for this suite. 
+Jun 6 15:42:32.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:42:32.526: INFO: namespace secrets-2287 deletion completed in 8.27531984s + +• [SLOW TEST:12.520 seconds] +[sig-storage] Secrets +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:42:32.527: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan pods created by rc if delete options say so [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods +STEP: Gathering metrics +W0606 15:43:12.894960 15 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. +Jun 6 15:43:12.895: INFO: For apiserver_request_total: +For apiserver_request_latencies_summary: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:43:12.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-4028" for this suite. 
+Jun 6 15:43:18.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:43:19.211: INFO: namespace gc-4028 deletion completed in 6.291549886s + +• [SLOW TEST:46.684 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should orphan pods created by rc if delete options say so [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:43:19.211: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a service in the namespace +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. 
+STEP: Recreating the namespace +STEP: Verifying there is no service in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:43:25.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-2379" for this suite. +Jun 6 15:43:31.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:43:32.235: INFO: namespace namespaces-2379 deletion completed in 6.301870053s +STEP: Destroying namespace "nsdeletetest-6232" for this suite. +Jun 6 15:43:32.242: INFO: Namespace nsdeletetest-6232 was already deleted +STEP: Destroying namespace "nsdeletetest-1018" for this suite. +Jun 6 15:43:38.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:43:38.510: INFO: namespace nsdeletetest-1018 deletion completed in 6.267650872s + +• [SLOW TEST:19.299 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Subpath + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:43:38.512: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod pod-subpath-test-secret-xwg7 +STEP: Creating a pod to test atomic-volume-subpath +Jun 6 15:43:38.652: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-xwg7" in namespace "subpath-7376" to be "success or failure" +Jun 6 15:43:38.659: INFO: Pod "pod-subpath-test-secret-xwg7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.497841ms +Jun 6 15:43:40.676: INFO: Pod "pod-subpath-test-secret-xwg7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024442471s +Jun 6 15:43:42.704: INFO: Pod "pod-subpath-test-secret-xwg7": Phase="Running", Reason="", readiness=true. Elapsed: 4.051776891s +Jun 6 15:43:44.714: INFO: Pod "pod-subpath-test-secret-xwg7": Phase="Running", Reason="", readiness=true. Elapsed: 6.06247917s +Jun 6 15:43:46.724: INFO: Pod "pod-subpath-test-secret-xwg7": Phase="Running", Reason="", readiness=true. Elapsed: 8.072000523s +Jun 6 15:43:48.735: INFO: Pod "pod-subpath-test-secret-xwg7": Phase="Running", Reason="", readiness=true. Elapsed: 10.083139993s +Jun 6 15:43:50.952: INFO: Pod "pod-subpath-test-secret-xwg7": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.300019399s +Jun 6 15:43:52.961: INFO: Pod "pod-subpath-test-secret-xwg7": Phase="Running", Reason="", readiness=true. Elapsed: 14.309450705s +Jun 6 15:43:54.971: INFO: Pod "pod-subpath-test-secret-xwg7": Phase="Running", Reason="", readiness=true. Elapsed: 16.318622551s +Jun 6 15:43:56.980: INFO: Pod "pod-subpath-test-secret-xwg7": Phase="Running", Reason="", readiness=true. Elapsed: 18.328069508s +Jun 6 15:43:58.989: INFO: Pod "pod-subpath-test-secret-xwg7": Phase="Running", Reason="", readiness=true. Elapsed: 20.337162033s +Jun 6 15:44:00.999: INFO: Pod "pod-subpath-test-secret-xwg7": Phase="Running", Reason="", readiness=true. Elapsed: 22.346714838s +Jun 6 15:44:03.007: INFO: Pod "pod-subpath-test-secret-xwg7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.354835105s +STEP: Saw pod success +Jun 6 15:44:03.007: INFO: Pod "pod-subpath-test-secret-xwg7" satisfied condition "success or failure" +Jun 6 15:44:03.012: INFO: Trying to get logs from node cncf-1 pod pod-subpath-test-secret-xwg7 container test-container-subpath-secret-xwg7: +STEP: delete the pod +Jun 6 15:44:03.071: INFO: Waiting for pod pod-subpath-test-secret-xwg7 to disappear +Jun 6 15:44:03.084: INFO: Pod pod-subpath-test-secret-xwg7 no longer exists +STEP: Deleting pod pod-subpath-test-secret-xwg7 +Jun 6 15:44:03.084: INFO: Deleting pod "pod-subpath-test-secret-xwg7" in namespace "subpath-7376" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:44:03.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-7376" for this suite. 
+Jun 6 15:44:09.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:44:09.372: INFO: namespace subpath-7376 deletion completed in 6.274328797s + +• [SLOW TEST:30.860 seconds] +[sig-storage] Subpath +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:44:09.373: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 +Jun 6 15:44:09.482: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jun 6 15:44:09.500: INFO: Waiting for terminating namespaces to be 
deleted... +Jun 6 15:44:09.507: INFO: +Logging pods the kubelet thinks is on node cncf-1 before test +Jun 6 15:44:09.530: INFO: wormhole-fr2gz from kube-system started at 2019-06-06 09:17:01 +0000 UTC (1 container statuses recorded) +Jun 6 15:44:09.530: INFO: Container wormhole ready: true, restart count 0 +Jun 6 15:44:09.530: INFO: sonobuoy-e2e-job-f3c5b85dde7b4d05 from heptio-sonobuoy started at 2019-06-06 15:39:28 +0000 UTC (2 container statuses recorded) +Jun 6 15:44:09.530: INFO: Container e2e ready: true, restart count 0 +Jun 6 15:44:09.530: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 6 15:44:09.530: INFO: sonobuoy from heptio-sonobuoy started at 2019-06-06 15:39:26 +0000 UTC (1 container statuses recorded) +Jun 6 15:44:09.531: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jun 6 15:44:09.531: INFO: canal-xbh5x from kube-system started at 2019-06-06 09:16:42 +0000 UTC (2 container statuses recorded) +Jun 6 15:44:09.531: INFO: Container calico-node ready: true, restart count 0 +Jun 6 15:44:09.531: INFO: Container kube-flannel ready: true, restart count 2 +Jun 6 15:44:09.531: INFO: sonobuoy-systemd-logs-daemon-set-100b28c213194052-nczqr from heptio-sonobuoy started at 2019-06-06 15:39:28 +0000 UTC (2 container statuses recorded) +Jun 6 15:44:09.531: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 6 15:44:09.531: INFO: Container systemd-logs ready: true, restart count 0 +Jun 6 15:44:09.531: INFO: kube-proxy-v8kjv from kube-system started at 2019-06-06 09:16:42 +0000 UTC (1 container statuses recorded) +Jun 6 15:44:09.531: INFO: Container kube-proxy ready: true, restart count 0 +Jun 6 15:44:09.531: INFO: kube-dns-868d878686-p5pfd from kube-system started at 2019-06-06 15:38:23 +0000 UTC (3 container statuses recorded) +Jun 6 15:44:09.531: INFO: Container dnsmasq ready: true, restart count 0 +Jun 6 15:44:09.531: INFO: Container kubedns ready: true, restart count 0 +Jun 6 15:44:09.531: INFO: Container sidecar 
ready: true, restart count 0 +Jun 6 15:44:09.531: INFO: +Logging pods the kubelet thinks is on node cncf-2 before test +Jun 6 15:44:09.550: INFO: metrics-server-7c89fd4f7b-bnwzl from kube-system started at 2019-06-06 09:16:13 +0000 UTC (1 container statuses recorded) +Jun 6 15:44:09.550: INFO: Container metrics-server ready: true, restart count 0 +Jun 6 15:44:09.550: INFO: sonobuoy-systemd-logs-daemon-set-100b28c213194052-z92mc from heptio-sonobuoy started at 2019-06-06 15:39:28 +0000 UTC (2 container statuses recorded) +Jun 6 15:44:09.550: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 6 15:44:09.550: INFO: Container systemd-logs ready: true, restart count 0 +Jun 6 15:44:09.550: INFO: kube-dns-autoscaler-6bfccfcbd4-cqwj2 from kube-system started at 2019-06-06 09:16:13 +0000 UTC (1 container statuses recorded) +Jun 6 15:44:09.551: INFO: Container autoscaler ready: true, restart count 0 +Jun 6 15:44:09.551: INFO: kube-dns-868d878686-vl7zl from kube-system started at 2019-06-06 09:16:13 +0000 UTC (3 container statuses recorded) +Jun 6 15:44:09.551: INFO: Container dnsmasq ready: true, restart count 0 +Jun 6 15:44:09.551: INFO: Container kubedns ready: true, restart count 0 +Jun 6 15:44:09.551: INFO: Container sidecar ready: true, restart count 0 +Jun 6 15:44:09.551: INFO: kube-proxy-7cmmv from kube-system started at 2019-06-06 09:15:52 +0000 UTC (1 container statuses recorded) +Jun 6 15:44:09.551: INFO: Container kube-proxy ready: true, restart count 0 +Jun 6 15:44:09.551: INFO: canal-t4msm from kube-system started at 2019-06-06 09:15:52 +0000 UTC (2 container statuses recorded) +Jun 6 15:44:09.551: INFO: Container calico-node ready: true, restart count 0 +Jun 6 15:44:09.551: INFO: Container kube-flannel ready: true, restart count 2 +Jun 6 15:44:09.551: INFO: wormhole-bsxhq from kube-system started at 2019-06-06 09:16:11 +0000 UTC (1 container statuses recorded) +Jun 6 15:44:09.551: INFO: Container wormhole ready: true, restart count 0 +[It] 
validates resource limits of pods that are allowed to run [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: verifying the node has the label node cncf-1 +STEP: verifying the node has the label node cncf-2 +Jun 6 15:44:09.638: INFO: Pod sonobuoy requesting resource cpu=0m on Node cncf-1 +Jun 6 15:44:09.638: INFO: Pod sonobuoy-e2e-job-f3c5b85dde7b4d05 requesting resource cpu=0m on Node cncf-1 +Jun 6 15:44:09.638: INFO: Pod sonobuoy-systemd-logs-daemon-set-100b28c213194052-nczqr requesting resource cpu=0m on Node cncf-1 +Jun 6 15:44:09.638: INFO: Pod sonobuoy-systemd-logs-daemon-set-100b28c213194052-z92mc requesting resource cpu=0m on Node cncf-2 +Jun 6 15:44:09.639: INFO: Pod canal-t4msm requesting resource cpu=250m on Node cncf-2 +Jun 6 15:44:09.639: INFO: Pod canal-xbh5x requesting resource cpu=250m on Node cncf-1 +Jun 6 15:44:09.639: INFO: Pod kube-dns-868d878686-p5pfd requesting resource cpu=260m on Node cncf-1 +Jun 6 15:44:09.639: INFO: Pod kube-dns-868d878686-vl7zl requesting resource cpu=260m on Node cncf-2 +Jun 6 15:44:09.640: INFO: Pod kube-dns-autoscaler-6bfccfcbd4-cqwj2 requesting resource cpu=20m on Node cncf-2 +Jun 6 15:44:09.640: INFO: Pod kube-proxy-7cmmv requesting resource cpu=100m on Node cncf-2 +Jun 6 15:44:09.640: INFO: Pod kube-proxy-v8kjv requesting resource cpu=100m on Node cncf-1 +Jun 6 15:44:09.640: INFO: Pod metrics-server-7c89fd4f7b-bnwzl requesting resource cpu=0m on Node cncf-2 +Jun 6 15:44:09.640: INFO: Pod wormhole-bsxhq requesting resource cpu=0m on Node cncf-2 +Jun 6 15:44:09.640: INFO: Pod wormhole-fr2gz requesting resource cpu=0m on Node cncf-1 +STEP: Starting Pods to consume most of the cluster CPU. +STEP: Creating another pod that requires unavailable amount of CPU. 
+STEP: Considering event: +Type = [Normal], Name = [filler-pod-ecc32167-8871-11e9-b3bf-0e7bbe1a64f6.15a5a670e0825813], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3085/filler-pod-ecc32167-8871-11e9-b3bf-0e7bbe1a64f6 to cncf-1] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-ecc32167-8871-11e9-b3bf-0e7bbe1a64f6.15a5a6711d838a63], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.1"] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-ecc32167-8871-11e9-b3bf-0e7bbe1a64f6.15a5a671463e9f76], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.1"] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-ecc32167-8871-11e9-b3bf-0e7bbe1a64f6.15a5a6714991bcf8], Reason = [Created], Message = [Created container filler-pod-ecc32167-8871-11e9-b3bf-0e7bbe1a64f6] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-ecc32167-8871-11e9-b3bf-0e7bbe1a64f6.15a5a67151ff77e1], Reason = [Started], Message = [Started container filler-pod-ecc32167-8871-11e9-b3bf-0e7bbe1a64f6] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-ecc70186-8871-11e9-b3bf-0e7bbe1a64f6.15a5a670e1221eb4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3085/filler-pod-ecc70186-8871-11e9-b3bf-0e7bbe1a64f6 to cncf-2] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-ecc70186-8871-11e9-b3bf-0e7bbe1a64f6.15a5a6711c7708bb], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.1"] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-ecc70186-8871-11e9-b3bf-0e7bbe1a64f6.15a5a6713e689bab], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.1"] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-ecc70186-8871-11e9-b3bf-0e7bbe1a64f6.15a5a67145ef14ff], Reason = [Created], Message = [Created container filler-pod-ecc70186-8871-11e9-b3bf-0e7bbe1a64f6] +STEP: Considering event: +Type = [Normal], Name = 
[filler-pod-ecc70186-8871-11e9-b3bf-0e7bbe1a64f6.15a5a6714f72d984], Reason = [Started], Message = [Started container filler-pod-ecc70186-8871-11e9-b3bf-0e7bbe1a64f6] +STEP: Considering event: +Type = [Warning], Name = [additional-pod.15a5a671d3a6abce], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] +STEP: removing the label node off the node cncf-1 +STEP: verifying the node doesn't have the label node +STEP: removing the label node off the node cncf-2 +STEP: verifying the node doesn't have the label node +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:44:14.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-3085" for this suite. +Jun 6 15:44:20.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:44:21.149: INFO: namespace sched-pred-3085 deletion completed in 6.274173625s +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 + +• [SLOW TEST:11.776 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 + validates resource limits of pods that are allowed to run [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart 
http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:44:21.151: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 +STEP: create the container to handle the HTTPGet hook request. +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the pod with lifecycle hook +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Jun 6 15:44:29.359: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jun 6 15:44:29.370: INFO: Pod pod-with-poststart-http-hook still exists +Jun 6 15:44:31.370: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jun 6 15:44:31.377: INFO: Pod pod-with-poststart-http-hook still exists +Jun 6 15:44:33.370: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jun 6 15:44:33.378: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:44:33.378: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-1908" for this suite. +Jun 6 15:44:55.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:44:55.642: INFO: namespace container-lifecycle-hook-1908 deletion completed in 22.253690835s + +• [SLOW TEST:34.491 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when create a pod with lifecycle hook + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[k8s.io] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:44:55.642: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] should *not* 
be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod liveness-exec in namespace container-probe-7091 +Jun 6 15:44:59.780: INFO: Started pod liveness-exec in namespace container-probe-7091 +STEP: checking the pod's current state and verifying that restartCount is present +Jun 6 15:44:59.786: INFO: Initial restart count of pod liveness-exec is 0 +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:49:00.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-7091" for this suite. +Jun 6 15:49:08.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:49:08.592: INFO: namespace container-probe-7091 deletion completed in 8.276177601s + +• [SLOW TEST:252.950 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 
+[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:49:08.593: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-map-9f0282bd-8872-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume configMaps +Jun 6 15:49:08.722: INFO: Waiting up to 5m0s for pod "pod-configmaps-9f04418f-8872-11e9-b3bf-0e7bbe1a64f6" in namespace "configmap-8114" to be "success or failure" +Jun 6 15:49:08.729: INFO: Pod "pod-configmaps-9f04418f-8872-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.185888ms +Jun 6 15:49:10.739: INFO: Pod "pod-configmaps-9f04418f-8872-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017361693s +Jun 6 15:49:12.748: INFO: Pod "pod-configmaps-9f04418f-8872-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026032258s +STEP: Saw pod success +Jun 6 15:49:12.748: INFO: Pod "pod-configmaps-9f04418f-8872-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 15:49:12.756: INFO: Trying to get logs from node cncf-1 pod pod-configmaps-9f04418f-8872-11e9-b3bf-0e7bbe1a64f6 container configmap-volume-test: +STEP: delete the pod +Jun 6 15:49:12.863: INFO: Waiting for pod pod-configmaps-9f04418f-8872-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 15:49:12.870: INFO: Pod pod-configmaps-9f04418f-8872-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:49:12.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8114" for this suite. +Jun 6 15:49:18.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:49:19.120: INFO: namespace configmap-8114 deletion completed in 6.242358809s + +• [SLOW TEST:10.526 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl api-versions + should check if v1 is in available api versions [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:49:19.121: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should check if v1 is in available api versions [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: validating api versions +Jun 6 15:49:19.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 api-versions' +Jun 6 15:49:19.368: INFO: stderr: "" +Jun 6 15:49:19.368: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd.projectcalico.org/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nmetrics.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 
15:49:19.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1606" for this suite. +Jun 6 15:49:25.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:49:25.631: INFO: namespace kubectl-1606 deletion completed in 6.252441126s + +• [SLOW TEST:6.510 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl api-versions + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should check if v1 is in available api versions [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] DNS + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:49:25.632: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n 
"$$(getent hosts dns-querier-1.dns-test-service.dns-5851.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5851.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5851.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5851.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5851.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5851.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe /etc/hosts +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Jun 6 15:49:41.867: INFO: DNS probes using dns-5851/dns-test-a92ba427-8872-11e9-b3bf-0e7bbe1a64f6 succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:49:41.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-5851" for this suite. 
+Jun 6 15:49:47.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:49:48.184: INFO: namespace dns-5851 deletion completed in 6.274254188s + +• [SLOW TEST:22.552 seconds] +[sig-network] DNS +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:49:48.185: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename aggregator +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:69 +[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Registering the sample API server. 
+Jun 6 15:49:48.879: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set +Jun 6 15:49:50.997: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-65db6755fc\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 6 15:49:53.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-65db6755fc\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 6 15:49:55.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-65db6755fc\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 6 15:49:57.005: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-65db6755fc\" is 
progressing."}}, CollisionCount:(*int32)(nil)} +Jun 6 15:49:59.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695432988, loc:(*time.Location)(0x8a140e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-65db6755fc\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 6 15:50:01.986: INFO: Waited 968.027149ms for the sample-apiserver to be ready to handle requests. +[AfterEach] [sig-api-machinery] Aggregator + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:60 +[AfterEach] [sig-api-machinery] Aggregator + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:50:02.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "aggregator-7156" for this suite. 
+Jun 6 15:50:08.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:50:08.839: INFO: namespace aggregator-7156 deletion completed in 6.341828013s + +• [SLOW TEST:20.654 seconds] +[sig-api-machinery] Aggregator +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:50:08.840: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating the pod +Jun 6 15:50:13.597: INFO: Successfully updated pod "annotationupdatec2ece3bc-8872-11e9-b3bf-0e7bbe1a64f6" 
+[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:50:15.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9409" for this suite. +Jun 6 15:50:37.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:50:37.897: INFO: namespace downward-api-9409 deletion completed in 22.259244691s + +• [SLOW TEST:29.058 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[k8s.io] Pods + should get a host IP [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:50:37.898: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should get a host IP [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating pod +Jun 6 15:50:42.037: INFO: Pod pod-hostip-d43da19b-8872-11e9-b3bf-0e7bbe1a64f6 has hostIP: 51.68.79.184 +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:50:42.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9008" for this suite. +Jun 6 15:51:04.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:51:04.307: INFO: namespace pods-9008 deletion completed in 22.261259341s + +• [SLOW TEST:26.409 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should get a host IP [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-apps] Deployment + deployment should support rollover [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:51:04.307: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] deployment should support rollover [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 6 15:51:04.430: INFO: Pod name rollover-pod: Found 0 pods out of 1 +Jun 6 15:51:09.438: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Jun 6 15:51:09.439: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Jun 6 15:51:11.446: INFO: Creating deployment "test-rollover-deployment" +Jun 6 15:51:11.498: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Jun 6 15:51:13.511: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Jun 6 15:51:13.525: INFO: Ensure that both replica sets have 1 created replica +Jun 6 15:51:13.540: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update +Jun 6 15:51:13.559: INFO: Updating deployment test-rollover-deployment +Jun 6 15:51:13.559: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller +Jun 6 15:51:15.572: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 +Jun 6 15:51:15.605: INFO: Make sure deployment "test-rollover-deployment" is complete +Jun 6 15:51:15.624: INFO: all replica sets need to contain the pod-template-hash label +Jun 6 15:51:15.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433073, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-766b4d6c9d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 6 15:51:17.639: INFO: all replica sets need to contain the pod-template-hash label +Jun 6 15:51:17.639: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433076, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-766b4d6c9d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 6 15:51:19.641: INFO: all replica sets need to contain the pod-template-hash label +Jun 6 15:51:19.641: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433076, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-766b4d6c9d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 6 15:51:21.638: INFO: all replica sets need to contain the pod-template-hash label +Jun 6 15:51:21.639: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433076, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-766b4d6c9d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 6 15:51:23.641: INFO: all replica sets need to contain the pod-template-hash label +Jun 6 15:51:23.642: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433076, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-766b4d6c9d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 6 15:51:25.639: INFO: all replica sets need to contain the pod-template-hash label +Jun 6 15:51:25.639: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433076, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433071, loc:(*time.Location)(0x8a140e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-766b4d6c9d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 6 15:51:27.640: INFO: +Jun 6 15:51:27.640: INFO: Ensure that both old replica sets have no replicas +[AfterEach] [sig-apps] Deployment + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Jun 6 15:51:27.656: INFO: Deployment "test-rollover-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-7871,SelfLink:/apis/apps/v1/namespaces/deployment-7871/deployments/test-rollover-deployment,UID:e82e69f0-8872-11e9-9995-4ad9032ea524,ResourceVersion:3959044786,Generation:2,CreationTimestamp:2019-06-06 15:51:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-06-06 15:51:11 +0000 UTC 2019-06-06 15:51:11 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-06-06 15:51:26 +0000 UTC 2019-06-06 15:51:11 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-766b4d6c9d" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} + +Jun 6 15:51:27.666: INFO: New ReplicaSet "test-rollover-deployment-766b4d6c9d" of Deployment "test-rollover-deployment": +&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-766b4d6c9d,GenerateName:,Namespace:deployment-7871,SelfLink:/apis/apps/v1/namespaces/deployment-7871/replicasets/test-rollover-deployment-766b4d6c9d,UID:e96fd803-8872-11e9-9995-4ad9032ea524,ResourceVersion:3959044776,Generation:2,CreationTimestamp:2019-06-06 15:51:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 766b4d6c9d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e82e69f0-8872-11e9-9995-4ad9032ea524 0xc0031d4f17 0xc0031d4f18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 766b4d6c9d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 766b4d6c9d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} +Jun 6 15:51:27.666: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Jun 6 15:51:27.666: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-7871,SelfLink:/apis/apps/v1/namespaces/deployment-7871/replicasets/test-rollover-controller,UID:e3fbc71a-8872-11e9-9995-4ad9032ea524,ResourceVersion:3959044785,Generation:2,CreationTimestamp:2019-06-06 15:51:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e82e69f0-8872-11e9-9995-4ad9032ea524 0xc0031d4d67 0xc0031d4d68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 6 15:51:27.666: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-6455657675,GenerateName:,Namespace:deployment-7871,SelfLink:/apis/apps/v1/namespaces/deployment-7871/replicasets/test-rollover-deployment-6455657675,UID:e8364461-8872-11e9-9995-4ad9032ea524,ResourceVersion:3959042322,Generation:2,CreationTimestamp:2019-06-06 15:51:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 6455657675,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e82e69f0-8872-11e9-9995-4ad9032ea524 0xc0031d4e37 0xc0031d4e38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6455657675,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6455657675,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 6 15:51:27.671: INFO: Pod "test-rollover-deployment-766b4d6c9d-blwj7" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-766b4d6c9d-blwj7,GenerateName:test-rollover-deployment-766b4d6c9d-,Namespace:deployment-7871,SelfLink:/api/v1/namespaces/deployment-7871/pods/test-rollover-deployment-766b4d6c9d-blwj7,UID:e97542ca-8872-11e9-9995-4ad9032ea524,ResourceVersion:3959042888,Generation:0,CreationTimestamp:2019-06-06 15:51:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 766b4d6c9d,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.0.95/32,},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-766b4d6c9d e96fd803-8872-11e9-9995-4ad9032ea524 0xc002d2a067 0xc002d2a068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cvgtn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cvgtn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-cvgtn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d2a0d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d2a0f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 15:51:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 15:51:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 15:51:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 15:51:13 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:10.2.0.95,StartTime:2019-06-06 15:51:13 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-06-06 15:51:15 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://0b487d4b1ff44426ac4135a7cde0d3372d8ff759931ddf7e6b7d2ebae345013d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] 
Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:51:27.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-7871" for this suite. +Jun 6 15:51:33.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:51:33.948: INFO: namespace deployment-7871 deletion completed in 6.270771358s + +• [SLOW TEST:29.641 seconds] +[sig-apps] Deployment +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + deployment should support rollover [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:51:33.949: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name secret-test-f5a698bd-8872-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume secrets +Jun 6 15:51:34.076: INFO: Waiting up to 5m0s for pod "pod-secrets-f5a8113f-8872-11e9-b3bf-0e7bbe1a64f6" in namespace "secrets-6507" to be "success or failure" +Jun 6 15:51:34.083: INFO: Pod "pod-secrets-f5a8113f-8872-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.000384ms +Jun 6 15:51:36.428: INFO: Pod "pod-secrets-f5a8113f-8872-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.351565348s +Jun 6 15:51:38.438: INFO: Pod "pod-secrets-f5a8113f-8872-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.361323361s +STEP: Saw pod success +Jun 6 15:51:38.438: INFO: Pod "pod-secrets-f5a8113f-8872-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 15:51:38.448: INFO: Trying to get logs from node cncf-1 pod pod-secrets-f5a8113f-8872-11e9-b3bf-0e7bbe1a64f6 container secret-volume-test: +STEP: delete the pod +Jun 6 15:51:38.490: INFO: Waiting for pod pod-secrets-f5a8113f-8872-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 15:51:38.495: INFO: Pod pod-secrets-f5a8113f-8872-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:51:38.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-6507" for this suite. 
+Jun 6 15:51:44.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:51:44.790: INFO: namespace secrets-6507 deletion completed in 6.286803645s + +• [SLOW TEST:10.841 seconds] +[sig-storage] Secrets +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:51:44.791: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a pod in the namespace +STEP: Waiting for the pod to have running status +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. 
+STEP: Recreating the namespace +STEP: Verifying there are no pods in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:52:11.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-4541" for this suite. +Jun 6 15:52:17.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:52:17.453: INFO: namespace namespaces-4541 deletion completed in 6.275380872s +STEP: Destroying namespace "nsdeletetest-8041" for this suite. +Jun 6 15:52:17.460: INFO: Namespace nsdeletetest-8041 was already deleted +STEP: Destroying namespace "nsdeletetest-5126" for this suite. +Jun 6 15:52:23.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:52:23.724: INFO: namespace nsdeletetest-5126 deletion completed in 6.264021621s + +• [SLOW TEST:38.933 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Garbage collector + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:52:23.725: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs +STEP: Gathering metrics +W0606 15:52:54.407426 15 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
+Jun 6 15:52:54.407: INFO: For apiserver_request_total: +For apiserver_request_latencies_summary: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:52:54.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-1149" for this suite. 
+Jun 6 15:53:00.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:53:00.672: INFO: namespace gc-1149 deletion completed in 6.258070632s + +• [SLOW TEST:36.948 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] version v1 + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:53:00.673: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy through a service and a pod [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: starting an echo server on multiple ports +STEP: creating replication controller proxy-service-mwttg in namespace proxy-2568 +I0606 15:53:00.814503 15 runners.go:184] Created replication controller with name: proxy-service-mwttg, namespace: proxy-2568, replica count: 1 +I0606 15:53:01.866605 15 runners.go:184] proxy-service-mwttg Pods: 1 out of 1 created, 0 
running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0606 15:53:02.866863 15 runners.go:184] proxy-service-mwttg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0606 15:53:03.867132 15 runners.go:184] proxy-service-mwttg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0606 15:53:04.867470 15 runners.go:184] proxy-service-mwttg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0606 15:53:05.867900 15 runners.go:184] proxy-service-mwttg Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jun 6 15:53:05.876: INFO: setup took 5.095942922s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts +Jun 6 15:53:05.899: INFO: (0) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 21.79317ms) +Jun 6 15:53:05.899: INFO: (0) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr/proxy/: test (200; 22.709939ms) +Jun 6 15:53:05.906: INFO: (0) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 28.456214ms) +Jun 6 15:53:05.911: INFO: (0) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 32.812749ms) +Jun 6 15:53:05.913: INFO: (0) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:1080/proxy/: test<... (200; 36.468752ms) +Jun 6 15:53:05.913: INFO: (0) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:1080/proxy/: ... 
(200; 36.456714ms) +Jun 6 15:53:05.917: INFO: (0) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname2/proxy/: bar (200; 39.88542ms) +Jun 6 15:53:05.917: INFO: (0) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname2/proxy/: bar (200; 39.308062ms) +Jun 6 15:53:05.918: INFO: (0) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname1/proxy/: foo (200; 40.023443ms) +Jun 6 15:53:05.919: INFO: (0) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname1/proxy/: foo (200; 40.772806ms) +Jun 6 15:53:05.921: INFO: (0) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 42.981894ms) +Jun 6 15:53:05.925: INFO: (0) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname1/proxy/: tls baz (200; 47.922624ms) +Jun 6 15:53:05.926: INFO: (0) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: test (200; 25.440089ms) +Jun 6 15:53:05.957: INFO: (1) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 27.130633ms) +Jun 6 15:53:05.957: INFO: (1) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 27.779704ms) +Jun 6 15:53:05.957: INFO: (1) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 27.424604ms) +Jun 6 15:53:05.957: INFO: (1) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: test<... (200; 27.05669ms) +Jun 6 15:53:05.957: INFO: (1) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:1080/proxy/: ... 
(200; 27.01953ms) +Jun 6 15:53:05.958: INFO: (1) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 27.818053ms) +Jun 6 15:53:05.958: INFO: (1) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:462/proxy/: tls qux (200; 27.745287ms) +Jun 6 15:53:05.959: INFO: (1) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname2/proxy/: bar (200; 28.592029ms) +Jun 6 15:53:05.959: INFO: (1) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname2/proxy/: tls qux (200; 28.996308ms) +Jun 6 15:53:05.959: INFO: (1) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname1/proxy/: foo (200; 29.166449ms) +Jun 6 15:53:05.960: INFO: (1) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname1/proxy/: tls baz (200; 29.836563ms) +Jun 6 15:53:05.960: INFO: (1) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname1/proxy/: foo (200; 30.913507ms) +Jun 6 15:53:05.972: INFO: (2) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 10.487268ms) +Jun 6 15:53:05.972: INFO: (2) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 10.32224ms) +Jun 6 15:53:05.972: INFO: (2) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 11.250465ms) +Jun 6 15:53:05.972: INFO: (2) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:1080/proxy/: test<... 
(200; 9.182643ms) +Jun 6 15:53:05.972: INFO: (2) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 10.278202ms) +Jun 6 15:53:05.972: INFO: (2) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr/proxy/: test (200; 9.429166ms) +Jun 6 15:53:05.976: INFO: (2) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 13.520756ms) +Jun 6 15:53:05.976: INFO: (2) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname1/proxy/: foo (200; 14.380201ms) +Jun 6 15:53:05.977: INFO: (2) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:462/proxy/: tls qux (200; 13.542023ms) +Jun 6 15:53:05.977: INFO: (2) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname2/proxy/: bar (200; 15.940138ms) +Jun 6 15:53:05.977: INFO: (2) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname2/proxy/: tls qux (200; 14.83404ms) +Jun 6 15:53:05.977: INFO: (2) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: ... 
(200; 14.983863ms) +Jun 6 15:53:05.990: INFO: (3) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 12.016737ms) +Jun 6 15:53:05.990: INFO: (3) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 11.721268ms) +Jun 6 15:53:05.991: INFO: (3) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 11.903687ms) +Jun 6 15:53:05.991: INFO: (3) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname1/proxy/: foo (200; 12.910888ms) +Jun 6 15:53:05.992: INFO: (3) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname1/proxy/: tls baz (200; 13.480206ms) +Jun 6 15:53:05.996: INFO: (3) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname2/proxy/: bar (200; 17.658893ms) +Jun 6 15:53:05.996: INFO: (3) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 17.57182ms) +Jun 6 15:53:05.996: INFO: (3) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname2/proxy/: bar (200; 17.686942ms) +Jun 6 15:53:05.996: INFO: (3) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr/proxy/: test (200; 17.253749ms) +Jun 6 15:53:05.996: INFO: (3) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:1080/proxy/: test<... (200; 18.098564ms) +Jun 6 15:53:05.996: INFO: (3) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname2/proxy/: tls qux (200; 17.39989ms) +Jun 6 15:53:05.996: INFO: (3) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: ... (200; 18.715215ms) +Jun 6 15:53:05.997: INFO: (3) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname1/proxy/: foo (200; 18.24192ms) +Jun 6 15:53:06.007: INFO: (4) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:1080/proxy/: test<... 
(200; 9.8527ms) +Jun 6 15:53:06.007: INFO: (4) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: test (200; 13.156508ms) +Jun 6 15:53:06.011: INFO: (4) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 13.006436ms) +Jun 6 15:53:06.011: INFO: (4) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:1080/proxy/: ... (200; 13.166444ms) +Jun 6 15:53:06.012: INFO: (4) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 14.724236ms) +Jun 6 15:53:06.013: INFO: (4) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname1/proxy/: tls baz (200; 15.217145ms) +Jun 6 15:53:06.013: INFO: (4) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 14.641781ms) +Jun 6 15:53:06.013: INFO: (4) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 14.710908ms) +Jun 6 15:53:06.013: INFO: (4) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname2/proxy/: bar (200; 14.913696ms) +Jun 6 15:53:06.014: INFO: (4) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname2/proxy/: bar (200; 16.822045ms) +Jun 6 15:53:06.014: INFO: (4) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname1/proxy/: foo (200; 17.163453ms) +Jun 6 15:53:06.017: INFO: (4) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname1/proxy/: foo (200; 19.244123ms) +Jun 6 15:53:06.024: INFO: (5) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 6.862543ms) +Jun 6 15:53:06.025: INFO: (5) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr/proxy/: test (200; 7.054912ms) +Jun 6 15:53:06.025: INFO: (5) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 7.361386ms) +Jun 6 15:53:06.025: INFO: (5) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:1080/proxy/: ... 
(200; 7.671713ms) +Jun 6 15:53:06.025: INFO: (5) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 8.232087ms) +Jun 6 15:53:06.025: INFO: (5) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 7.655093ms) +Jun 6 15:53:06.025: INFO: (5) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:1080/proxy/: test<... (200; 7.814272ms) +Jun 6 15:53:06.026: INFO: (5) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: ... (200; 10.904487ms) +Jun 6 15:53:06.040: INFO: (6) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:1080/proxy/: test<... (200; 11.379801ms) +Jun 6 15:53:06.040: INFO: (6) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 11.946109ms) +Jun 6 15:53:06.040: INFO: (6) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr/proxy/: test (200; 11.486033ms) +Jun 6 15:53:06.040: INFO: (6) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 11.92019ms) +Jun 6 15:53:06.040: INFO: (6) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname2/proxy/: bar (200; 12.311713ms) +Jun 6 15:53:06.043: INFO: (6) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: test (200; 8.805193ms) +Jun 6 15:53:06.057: INFO: (7) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 8.711413ms) +Jun 6 15:53:06.057: INFO: (7) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: test<... 
(200; 9.486363ms) +Jun 6 15:53:06.058: INFO: (7) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 9.716452ms) +Jun 6 15:53:06.059: INFO: (7) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname1/proxy/: tls baz (200; 9.826365ms) +Jun 6 15:53:06.060: INFO: (7) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname2/proxy/: bar (200; 12.16846ms) +Jun 6 15:53:06.060: INFO: (7) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 12.209989ms) +Jun 6 15:53:06.062: INFO: (7) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname2/proxy/: bar (200; 13.364801ms) +Jun 6 15:53:06.062: INFO: (7) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname1/proxy/: foo (200; 14.078879ms) +Jun 6 15:53:06.062: INFO: (7) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname2/proxy/: tls qux (200; 13.781922ms) +Jun 6 15:53:06.062: INFO: (7) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:1080/proxy/: ... (200; 13.670635ms) +Jun 6 15:53:06.062: INFO: (7) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:462/proxy/: tls qux (200; 13.561417ms) +Jun 6 15:53:06.062: INFO: (7) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname1/proxy/: foo (200; 13.928929ms) +Jun 6 15:53:06.071: INFO: (8) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:1080/proxy/: test<... (200; 8.355983ms) +Jun 6 15:53:06.071: INFO: (8) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:462/proxy/: tls qux (200; 8.446565ms) +Jun 6 15:53:06.071: INFO: (8) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 8.650125ms) +Jun 6 15:53:06.071: INFO: (8) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:1080/proxy/: ... 
(200; 8.736482ms) +Jun 6 15:53:06.072: INFO: (8) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 8.863121ms) +Jun 6 15:53:06.072: INFO: (8) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname1/proxy/: tls baz (200; 9.300992ms) +Jun 6 15:53:06.072: INFO: (8) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 9.112268ms) +Jun 6 15:53:06.072: INFO: (8) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 9.280918ms) +Jun 6 15:53:06.072: INFO: (8) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname1/proxy/: foo (200; 9.828688ms) +Jun 6 15:53:06.072: INFO: (8) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr/proxy/: test (200; 8.801366ms) +Jun 6 15:53:06.072: INFO: (8) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: test<... (200; 12.224918ms) +Jun 6 15:53:06.090: INFO: (9) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: ... 
(200; 13.673468ms) +Jun 6 15:53:06.090: INFO: (9) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 14.034513ms) +Jun 6 15:53:06.090: INFO: (9) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 14.180339ms) +Jun 6 15:53:06.090: INFO: (9) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr/proxy/: test (200; 13.831914ms) +Jun 6 15:53:06.090: INFO: (9) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 14.387205ms) +Jun 6 15:53:06.090: INFO: (9) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 14.046809ms) +Jun 6 15:53:06.090: INFO: (9) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname2/proxy/: bar (200; 14.753523ms) +Jun 6 15:53:06.090: INFO: (9) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname1/proxy/: foo (200; 14.829111ms) +Jun 6 15:53:06.090: INFO: (9) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname1/proxy/: foo (200; 14.487247ms) +Jun 6 15:53:06.098: INFO: (9) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname2/proxy/: bar (200; 22.498779ms) +Jun 6 15:53:06.105: INFO: (10) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 7.227304ms) +Jun 6 15:53:06.105: INFO: (10) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:462/proxy/: tls qux (200; 7.692788ms) +Jun 6 15:53:06.106: INFO: (10) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 8.244306ms) +Jun 6 15:53:06.106: INFO: (10) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 8.174812ms) +Jun 6 15:53:06.106: INFO: (10) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr/proxy/: test (200; 8.053506ms) +Jun 6 15:53:06.108: INFO: (10) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 9.524662ms) +Jun 
6 15:53:06.108: INFO: (10) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname2/proxy/: bar (200; 9.865111ms) +Jun 6 15:53:06.108: INFO: (10) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:1080/proxy/: ... (200; 9.253208ms) +Jun 6 15:53:06.108: INFO: (10) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 9.174529ms) +Jun 6 15:53:06.108: INFO: (10) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname1/proxy/: tls baz (200; 9.847861ms) +Jun 6 15:53:06.108: INFO: (10) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: test<... (200; 9.392385ms) +Jun 6 15:53:06.109: INFO: (10) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname2/proxy/: tls qux (200; 10.819808ms) +Jun 6 15:53:06.110: INFO: (10) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname1/proxy/: foo (200; 12.0534ms) +Jun 6 15:53:06.111: INFO: (10) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname1/proxy/: foo (200; 12.723518ms) +Jun 6 15:53:06.111: INFO: (10) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname2/proxy/: bar (200; 12.703487ms) +Jun 6 15:53:06.118: INFO: (11) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:1080/proxy/: test<... (200; 7.007827ms) +Jun 6 15:53:06.118: INFO: (11) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:1080/proxy/: ... 
(200; 7.189387ms) +Jun 6 15:53:06.119: INFO: (11) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 7.60799ms) +Jun 6 15:53:06.120: INFO: (11) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 8.626283ms) +Jun 6 15:53:06.120: INFO: (11) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 8.847524ms) +Jun 6 15:53:06.120: INFO: (11) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:462/proxy/: tls qux (200; 9.00353ms) +Jun 6 15:53:06.120: INFO: (11) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 8.816783ms) +Jun 6 15:53:06.121: INFO: (11) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 9.746438ms) +Jun 6 15:53:06.124: INFO: (11) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname1/proxy/: foo (200; 12.619712ms) +Jun 6 15:53:06.124: INFO: (11) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: test (200; 13.424988ms) +Jun 6 15:53:06.124: INFO: (11) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname1/proxy/: foo (200; 13.101314ms) +Jun 6 15:53:06.125: INFO: (11) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname2/proxy/: bar (200; 13.531036ms) +Jun 6 15:53:06.125: INFO: (11) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname2/proxy/: tls qux (200; 13.361715ms) +Jun 6 15:53:06.125: INFO: (11) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname2/proxy/: bar (200; 13.851897ms) +Jun 6 15:53:06.128: INFO: (11) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname1/proxy/: tls baz (200; 16.368857ms) +Jun 6 15:53:06.141: INFO: (12) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 13.258222ms) +Jun 6 15:53:06.141: INFO: (12) 
/api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname2/proxy/: bar (200; 13.556592ms) +Jun 6 15:53:06.141: INFO: (12) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:1080/proxy/: test<... (200; 13.05184ms) +Jun 6 15:53:06.142: INFO: (12) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:1080/proxy/: ... (200; 13.095053ms) +Jun 6 15:53:06.142: INFO: (12) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: test (200; 13.560225ms) +Jun 6 15:53:06.142: INFO: (12) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 13.646349ms) +Jun 6 15:53:06.142: INFO: (12) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 13.933753ms) +Jun 6 15:53:06.142: INFO: (12) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 14.250027ms) +Jun 6 15:53:06.143: INFO: (12) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname1/proxy/: foo (200; 14.394133ms) +Jun 6 15:53:06.143: INFO: (12) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname1/proxy/: tls baz (200; 14.764248ms) +Jun 6 15:53:06.144: INFO: (12) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname2/proxy/: bar (200; 15.563234ms) +Jun 6 15:53:06.144: INFO: (12) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 15.695206ms) +Jun 6 15:53:06.144: INFO: (12) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname2/proxy/: tls qux (200; 15.419047ms) +Jun 6 15:53:06.144: INFO: (12) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:462/proxy/: tls qux (200; 16.204196ms) +Jun 6 15:53:06.145: INFO: (12) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname1/proxy/: foo (200; 16.58656ms) +Jun 6 15:53:06.165: INFO: (13) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 
19.665245ms) +Jun 6 15:53:06.165: INFO: (13) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname1/proxy/: tls baz (200; 19.502531ms) +Jun 6 15:53:06.165: INFO: (13) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname2/proxy/: tls qux (200; 19.82615ms) +Jun 6 15:53:06.165: INFO: (13) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:462/proxy/: tls qux (200; 19.803682ms) +Jun 6 15:53:06.167: INFO: (13) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 21.891281ms) +Jun 6 15:53:06.168: INFO: (13) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname1/proxy/: foo (200; 22.843444ms) +Jun 6 15:53:06.168: INFO: (13) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 22.851734ms) +Jun 6 15:53:06.168: INFO: (13) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname2/proxy/: bar (200; 22.940962ms) +Jun 6 15:53:06.168: INFO: (13) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:1080/proxy/: ... (200; 22.981924ms) +Jun 6 15:53:06.168: INFO: (13) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: test (200; 24.48765ms) +Jun 6 15:53:06.169: INFO: (13) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 24.498412ms) +Jun 6 15:53:06.182: INFO: (13) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname1/proxy/: foo (200; 36.646078ms) +Jun 6 15:53:06.182: INFO: (13) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname2/proxy/: bar (200; 36.5271ms) +Jun 6 15:53:06.182: INFO: (13) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 36.753733ms) +Jun 6 15:53:06.182: INFO: (13) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:1080/proxy/: test<... 
(200; 36.390472ms) +Jun 6 15:53:06.197: INFO: (14) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:1080/proxy/: ... (200; 15.408666ms) +Jun 6 15:53:06.197: INFO: (14) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:1080/proxy/: test<... (200; 15.580974ms) +Jun 6 15:53:06.199: INFO: (14) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname2/proxy/: bar (200; 16.777052ms) +Jun 6 15:53:06.199: INFO: (14) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname2/proxy/: tls qux (200; 16.696037ms) +Jun 6 15:53:06.199: INFO: (14) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:462/proxy/: tls qux (200; 16.677362ms) +Jun 6 15:53:06.199: INFO: (14) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: test (200; 17.234666ms) +Jun 6 15:53:06.199: INFO: (14) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 16.900519ms) +Jun 6 15:53:06.199: INFO: (14) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 17.281677ms) +Jun 6 15:53:06.199: INFO: (14) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname1/proxy/: tls baz (200; 16.841468ms) +Jun 6 15:53:06.204: INFO: (14) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 22.518208ms) +Jun 6 15:53:06.204: INFO: (14) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname1/proxy/: foo (200; 22.493574ms) +Jun 6 15:53:06.205: INFO: (14) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 22.855ms) +Jun 6 15:53:06.205: INFO: (14) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname2/proxy/: bar (200; 22.802269ms) +Jun 6 15:53:06.224: INFO: (15) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:1080/proxy/: test<... 
(200; 18.517505ms) +Jun 6 15:53:06.224: INFO: (15) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr/proxy/: test (200; 18.943458ms) +Jun 6 15:53:06.224: INFO: (15) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 19.405768ms) +Jun 6 15:53:06.224: INFO: (15) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: ... (200; 22.110408ms) +Jun 6 15:53:06.228: INFO: (15) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 21.737456ms) +Jun 6 15:53:06.228: INFO: (15) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 22.096903ms) +Jun 6 15:53:06.228: INFO: (15) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname2/proxy/: tls qux (200; 22.768067ms) +Jun 6 15:53:06.228: INFO: (15) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname2/proxy/: bar (200; 22.040999ms) +Jun 6 15:53:06.228: INFO: (15) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 21.655109ms) +Jun 6 15:53:06.231: INFO: (15) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname1/proxy/: foo (200; 24.102312ms) +Jun 6 15:53:06.231: INFO: (15) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname2/proxy/: bar (200; 24.097849ms) +Jun 6 15:53:06.246: INFO: (16) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 15.091876ms) +Jun 6 15:53:06.246: INFO: (16) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:1080/proxy/: test<... 
(200; 15.205253ms) +Jun 6 15:53:06.246: INFO: (16) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 15.211277ms) +Jun 6 15:53:06.246: INFO: (16) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: test (200; 15.142296ms) +Jun 6 15:53:06.247: INFO: (16) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname1/proxy/: tls baz (200; 15.615087ms) +Jun 6 15:53:06.247: INFO: (16) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 15.914104ms) +Jun 6 15:53:06.247: INFO: (16) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname2/proxy/: tls qux (200; 16.137266ms) +Jun 6 15:53:06.247: INFO: (16) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 16.061396ms) +Jun 6 15:53:06.247: INFO: (16) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname1/proxy/: foo (200; 16.238085ms) +Jun 6 15:53:06.247: INFO: (16) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname2/proxy/: bar (200; 16.537295ms) +Jun 6 15:53:06.247: INFO: (16) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname2/proxy/: bar (200; 16.087631ms) +Jun 6 15:53:06.247: INFO: (16) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 16.601374ms) +Jun 6 15:53:06.247: INFO: (16) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:462/proxy/: tls qux (200; 16.247121ms) +Jun 6 15:53:06.247: INFO: (16) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname1/proxy/: foo (200; 16.493275ms) +Jun 6 15:53:06.248: INFO: (16) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:1080/proxy/: ... 
(200; 16.310002ms) +Jun 6 15:53:06.258: INFO: (17) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 10.328027ms) +Jun 6 15:53:06.260: INFO: (17) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 11.9693ms) +Jun 6 15:53:06.260: INFO: (17) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname2/proxy/: bar (200; 12.114014ms) +Jun 6 15:53:06.260: INFO: (17) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname1/proxy/: tls baz (200; 12.108182ms) +Jun 6 15:53:06.261: INFO: (17) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 12.792774ms) +Jun 6 15:53:06.261: INFO: (17) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:1080/proxy/: test<... (200; 13.533073ms) +Jun 6 15:53:06.262: INFO: (17) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr/proxy/: test (200; 13.985693ms) +Jun 6 15:53:06.262: INFO: (17) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 13.805496ms) +Jun 6 15:53:06.262: INFO: (17) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname2/proxy/: tls qux (200; 13.444617ms) +Jun 6 15:53:06.262: INFO: (17) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:1080/proxy/: ... (200; 13.903916ms) +Jun 6 15:53:06.262: INFO: (17) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:462/proxy/: tls qux (200; 13.767528ms) +Jun 6 15:53:06.262: INFO: (17) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname1/proxy/: foo (200; 14.038317ms) +Jun 6 15:53:06.262: INFO: (17) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: test<... 
(200; 14.772882ms) +Jun 6 15:53:06.278: INFO: (18) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname2/proxy/: tls qux (200; 14.893907ms) +Jun 6 15:53:06.278: INFO: (18) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:1080/proxy/: ... (200; 14.669884ms) +Jun 6 15:53:06.278: INFO: (18) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 14.533525ms) +Jun 6 15:53:06.278: INFO: (18) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:160/proxy/: foo (200; 15.088628ms) +Jun 6 15:53:06.278: INFO: (18) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 15.041947ms) +Jun 6 15:53:06.278: INFO: (18) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr/proxy/: test (200; 14.972972ms) +Jun 6 15:53:06.278: INFO: (18) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname1/proxy/: tls baz (200; 15.091455ms) +Jun 6 15:53:06.278: INFO: (18) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname1/proxy/: foo (200; 15.303385ms) +Jun 6 15:53:06.278: INFO: (18) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:443/proxy/: test<... (200; 9.293962ms) +Jun 6 15:53:06.290: INFO: (19) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:1080/proxy/: ... 
(200; 9.247819ms) +Jun 6 15:53:06.290: INFO: (19) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:462/proxy/: tls qux (200; 9.385251ms) +Jun 6 15:53:06.290: INFO: (19) /api/v1/namespaces/proxy-2568/pods/proxy-service-mwttg-qtvnr/proxy/: test (200; 9.791691ms) +Jun 6 15:53:06.290: INFO: (19) /api/v1/namespaces/proxy-2568/pods/https:proxy-service-mwttg-qtvnr:460/proxy/: tls baz (200; 9.98494ms) +Jun 6 15:53:06.290: INFO: (19) /api/v1/namespaces/proxy-2568/pods/http:proxy-service-mwttg-qtvnr:162/proxy/: bar (200; 9.279158ms) +Jun 6 15:53:06.292: INFO: (19) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname2/proxy/: tls qux (200; 11.476179ms) +Jun 6 15:53:06.292: INFO: (19) /api/v1/namespaces/proxy-2568/services/https:proxy-service-mwttg:tlsportname1/proxy/: tls baz (200; 12.046601ms) +Jun 6 15:53:06.293: INFO: (19) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname1/proxy/: foo (200; 12.424534ms) +Jun 6 15:53:06.293: INFO: (19) /api/v1/namespaces/proxy-2568/services/proxy-service-mwttg:portname2/proxy/: bar (200; 11.880569ms) +Jun 6 15:53:06.293: INFO: (19) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname2/proxy/: bar (200; 12.580716ms) +Jun 6 15:53:06.293: INFO: (19) /api/v1/namespaces/proxy-2568/services/http:proxy-service-mwttg:portname1/proxy/: foo (200; 12.651274ms) +STEP: deleting ReplicationController proxy-service-mwttg in namespace proxy-2568, will wait for the garbage collector to delete the pods +Jun 6 15:53:06.365: INFO: Deleting ReplicationController proxy-service-mwttg took: 17.688258ms +Jun 6 15:53:06.466: INFO: Terminating ReplicationController proxy-service-mwttg pods took: 100.556624ms +[AfterEach] version v1 + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:53:08.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace 
"proxy-2568" for this suite. +Jun 6 15:53:15.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:53:15.241: INFO: namespace proxy-2568 deletion completed in 6.264790239s + +• [SLOW TEST:14.568 seconds] +[sig-network] Proxy +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + version v1 + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 + should proxy through a service and a pod [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:53:15.242: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should contain environment variables for services [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 6 15:53:19.417: INFO: Waiting up to 5m0s for pod "client-envvars-34725675-8873-11e9-b3bf-0e7bbe1a64f6" in namespace "pods-707" to be "success or failure" +Jun 6 15:53:19.423: INFO: Pod "client-envvars-34725675-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.258379ms +Jun 6 15:53:22.313: INFO: Pod "client-envvars-34725675-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.896092201s +Jun 6 15:53:24.323: INFO: Pod "client-envvars-34725675-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.906645406s +STEP: Saw pod success +Jun 6 15:53:24.324: INFO: Pod "client-envvars-34725675-8873-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 15:53:24.330: INFO: Trying to get logs from node cncf-1 pod client-envvars-34725675-8873-11e9-b3bf-0e7bbe1a64f6 container env3cont: +STEP: delete the pod +Jun 6 15:53:24.373: INFO: Waiting for pod client-envvars-34725675-8873-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 15:53:24.378: INFO: Pod client-envvars-34725675-8873-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:53:24.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-707" for this suite. 
+Jun 6 15:54:12.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:54:12.629: INFO: namespace pods-707 deletion completed in 48.242449677s + +• [SLOW TEST:57.387 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:54:12.629: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir volume type on tmpfs +Jun 6 15:54:12.740: INFO: Waiting up to 5m0s for pod "pod-543af138-8873-11e9-b3bf-0e7bbe1a64f6" in namespace "emptydir-8784" to be "success or failure" +Jun 6 15:54:12.744: INFO: Pod "pod-543af138-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.923486ms +Jun 6 15:54:14.754: INFO: Pod "pod-543af138-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013798468s +Jun 6 15:54:16.764: INFO: Pod "pod-543af138-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023257906s +STEP: Saw pod success +Jun 6 15:54:16.764: INFO: Pod "pod-543af138-8873-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 15:54:16.773: INFO: Trying to get logs from node cncf-2 pod pod-543af138-8873-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 15:54:16.815: INFO: Waiting for pod pod-543af138-8873-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 15:54:16.820: INFO: Pod pod-543af138-8873-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:54:16.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8784" for this suite. 
+Jun 6 15:54:22.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:54:23.104: INFO: namespace emptydir-8784 deletion completed in 6.276324798s + +• [SLOW TEST:10.474 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:54:23.104: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name projected-configmap-test-volume-5a7b9a68-8873-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume configMaps +Jun 6 15:54:23.266: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5a7f372b-8873-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-3511" to be 
"success or failure" +Jun 6 15:54:23.273: INFO: Pod "pod-projected-configmaps-5a7f372b-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.398564ms +Jun 6 15:54:25.282: INFO: Pod "pod-projected-configmaps-5a7f372b-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015119629s +Jun 6 15:54:27.292: INFO: Pod "pod-projected-configmaps-5a7f372b-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025113271s +STEP: Saw pod success +Jun 6 15:54:27.292: INFO: Pod "pod-projected-configmaps-5a7f372b-8873-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 15:54:27.300: INFO: Trying to get logs from node cncf-1 pod pod-projected-configmaps-5a7f372b-8873-11e9-b3bf-0e7bbe1a64f6 container projected-configmap-volume-test: +STEP: delete the pod +Jun 6 15:54:27.354: INFO: Waiting for pod pod-projected-configmaps-5a7f372b-8873-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 15:54:27.365: INFO: Pod pod-projected-configmaps-5a7f372b-8873-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:54:27.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3511" for this suite. 
+Jun 6 15:54:33.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:54:33.673: INFO: namespace projected-3511 deletion completed in 6.298431024s + +• [SLOW TEST:10.568 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Networking + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:54:33.674: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Performing setup for networking test in namespace pod-network-test-9195 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Jun 6 15:54:33.784: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +STEP: Creating 
test pods +Jun 6 15:54:55.964: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.2.0.102:8080/dial?request=hostName&protocol=http&host=10.2.1.45&port=8080&tries=1'] Namespace:pod-network-test-9195 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 15:54:55.964: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 15:54:56.252: INFO: Waiting for endpoints: map[] +Jun 6 15:54:56.259: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.2.0.102:8080/dial?request=hostName&protocol=http&host=10.2.0.101&port=8080&tries=1'] Namespace:pod-network-test-9195 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 15:54:56.260: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 15:54:56.530: INFO: Waiting for endpoints: map[] +[AfterEach] [sig-network] Networking + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:54:56.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-9195" for this suite. 
+Jun 6 15:55:18.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:55:18.807: INFO: namespace pod-network-test-9195 deletion completed in 22.268130703s + +• [SLOW TEST:45.133 seconds] +[sig-network] Networking +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 + Granular Checks: Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 + should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSS +------------------------------ +[k8s.io] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:55:18.807: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:56:18.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-1967" for this suite. +Jun 6 15:56:40.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:56:41.212: INFO: namespace container-probe-1967 deletion completed in 22.271967961s + +• [SLOW TEST:82.405 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:56:41.213: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be 
provisioned in namespace +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name secret-test-acca4fab-8873-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume secrets +Jun 6 15:56:41.434: INFO: Waiting up to 5m0s for pod "pod-secrets-acdb547e-8873-11e9-b3bf-0e7bbe1a64f6" in namespace "secrets-5070" to be "success or failure" +Jun 6 15:56:41.441: INFO: Pod "pod-secrets-acdb547e-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.905449ms +Jun 6 15:56:43.451: INFO: Pod "pod-secrets-acdb547e-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016376893s +Jun 6 15:56:45.463: INFO: Pod "pod-secrets-acdb547e-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02899471s +STEP: Saw pod success +Jun 6 15:56:45.464: INFO: Pod "pod-secrets-acdb547e-8873-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 15:56:45.474: INFO: Trying to get logs from node cncf-1 pod pod-secrets-acdb547e-8873-11e9-b3bf-0e7bbe1a64f6 container secret-volume-test: +STEP: delete the pod +Jun 6 15:56:45.511: INFO: Waiting for pod pod-secrets-acdb547e-8873-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 15:56:45.516: INFO: Pod pod-secrets-acdb547e-8873-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:56:45.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5070" for this suite. 
+Jun 6 15:56:51.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:56:51.748: INFO: namespace secrets-5070 deletion completed in 6.224911832s +STEP: Destroying namespace "secret-namespace-3437" for this suite. +Jun 6 15:56:57.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:56:58.016: INFO: namespace secret-namespace-3437 deletion completed in 6.268368879s + +• [SLOW TEST:16.803 seconds] +[sig-storage] Secrets +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:56:58.017: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating projection with secret that has name projected-secret-test-b6cfcb9c-8873-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume secrets +Jun 6 15:56:58.149: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b6d1dbe0-8873-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-7001" to be "success or failure" +Jun 6 15:56:58.155: INFO: Pod "pod-projected-secrets-b6d1dbe0-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047569ms +Jun 6 15:57:00.162: INFO: Pod "pod-projected-secrets-b6d1dbe0-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012978734s +Jun 6 15:57:02.170: INFO: Pod "pod-projected-secrets-b6d1dbe0-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020347557s +STEP: Saw pod success +Jun 6 15:57:02.170: INFO: Pod "pod-projected-secrets-b6d1dbe0-8873-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 15:57:02.175: INFO: Trying to get logs from node cncf-2 pod pod-projected-secrets-b6d1dbe0-8873-11e9-b3bf-0e7bbe1a64f6 container projected-secret-volume-test: +STEP: delete the pod +Jun 6 15:57:02.209: INFO: Waiting for pod pod-projected-secrets-b6d1dbe0-8873-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 15:57:02.214: INFO: Pod pod-projected-secrets-b6d1dbe0-8873-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:57:02.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7001" for this suite. 
+Jun 6 15:57:08.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:57:08.473: INFO: namespace projected-7001 deletion completed in 6.251868096s + +• [SLOW TEST:10.456 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Proxy server + should support --unix-socket=/path [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:57:08.474: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should support --unix-socket=/path [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Starting the proxy +Jun 6 15:57:08.559: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl 
--kubeconfig=/tmp/kubeconfig-489975799 proxy --unix-socket=/tmp/kubectl-proxy-unix623350206/test' +STEP: retrieving proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:57:08.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8341" for this suite. +Jun 6 15:57:14.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:57:14.901: INFO: namespace kubectl-8341 deletion completed in 6.252729264s + +• [SLOW TEST:6.428 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Proxy server + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should support --unix-socket=/path [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:57:14.902: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename replicaset 
+STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Given a Pod with a 'name' label pod-adoption-release is created +STEP: When a replicaset with a matching selector is created +STEP: Then the orphan pod is adopted +STEP: When the matched label of one of its pods change +Jun 6 15:57:20.113: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicaSet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:57:21.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-26" for this suite. +Jun 6 15:57:45.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:57:45.703: INFO: namespace replicaset-26 deletion completed in 24.554572791s + +• [SLOW TEST:30.801 seconds] +[sig-apps] ReplicaSet +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 
+[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:57:45.704: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name projected-configmap-test-volume-d33d2a02-8873-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume configMaps +Jun 6 15:57:45.846: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d33f4c8f-8873-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-3949" to be "success or failure" +Jun 6 15:57:45.853: INFO: Pod "pod-projected-configmaps-d33f4c8f-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.855604ms +Jun 6 15:57:47.861: INFO: Pod "pod-projected-configmaps-d33f4c8f-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015096431s +Jun 6 15:57:49.936: INFO: Pod "pod-projected-configmaps-d33f4c8f-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.090495805s +STEP: Saw pod success +Jun 6 15:57:49.936: INFO: Pod "pod-projected-configmaps-d33f4c8f-8873-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 15:57:49.946: INFO: Trying to get logs from node cncf-1 pod pod-projected-configmaps-d33f4c8f-8873-11e9-b3bf-0e7bbe1a64f6 container projected-configmap-volume-test: +STEP: delete the pod +Jun 6 15:57:49.980: INFO: Waiting for pod pod-projected-configmaps-d33f4c8f-8873-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 15:57:49.986: INFO: Pod pod-projected-configmaps-d33f4c8f-8873-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:57:49.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3949" for this suite. +Jun 6 15:57:56.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:57:56.246: INFO: namespace projected-3949 deletion completed in 6.252548121s + +• [SLOW TEST:10.542 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Deployment + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:57:56.247: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 6 15:57:56.336: INFO: Creating deployment "test-recreate-deployment" +Jun 6 15:57:56.348: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Jun 6 15:57:56.360: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created +Jun 6 15:57:58.627: INFO: Waiting deployment "test-recreate-deployment" to complete +Jun 6 15:57:58.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433476, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433476, loc:(*time.Location)(0x8a140e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433476, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695433476, 
loc:(*time.Location)(0x8a140e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-7d57d5ff7c\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 6 15:58:00.640: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Jun 6 15:58:00.657: INFO: Updating deployment test-recreate-deployment +Jun 6 15:58:00.658: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Jun 6 15:58:00.740: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-2369,SelfLink:/apis/apps/v1/namespaces/deployment-2369/deployments/test-recreate-deployment,UID:d983241b-8873-11e9-9995-4ad9032ea524,ResourceVersion:3959115555,Generation:2,CreationTimestamp:2019-06-06 15:57:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx 
docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-06-06 15:58:00 +0000 UTC 2019-06-06 15:58:00 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-06-06 15:58:00 +0000 UTC 2019-06-06 15:57:56 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-c9cbd8684" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} + +Jun 6 15:58:00.749: INFO: New ReplicaSet "test-recreate-deployment-c9cbd8684" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-c9cbd8684,GenerateName:,Namespace:deployment-2369,SelfLink:/apis/apps/v1/namespaces/deployment-2369/replicasets/test-recreate-deployment-c9cbd8684,UID:dc1bd2fb-8873-11e9-9995-4ad9032ea524,ResourceVersion:3959115551,Generation:1,CreationTimestamp:2019-06-06 15:58:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: c9cbd8684,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d983241b-8873-11e9-9995-4ad9032ea524 0xc000abb970 0xc000abb971}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c9cbd8684,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: c9cbd8684,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 6 15:58:00.749: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Jun 6 15:58:00.749: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-7d57d5ff7c,GenerateName:,Namespace:deployment-2369,SelfLink:/apis/apps/v1/namespaces/deployment-2369/replicasets/test-recreate-deployment-7d57d5ff7c,UID:d985b824-8873-11e9-9995-4ad9032ea524,ResourceVersion:3959115542,Generation:2,CreationTimestamp:2019-06-06 15:57:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 7d57d5ff7c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d983241b-8873-11e9-9995-4ad9032ea524 0xc000abab67 0xc000abab68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
7d57d5ff7c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 7d57d5ff7c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 6 15:58:00.754: INFO: Pod "test-recreate-deployment-c9cbd8684-ff2vd" is not available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-c9cbd8684-ff2vd,GenerateName:test-recreate-deployment-c9cbd8684-,Namespace:deployment-2369,SelfLink:/api/v1/namespaces/deployment-2369/pods/test-recreate-deployment-c9cbd8684-ff2vd,UID:dc1d2c1e-8873-11e9-9995-4ad9032ea524,ResourceVersion:3959115560,Generation:0,CreationTimestamp:2019-06-06 15:58:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: c9cbd8684,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-c9cbd8684 dc1bd2fb-8873-11e9-9995-4ad9032ea524 0xc002a6ad70 0xc002a6ad71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jq9tj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jq9tj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jq9tj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a6adf0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002a6ae10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 15:58:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 15:58:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 15:58:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 15:58:00 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:,StartTime:2019-06-06 15:58:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:58:00.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-2369" for this suite. 
+Jun 6 15:58:06.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:58:07.032: INFO: namespace deployment-2369 deletion completed in 6.272110801s + +• [SLOW TEST:10.786 seconds] +[sig-apps] Deployment +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl version + should check is all data is printed [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:58:07.034: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should check is all data is printed [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 6 15:58:07.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 version' +Jun 6 15:58:07.261: INFO: stderr: "" +Jun 6 
15:58:07.261: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"14\", GitVersion:\"v1.14.2\", GitCommit:\"66049e3b21efe110454d67df4fa62b08ea79a19b\", GitTreeState:\"clean\", BuildDate:\"2019-05-16T16:23:09Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"14\", GitVersion:\"v1.14.2\", GitCommit:\"66049e3b21efe110454d67df4fa62b08ea79a19b\", GitTreeState:\"clean\", BuildDate:\"2019-05-16T16:14:56Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:58:07.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3803" for this suite. +Jun 6 15:58:13.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:58:13.519: INFO: namespace kubectl-3803 deletion completed in 6.250488172s + +• [SLOW TEST:6.486 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl version + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should check is all data is printed [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[k8s.io] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:58:13.520: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test env composition +Jun 6 15:58:13.645: INFO: Waiting up to 5m0s for pod "var-expansion-e3d1c25a-8873-11e9-b3bf-0e7bbe1a64f6" in namespace "var-expansion-520" to be "success or failure" +Jun 6 15:58:13.650: INFO: Pod "var-expansion-e3d1c25a-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.054594ms +Jun 6 15:58:15.659: INFO: Pod "var-expansion-e3d1c25a-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014178615s +Jun 6 15:58:17.668: INFO: Pod "var-expansion-e3d1c25a-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022699267s +STEP: Saw pod success +Jun 6 15:58:17.668: INFO: Pod "var-expansion-e3d1c25a-8873-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 15:58:17.675: INFO: Trying to get logs from node cncf-2 pod var-expansion-e3d1c25a-8873-11e9-b3bf-0e7bbe1a64f6 container dapi-container: +STEP: delete the pod +Jun 6 15:58:17.730: INFO: Waiting for pod var-expansion-e3d1c25a-8873-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 15:58:17.739: INFO: Pod var-expansion-e3d1c25a-8873-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:58:17.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-520" for this suite. +Jun 6 15:58:23.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:58:24.059: INFO: namespace var-expansion-520 deletion completed in 6.310583513s + +• [SLOW TEST:10.539 seconds] +[k8s.io] Variable Expansion +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:58:24.060: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl run pod + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1583 +[It] should create a pod from an image when restart is Never [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 6 15:58:24.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7172' +Jun 6 15:58:24.582: INFO: stderr: "" +Jun 6 15:58:24.582: INFO: stdout: "pod/e2e-test-nginx-pod created\n" +STEP: verifying the pod e2e-test-nginx-pod was created +[AfterEach] [k8s.io] Kubectl run pod + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1588 +Jun 6 15:58:24.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete pods e2e-test-nginx-pod --namespace=kubectl-7172' +Jun 6 15:58:30.707: INFO: stderr: "" +Jun 6 15:58:30.707: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:58:30.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7172" for this suite. +Jun 6 15:58:36.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:58:36.978: INFO: namespace kubectl-7172 deletion completed in 6.258351048s + +• [SLOW TEST:12.919 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl run pod + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create a pod from an image when restart is Never [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:58:36.980: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 6 15:58:37.088: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f1ca9abc-8873-11e9-b3bf-0e7bbe1a64f6" in namespace "downward-api-5978" to be "success or failure" +Jun 6 15:58:37.103: INFO: Pod "downwardapi-volume-f1ca9abc-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.064672ms +Jun 6 15:58:39.110: INFO: Pod "downwardapi-volume-f1ca9abc-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021415195s +Jun 6 15:58:41.118: INFO: Pod "downwardapi-volume-f1ca9abc-8873-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029407114s +STEP: Saw pod success +Jun 6 15:58:41.118: INFO: Pod "downwardapi-volume-f1ca9abc-8873-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 15:58:41.124: INFO: Trying to get logs from node cncf-2 pod downwardapi-volume-f1ca9abc-8873-11e9-b3bf-0e7bbe1a64f6 container client-container: +STEP: delete the pod +Jun 6 15:58:41.155: INFO: Waiting for pod downwardapi-volume-f1ca9abc-8873-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 15:58:41.160: INFO: Pod downwardapi-volume-f1ca9abc-8873-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:58:41.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5978" for this suite. 
+Jun 6 15:58:49.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:58:49.433: INFO: namespace downward-api-5978 deletion completed in 8.265253179s + +• [SLOW TEST:12.454 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl logs + should be able to retrieve and filter logs [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:58:49.433: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl logs + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1190 +STEP: creating an rc +Jun 6 15:58:49.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 create -f - --namespace=kubectl-2227' +Jun 6 15:58:49.906: INFO: stderr: 
"" +Jun 6 15:58:49.906: INFO: stdout: "replicationcontroller/redis-master created\n" +[It] should be able to retrieve and filter logs [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Waiting for Redis master to start. +Jun 6 15:58:50.925: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 15:58:50.925: INFO: Found 0 / 1 +Jun 6 15:58:51.916: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 15:58:51.916: INFO: Found 0 / 1 +Jun 6 15:58:52.920: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 15:58:52.920: INFO: Found 1 / 1 +Jun 6 15:58:52.920: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Jun 6 15:58:52.927: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 15:58:52.927: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +STEP: checking for a matching strings +Jun 6 15:58:52.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 logs redis-master-dzr74 redis-master --namespace=kubectl-2227' +Jun 6 15:58:53.122: INFO: stderr: "" +Jun 6 15:58:53.123: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 06 Jun 15:58:51.949 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Jun 15:58:51.949 # Server started, Redis version 3.2.12\n1:M 06 Jun 15:58:51.950 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 Jun 15:58:51.950 * The server is now ready to accept connections on port 6379\n" +STEP: limiting log lines +Jun 6 15:58:53.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 log redis-master-dzr74 redis-master --namespace=kubectl-2227 --tail=1' +Jun 6 15:58:53.291: INFO: stderr: "" +Jun 6 15:58:53.291: INFO: stdout: "1:M 06 Jun 15:58:51.950 * The server is now ready to accept connections on port 6379\n" +STEP: limiting log bytes +Jun 6 15:58:53.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 log redis-master-dzr74 redis-master --namespace=kubectl-2227 --limit-bytes=1' +Jun 6 15:58:53.461: INFO: stderr: "" +Jun 6 15:58:53.461: INFO: stdout: " " +STEP: exposing timestamps +Jun 6 15:58:53.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 log redis-master-dzr74 redis-master --namespace=kubectl-2227 --tail=1 --timestamps' +Jun 6 15:58:53.631: INFO: stderr: "" 
+Jun 6 15:58:53.631: INFO: stdout: "2019-06-06T15:58:51.950522075Z 1:M 06 Jun 15:58:51.950 * The server is now ready to accept connections on port 6379\n" +STEP: restricting to a time range +Jun 6 15:58:56.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 log redis-master-dzr74 redis-master --namespace=kubectl-2227 --since=1s' +Jun 6 15:58:56.305: INFO: stderr: "" +Jun 6 15:58:56.305: INFO: stdout: "" +Jun 6 15:58:56.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 log redis-master-dzr74 redis-master --namespace=kubectl-2227 --since=24h' +Jun 6 15:58:56.467: INFO: stderr: "" +Jun 6 15:58:56.467: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 06 Jun 15:58:51.949 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Jun 15:58:51.949 # Server started, Redis version 3.2.12\n1:M 06 Jun 15:58:51.950 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 06 Jun 15:58:51.950 * The server is now ready to accept connections on port 6379\n" +[AfterEach] [k8s.io] Kubectl logs + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1196 +STEP: using delete to clean up resources +Jun 6 15:58:56.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete --grace-period=0 --force -f - --namespace=kubectl-2227' +Jun 6 15:58:56.618: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 6 15:58:56.618: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" +Jun 6 15:58:56.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get rc,svc -l name=nginx --no-headers --namespace=kubectl-2227' +Jun 6 15:58:56.779: INFO: stderr: "No resources found.\n" +Jun 6 15:58:56.779: INFO: stdout: "" +Jun 6 15:58:56.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -l name=nginx --namespace=kubectl-2227 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jun 6 15:58:56.928: INFO: stderr: "" +Jun 6 15:58:56.928: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:58:56.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2227" for this suite. 
+Jun 6 15:59:02.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:59:03.219: INFO: namespace kubectl-2227 deletion completed in 6.280934721s + +• [SLOW TEST:13.786 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl logs + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be able to retrieve and filter logs [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:59:03.220: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name 
projected-configmap-test-volume-0170eef2-8874-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume configMaps +Jun 6 15:59:03.353: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0172890e-8874-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-1530" to be "success or failure" +Jun 6 15:59:03.359: INFO: Pod "pod-projected-configmaps-0172890e-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120584ms +Jun 6 15:59:05.369: INFO: Pod "pod-projected-configmaps-0172890e-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01650905s +Jun 6 15:59:07.378: INFO: Pod "pod-projected-configmaps-0172890e-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025237196s +STEP: Saw pod success +Jun 6 15:59:07.378: INFO: Pod "pod-projected-configmaps-0172890e-8874-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 15:59:07.386: INFO: Trying to get logs from node cncf-2 pod pod-projected-configmaps-0172890e-8874-11e9-b3bf-0e7bbe1a64f6 container projected-configmap-volume-test: +STEP: delete the pod +Jun 6 15:59:07.426: INFO: Waiting for pod pod-projected-configmaps-0172890e-8874-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 15:59:07.436: INFO: Pod pod-projected-configmaps-0172890e-8874-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:59:07.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1530" for this suite. 
+Jun 6 15:59:13.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 15:59:13.687: INFO: namespace projected-1530 deletion completed in 6.242086781s + +• [SLOW TEST:10.468 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Networking + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 15:59:13.691: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Performing setup for networking test in namespace pod-network-test-491 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Jun 6 15:59:13.779: INFO: Waiting up to 
10m0s for all (but 0) nodes to be schedulable +STEP: Creating test pods +Jun 6 15:59:37.966: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.2.0.111:8080/dial?request=hostName&protocol=udp&host=10.2.1.51&port=8081&tries=1'] Namespace:pod-network-test-491 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 15:59:37.966: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 15:59:38.171: INFO: Waiting for endpoints: map[] +Jun 6 15:59:38.179: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.2.0.111:8080/dial?request=hostName&protocol=udp&host=10.2.0.110&port=8081&tries=1'] Namespace:pod-network-test-491 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 15:59:38.179: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 15:59:38.385: INFO: Waiting for endpoints: map[] +[AfterEach] [sig-network] Networking + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 15:59:38.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-491" for this suite. 
+Jun 6 16:00:00.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:00:00.658: INFO: namespace pod-network-test-491 deletion completed in 22.2603749s + +• [SLOW TEST:46.967 seconds] +[sig-network] Networking +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 + Granular Checks: Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 + should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Services + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:00:00.659: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:86 +[It] should serve a basic endpoint from pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: 
creating service endpoint-test2 in namespace services-9183 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9183 to expose endpoints map[] +Jun 6 16:00:00.781: INFO: successfully validated that service endpoint-test2 in namespace services-9183 exposes endpoints map[] (6.653207ms elapsed) +STEP: Creating pod pod1 in namespace services-9183 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9183 to expose endpoints map[pod1:[80]] +Jun 6 16:00:04.871: INFO: successfully validated that service endpoint-test2 in namespace services-9183 exposes endpoints map[pod1:[80]] (4.075395447s elapsed) +STEP: Creating pod pod2 in namespace services-9183 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9183 to expose endpoints map[pod1:[80] pod2:[80]] +Jun 6 16:00:07.967: INFO: successfully validated that service endpoint-test2 in namespace services-9183 exposes endpoints map[pod1:[80] pod2:[80]] (3.082801403s elapsed) +STEP: Deleting pod pod1 in namespace services-9183 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9183 to expose endpoints map[pod2:[80]] +Jun 6 16:00:08.008: INFO: successfully validated that service endpoint-test2 in namespace services-9183 exposes endpoints map[pod2:[80]] (26.864133ms elapsed) +STEP: Deleting pod pod2 in namespace services-9183 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9183 to expose endpoints map[] +Jun 6 16:00:09.033: INFO: successfully validated that service endpoint-test2 in namespace services-9183 exposes endpoints map[] (1.013471768s elapsed) +[AfterEach] [sig-network] Services + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:00:09.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9183" for this suite. 
+Jun 6 16:00:31.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:00:31.370: INFO: namespace services-9183 deletion completed in 22.281275991s +[AfterEach] [sig-network] Services + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 + +• [SLOW TEST:30.711 seconds] +[sig-network] Services +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should serve a basic endpoint from pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:00:31.371: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-35fcd485-8874-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume 
configMaps +Jun 6 16:00:31.511: INFO: Waiting up to 5m0s for pod "pod-configmaps-35fe5b81-8874-11e9-b3bf-0e7bbe1a64f6" in namespace "configmap-8576" to be "success or failure" +Jun 6 16:00:31.521: INFO: Pod "pod-configmaps-35fe5b81-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.227185ms +Jun 6 16:00:33.530: INFO: Pod "pod-configmaps-35fe5b81-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018748205s +Jun 6 16:00:35.539: INFO: Pod "pod-configmaps-35fe5b81-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027720395s +STEP: Saw pod success +Jun 6 16:00:35.539: INFO: Pod "pod-configmaps-35fe5b81-8874-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:00:35.547: INFO: Trying to get logs from node cncf-1 pod pod-configmaps-35fe5b81-8874-11e9-b3bf-0e7bbe1a64f6 container configmap-volume-test: +STEP: delete the pod +Jun 6 16:00:35.594: INFO: Waiting for pod pod-configmaps-35fe5b81-8874-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:00:35.600: INFO: Pod pod-configmaps-35fe5b81-8874-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:00:35.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8576" for this suite. 
+Jun 6 16:00:41.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:00:41.895: INFO: namespace configmap-8576 deletion completed in 6.286630147s + +• [SLOW TEST:10.524 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:00:41.895: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0666 on node default medium +Jun 6 16:00:42.036: INFO: Waiting up to 5m0s for pod "pod-3c436875-8874-11e9-b3bf-0e7bbe1a64f6" in namespace "emptydir-2831" to be "success or failure" +Jun 6 16:00:42.043: INFO: Pod "pod-3c436875-8874-11e9-b3bf-0e7bbe1a64f6": 
Phase="Pending", Reason="", readiness=false. Elapsed: 7.073959ms +Jun 6 16:00:44.052: INFO: Pod "pod-3c436875-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015641053s +Jun 6 16:00:46.060: INFO: Pod "pod-3c436875-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023474151s +STEP: Saw pod success +Jun 6 16:00:46.060: INFO: Pod "pod-3c436875-8874-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:00:46.069: INFO: Trying to get logs from node cncf-2 pod pod-3c436875-8874-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 16:00:46.117: INFO: Waiting for pod pod-3c436875-8874-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:00:46.125: INFO: Pod pod-3c436875-8874-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:00:46.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2831" for this suite. 
+Jun 6 16:00:52.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:00:52.337: INFO: namespace emptydir-2831 deletion completed in 6.206558235s + +• [SLOW TEST:10.442 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:00:52.339: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-42763208-8874-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume configMaps +Jun 6 16:00:52.435: INFO: Waiting up to 5m0s for pod "pod-configmaps-42778c0a-8874-11e9-b3bf-0e7bbe1a64f6" in namespace "configmap-369" to be "success or failure" +Jun 
6 16:00:52.446: INFO: Pod "pod-configmaps-42778c0a-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.062299ms +Jun 6 16:00:54.452: INFO: Pod "pod-configmaps-42778c0a-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017432653s +Jun 6 16:00:56.461: INFO: Pod "pod-configmaps-42778c0a-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025987043s +STEP: Saw pod success +Jun 6 16:00:56.461: INFO: Pod "pod-configmaps-42778c0a-8874-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:00:56.470: INFO: Trying to get logs from node cncf-1 pod pod-configmaps-42778c0a-8874-11e9-b3bf-0e7bbe1a64f6 container configmap-volume-test: +STEP: delete the pod +Jun 6 16:00:56.502: INFO: Waiting for pod pod-configmaps-42778c0a-8874-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:00:56.508: INFO: Pod pod-configmaps-42778c0a-8874-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:00:56.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-369" for this suite. 
+Jun 6 16:01:02.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:01:02.722: INFO: namespace configmap-369 deletion completed in 6.207864008s + +• [SLOW TEST:10.384 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:01:02.723: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace statefulset-751 +[It] should perform canary updates 
and phased rolling updates of template modifications [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a new StatefulSet +Jun 6 16:01:02.837: INFO: Found 0 stateful pods, waiting for 3 +Jun 6 16:01:12.847: INFO: Found 1 stateful pods, waiting for 3 +Jun 6 16:01:22.847: INFO: Found 1 stateful pods, waiting for 3 +Jun 6 16:01:32.845: INFO: Found 2 stateful pods, waiting for 3 +Jun 6 16:01:42.845: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Jun 6 16:01:42.845: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Jun 6 16:01:42.845: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine +Jun 6 16:01:42.905: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Not applying an update when the partition is greater than the number of replicas +STEP: Performing a canary update +Jun 6 16:01:52.967: INFO: Updating stateful set ss2 +Jun 6 16:01:52.980: INFO: Waiting for Pod statefulset-751/ss2-2 to have revision ss2-c79899b9 update revision ss2-787997d666 +Jun 6 16:02:02.994: INFO: Waiting for Pod statefulset-751/ss2-2 to have revision ss2-c79899b9 update revision ss2-787997d666 +Jun 6 16:02:13.000: INFO: Waiting for Pod statefulset-751/ss2-2 to have revision ss2-c79899b9 update revision ss2-787997d666 +STEP: Restoring Pods to the correct revision when they are deleted +Jun 6 16:02:23.046: INFO: Found 2 stateful pods, waiting for 3 +Jun 6 16:02:33.305: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Jun 6 16:02:33.305: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Jun 6 16:02:33.305: INFO: 
Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update +Jun 6 16:02:33.361: INFO: Updating stateful set ss2 +Jun 6 16:02:33.382: INFO: Waiting for Pod statefulset-751/ss2-1 to have revision ss2-c79899b9 update revision ss2-787997d666 +Jun 6 16:02:43.432: INFO: Updating stateful set ss2 +Jun 6 16:02:43.446: INFO: Waiting for StatefulSet statefulset-751/ss2 to complete update +Jun 6 16:02:43.446: INFO: Waiting for Pod statefulset-751/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666 +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Jun 6 16:02:53.462: INFO: Deleting all statefulset in ns statefulset-751 +Jun 6 16:02:53.470: INFO: Scaling statefulset ss2 to 0 +Jun 6 16:03:03.508: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 6 16:03:03.515: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:03:03.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-751" for this suite. 
+Jun 6 16:03:09.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:03:09.867: INFO: namespace statefulset-751 deletion completed in 6.309764746s + +• [SLOW TEST:127.144 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl patch + should add annotations for pods in rc [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:03:09.868: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should add annotations for pods in rc [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating Redis RC +Jun 6 16:03:09.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 create -f - --namespace=kubectl-4723' +Jun 6 16:03:10.234: INFO: stderr: "" +Jun 6 16:03:10.234: INFO: stdout: "replicationcontroller/redis-master created\n" +STEP: Waiting for Redis master to start. +Jun 6 16:03:11.243: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 16:03:11.243: INFO: Found 0 / 1 +Jun 6 16:03:12.242: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 16:03:12.242: INFO: Found 0 / 1 +Jun 6 16:03:13.247: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 16:03:13.247: INFO: Found 1 / 1 +Jun 6 16:03:13.247: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods +Jun 6 16:03:13.254: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 16:03:13.254: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Jun 6 16:03:13.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 patch pod redis-master-9tqfv --namespace=kubectl-4723 -p {"metadata":{"annotations":{"x":"y"}}}' +Jun 6 16:03:13.434: INFO: stderr: "" +Jun 6 16:03:13.434: INFO: stdout: "pod/redis-master-9tqfv patched\n" +STEP: checking annotations +Jun 6 16:03:13.446: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 16:03:13.446: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:03:13.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4723" for this suite. 
+Jun 6 16:03:35.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:03:35.739: INFO: namespace kubectl-4723 deletion completed in 22.283213408s + +• [SLOW TEST:25.872 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl patch + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should add annotations for pods in rc [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:03:35.740: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-upd-a3e1deee-8874-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating the pod +STEP: Updating configmap configmap-test-upd-a3e1deee-8874-11e9-b3bf-0e7bbe1a64f6 +STEP: 
waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:03:41.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3345" for this suite. +Jun 6 16:04:06.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:04:06.239: INFO: namespace configmap-3345 deletion completed in 24.243656551s + +• [SLOW TEST:30.499 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Update Demo + should create and stop a replication controller [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:04:06.242: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] 
Update Demo + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:265 +[It] should create and stop a replication controller [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating a replication controller +Jun 6 16:04:06.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 create -f - --namespace=kubectl-5025' +Jun 6 16:04:06.587: INFO: stderr: "" +Jun 6 16:04:06.587: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Jun 6 16:04:06.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5025' +Jun 6 16:04:06.743: INFO: stderr: "" +Jun 6 16:04:06.743: INFO: stdout: "update-demo-nautilus-5vwrf update-demo-nautilus-gqszr " +Jun 6 16:04:06.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-5vwrf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5025' +Jun 6 16:04:06.885: INFO: stderr: "" +Jun 6 16:04:06.885: INFO: stdout: "" +Jun 6 16:04:06.885: INFO: update-demo-nautilus-5vwrf is created but not running +Jun 6 16:04:11.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5025' +Jun 6 16:04:12.047: INFO: stderr: "" +Jun 6 16:04:12.047: INFO: stdout: "update-demo-nautilus-5vwrf update-demo-nautilus-gqszr " +Jun 6 16:04:12.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-5vwrf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5025' +Jun 6 16:04:12.165: INFO: stderr: "" +Jun 6 16:04:12.165: INFO: stdout: "true" +Jun 6 16:04:12.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-5vwrf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5025' +Jun 6 16:04:12.307: INFO: stderr: "" +Jun 6 16:04:12.307: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 6 16:04:12.307: INFO: validating pod update-demo-nautilus-5vwrf +Jun 6 16:04:12.324: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 6 16:04:12.324: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 6 16:04:12.324: INFO: update-demo-nautilus-5vwrf is verified up and running +Jun 6 16:04:12.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-gqszr -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5025' +Jun 6 16:04:12.462: INFO: stderr: "" +Jun 6 16:04:12.462: INFO: stdout: "true" +Jun 6 16:04:12.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-gqszr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5025' +Jun 6 16:04:12.636: INFO: stderr: "" +Jun 6 16:04:12.636: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 6 16:04:12.636: INFO: validating pod update-demo-nautilus-gqszr +Jun 6 16:04:12.648: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 6 16:04:12.648: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 6 16:04:12.648: INFO: update-demo-nautilus-gqszr is verified up and running +STEP: using delete to clean up resources +Jun 6 16:04:12.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete --grace-period=0 --force -f - --namespace=kubectl-5025' +Jun 6 16:04:12.790: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Jun 6 16:04:12.790: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Jun 6 16:04:12.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5025' +Jun 6 16:04:12.943: INFO: stderr: "No resources found.\n" +Jun 6 16:04:12.943: INFO: stdout: "" +Jun 6 16:04:12.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -l name=update-demo --namespace=kubectl-5025 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jun 6 16:04:13.091: INFO: stderr: "" +Jun 6 16:04:13.091: INFO: stdout: "update-demo-nautilus-5vwrf\nupdate-demo-nautilus-gqszr\n" +Jun 6 16:04:13.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5025' +Jun 6 16:04:14.412: INFO: stderr: "No resources found.\n" +Jun 6 16:04:14.412: INFO: stdout: "" +Jun 6 16:04:14.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -l name=update-demo --namespace=kubectl-5025 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jun 6 16:04:14.577: INFO: stderr: "" +Jun 6 16:04:14.577: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:04:14.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5025" for this suite. 
+Jun 6 16:04:36.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:04:36.892: INFO: namespace kubectl-5025 deletion completed in 22.305501364s + +• [SLOW TEST:30.651 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Update Demo + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create and stop a replication controller [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:04:36.892: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod pod-subpath-test-configmap-rcld +STEP: Creating a pod to test atomic-volume-subpath +Jun 6 16:04:37.047: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rcld" in namespace "subpath-2068" to be "success or failure" +Jun 6 16:04:37.054: INFO: Pod "pod-subpath-test-configmap-rcld": Phase="Pending", Reason="", readiness=false. Elapsed: 6.823128ms +Jun 6 16:04:39.069: INFO: Pod "pod-subpath-test-configmap-rcld": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021638279s +Jun 6 16:04:41.076: INFO: Pod "pod-subpath-test-configmap-rcld": Phase="Running", Reason="", readiness=true. Elapsed: 4.028950273s +Jun 6 16:04:43.084: INFO: Pod "pod-subpath-test-configmap-rcld": Phase="Running", Reason="", readiness=true. Elapsed: 6.036710568s +Jun 6 16:04:45.093: INFO: Pod "pod-subpath-test-configmap-rcld": Phase="Running", Reason="", readiness=true. Elapsed: 8.045859784s +Jun 6 16:04:47.102: INFO: Pod "pod-subpath-test-configmap-rcld": Phase="Running", Reason="", readiness=true. Elapsed: 10.054329237s +Jun 6 16:04:49.112: INFO: Pod "pod-subpath-test-configmap-rcld": Phase="Running", Reason="", readiness=true. Elapsed: 12.064428551s +Jun 6 16:04:51.121: INFO: Pod "pod-subpath-test-configmap-rcld": Phase="Running", Reason="", readiness=true. Elapsed: 14.073289751s +Jun 6 16:04:53.130: INFO: Pod "pod-subpath-test-configmap-rcld": Phase="Running", Reason="", readiness=true. Elapsed: 16.082245011s +Jun 6 16:04:55.140: INFO: Pod "pod-subpath-test-configmap-rcld": Phase="Running", Reason="", readiness=true. Elapsed: 18.092668681s +Jun 6 16:04:57.149: INFO: Pod "pod-subpath-test-configmap-rcld": Phase="Running", Reason="", readiness=true. Elapsed: 20.101520403s +Jun 6 16:04:59.160: INFO: Pod "pod-subpath-test-configmap-rcld": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.112329728s +Jun 6 16:05:01.351: INFO: Pod "pod-subpath-test-configmap-rcld": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.303130783s +STEP: Saw pod success +Jun 6 16:05:01.351: INFO: Pod "pod-subpath-test-configmap-rcld" satisfied condition "success or failure" +Jun 6 16:05:01.372: INFO: Trying to get logs from node cncf-2 pod pod-subpath-test-configmap-rcld container test-container-subpath-configmap-rcld: +STEP: delete the pod +Jun 6 16:05:01.420: INFO: Waiting for pod pod-subpath-test-configmap-rcld to disappear +Jun 6 16:05:01.426: INFO: Pod pod-subpath-test-configmap-rcld no longer exists +STEP: Deleting pod pod-subpath-test-configmap-rcld +Jun 6 16:05:01.426: INFO: Deleting pod "pod-subpath-test-configmap-rcld" in namespace "subpath-2068" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:05:01.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-2068" for this suite. 
+Jun 6 16:05:07.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:05:07.707: INFO: namespace subpath-2068 deletion completed in 6.265178501s + +• [SLOW TEST:30.815 seconds] +[sig-storage] Subpath +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:05:07.708: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-dab14d1c-8874-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume 
configMaps +Jun 6 16:05:07.845: INFO: Waiting up to 5m0s for pod "pod-configmaps-dab3217c-8874-11e9-b3bf-0e7bbe1a64f6" in namespace "configmap-6752" to be "success or failure" +Jun 6 16:05:07.868: INFO: Pod "pod-configmaps-dab3217c-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 23.319562ms +Jun 6 16:05:09.875: INFO: Pod "pod-configmaps-dab3217c-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030333288s +Jun 6 16:05:11.885: INFO: Pod "pod-configmaps-dab3217c-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040169279s +STEP: Saw pod success +Jun 6 16:05:11.886: INFO: Pod "pod-configmaps-dab3217c-8874-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:05:11.894: INFO: Trying to get logs from node cncf-1 pod pod-configmaps-dab3217c-8874-11e9-b3bf-0e7bbe1a64f6 container configmap-volume-test: +STEP: delete the pod +Jun 6 16:05:11.937: INFO: Waiting for pod pod-configmaps-dab3217c-8874-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:05:11.943: INFO: Pod pod-configmaps-dab3217c-8874-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:05:11.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6752" for this suite. 
+Jun 6 16:05:17.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:05:18.227: INFO: namespace configmap-6752 deletion completed in 6.27600196s + +• [SLOW TEST:10.519 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:05:18.227: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name secret-test-map-e0f4891b-8874-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume secrets +Jun 6 16:05:18.347: INFO: Waiting up to 5m0s for pod "pod-secrets-e0f66f6f-8874-11e9-b3bf-0e7bbe1a64f6" in namespace "secrets-4760" to be "success or failure" +Jun 6 16:05:18.358: INFO: 
Pod "pod-secrets-e0f66f6f-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.249903ms +Jun 6 16:05:20.366: INFO: Pod "pod-secrets-e0f66f6f-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018787891s +Jun 6 16:05:22.373: INFO: Pod "pod-secrets-e0f66f6f-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025188256s +STEP: Saw pod success +Jun 6 16:05:22.373: INFO: Pod "pod-secrets-e0f66f6f-8874-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:05:22.380: INFO: Trying to get logs from node cncf-2 pod pod-secrets-e0f66f6f-8874-11e9-b3bf-0e7bbe1a64f6 container secret-volume-test: +STEP: delete the pod +Jun 6 16:05:22.417: INFO: Waiting for pod pod-secrets-e0f66f6f-8874-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:05:22.423: INFO: Pod pod-secrets-e0f66f6f-8874-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:05:22.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-4760" for this suite. 
+Jun 6 16:05:28.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:05:28.660: INFO: namespace secrets-4760 deletion completed in 6.228818601s + +• [SLOW TEST:10.433 seconds] +[sig-storage] Secrets +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:05:28.661: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 6 16:05:28.755: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-e7299367-8874-11e9-b3bf-0e7bbe1a64f6" in namespace "downward-api-2438" to be "success or failure" +Jun 6 16:05:28.767: INFO: Pod "downwardapi-volume-e7299367-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.057896ms +Jun 6 16:05:30.776: INFO: Pod "downwardapi-volume-e7299367-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021140825s +Jun 6 16:05:32.788: INFO: Pod "downwardapi-volume-e7299367-8874-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032347926s +STEP: Saw pod success +Jun 6 16:05:32.788: INFO: Pod "downwardapi-volume-e7299367-8874-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:05:32.795: INFO: Trying to get logs from node cncf-2 pod downwardapi-volume-e7299367-8874-11e9-b3bf-0e7bbe1a64f6 container client-container: +STEP: delete the pod +Jun 6 16:05:32.844: INFO: Waiting for pod downwardapi-volume-e7299367-8874-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:05:32.849: INFO: Pod downwardapi-volume-e7299367-8874-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:05:32.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-2438" for this suite. 
+Jun 6 16:05:38.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:05:39.161: INFO: namespace downward-api-2438 deletion completed in 6.305088105s + +• [SLOW TEST:10.500 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-api-machinery] Watchers + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:05:39.161: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating a watch on configmaps +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: closing the watch once it receives two notifications +Jun 6 16:05:39.318: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7795,SelfLink:/api/v1/namespaces/watch-7795/configmaps/e2e-watch-test-watch-closed,UID:ed7422bc-8874-11e9-9995-4ad9032ea524,ResourceVersion:3959198527,Generation:0,CreationTimestamp:2019-06-06 16:05:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 6 16:05:39.319: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7795,SelfLink:/api/v1/namespaces/watch-7795/configmaps/e2e-watch-test-watch-closed,UID:ed7422bc-8874-11e9-9995-4ad9032ea524,ResourceVersion:3959198530,Generation:0,CreationTimestamp:2019-06-06 16:05:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +STEP: modifying the configmap a second time, while the watch is closed +STEP: creating a new watch on configmaps from the last resource version observed by the first watch +STEP: deleting the configmap +STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed +Jun 6 16:05:39.356: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7795,SelfLink:/api/v1/namespaces/watch-7795/configmaps/e2e-watch-test-watch-closed,UID:ed7422bc-8874-11e9-9995-4ad9032ea524,ResourceVersion:3959198533,Generation:0,CreationTimestamp:2019-06-06 16:05:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Jun 6 16:05:39.356: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7795,SelfLink:/api/v1/namespaces/watch-7795/configmaps/e2e-watch-test-watch-closed,UID:ed7422bc-8874-11e9-9995-4ad9032ea524,ResourceVersion:3959198538,Generation:0,CreationTimestamp:2019-06-06 16:05:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:05:39.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-7795" for this suite. 
+Jun 6 16:05:45.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:05:45.644: INFO: namespace watch-7795 deletion completed in 6.277854453s + +• [SLOW TEST:6.483 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:05:45.644: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 +STEP: create the container to handle the HTTPGet hook request. 
+[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the pod with lifecycle hook +STEP: delete the pod with lifecycle hook +Jun 6 16:05:53.819: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 6 16:05:53.826: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 6 16:05:55.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 6 16:05:55.836: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 6 16:05:57.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 6 16:05:57.835: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 6 16:05:59.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 6 16:05:59.836: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 6 16:06:01.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 6 16:06:01.834: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 6 16:06:03.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 6 16:06:03.832: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 6 16:06:05.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 6 16:06:05.832: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 6 16:06:07.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 6 16:06:07.836: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 6 16:06:09.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 6 16:06:09.834: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 6 16:06:11.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 6 16:06:11.834: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 6 16:06:13.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 6 16:06:13.834: INFO: Pod pod-with-prestop-exec-hook 
still exists +Jun 6 16:06:15.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 6 16:06:15.834: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 6 16:06:17.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 6 16:06:17.834: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 6 16:06:19.828: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 6 16:06:19.835: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 6 16:06:21.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 6 16:06:21.834: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:06:21.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-3456" for this suite. 
+Jun 6 16:06:43.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:06:44.119: INFO: namespace container-lifecycle-hook-3456 deletion completed in 22.247269614s + +• [SLOW TEST:58.475 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when create a pod with lifecycle hook + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:06:44.119: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Jun 6 16:06:44.229: INFO: Waiting up to 5m0s for pod 
"pod-14270e8f-8875-11e9-b3bf-0e7bbe1a64f6" in namespace "emptydir-652" to be "success or failure" +Jun 6 16:06:44.234: INFO: Pod "pod-14270e8f-8875-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.068461ms +Jun 6 16:06:46.244: INFO: Pod "pod-14270e8f-8875-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014918647s +Jun 6 16:06:48.251: INFO: Pod "pod-14270e8f-8875-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021984668s +STEP: Saw pod success +Jun 6 16:06:48.251: INFO: Pod "pod-14270e8f-8875-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:06:48.257: INFO: Trying to get logs from node cncf-2 pod pod-14270e8f-8875-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 16:06:48.290: INFO: Waiting for pod pod-14270e8f-8875-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:06:48.297: INFO: Pod pod-14270e8f-8875-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:06:48.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-652" for this suite. 
+Jun 6 16:06:54.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:06:54.571: INFO: namespace emptydir-652 deletion completed in 6.2656767s + +• [SLOW TEST:10.452 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:06:54.572: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod liveness-http in namespace container-probe-360 +Jun 6 
16:06:58.704: INFO: Started pod liveness-http in namespace container-probe-360 +STEP: checking the pod's current state and verifying that restartCount is present +Jun 6 16:06:58.711: INFO: Initial restart count of pod liveness-http is 0 +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:10:59.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-360" for this suite. +Jun 6 16:11:05.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:11:06.000: INFO: namespace container-probe-360 deletion completed in 6.287765516s + +• [SLOW TEST:251.429 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-node] ConfigMap + should fail to create ConfigMap with empty key [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:11:06.000: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in 
namespace +[It] should fail to create ConfigMap with empty key [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap that has name configmap-test-emptyKey-b03fe601-8875-11e9-b3bf-0e7bbe1a64f6 +[AfterEach] [sig-node] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:11:06.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-386" for this suite. +Jun 6 16:11:12.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:11:12.381: INFO: namespace configmap-386 deletion completed in 6.2661094s + +• [SLOW TEST:6.381 seconds] +[sig-node] ConfigMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32 + should fail to create ConfigMap with empty key [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:11:12.381: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected 
+STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 6 16:11:12.499: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b40c9ad1-8875-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-3139" to be "success or failure" +Jun 6 16:11:12.505: INFO: Pod "downwardapi-volume-b40c9ad1-8875-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423841ms +Jun 6 16:11:14.515: INFO: Pod "downwardapi-volume-b40c9ad1-8875-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016010719s +Jun 6 16:11:16.524: INFO: Pod "downwardapi-volume-b40c9ad1-8875-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024932495s +STEP: Saw pod success +Jun 6 16:11:16.524: INFO: Pod "downwardapi-volume-b40c9ad1-8875-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:11:16.531: INFO: Trying to get logs from node cncf-2 pod downwardapi-volume-b40c9ad1-8875-11e9-b3bf-0e7bbe1a64f6 container client-container: +STEP: delete the pod +Jun 6 16:11:16.565: INFO: Waiting for pod downwardapi-volume-b40c9ad1-8875-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:11:16.570: INFO: Pod downwardapi-volume-b40c9ad1-8875-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:11:16.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3139" for this suite. +Jun 6 16:11:22.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:11:22.837: INFO: namespace projected-3139 deletion completed in 6.258838477s + +• [SLOW TEST:10.455 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:11:22.837: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace statefulset-9101 +[It] Burst scaling should run to completion even with unhealthy pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating stateful set ss in namespace statefulset-9101 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9101 +Jun 6 16:11:22.971: INFO: Found 0 stateful pods, waiting for 1 +Jun 6 16:11:32.983: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod +Jun 6 16:11:32.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 6 16:11:33.424: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html 
/tmp/\n" +Jun 6 16:11:33.424: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 6 16:11:33.424: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 6 16:11:33.512: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Jun 6 16:11:43.521: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jun 6 16:11:43.521: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 6 16:11:43.556: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 6 16:11:43.556: INFO: ss-0 cncf-1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:22 +0000 UTC }] +Jun 6 16:11:43.556: INFO: +Jun 6 16:11:43.556: INFO: StatefulSet ss has not reached scale 3, at 1 +Jun 6 16:11:44.563: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991697219s +Jun 6 16:11:45.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984480687s +Jun 6 16:11:46.582: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.974447714s +Jun 6 16:11:47.592: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.965596609s +Jun 6 16:11:48.604: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.955989749s +Jun 6 16:11:49.614: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.943075588s +Jun 6 16:11:50.624: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.933866075s +Jun 6 16:11:51.634: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.923425582s +Jun 6 
16:11:52.643: INFO: Verifying statefulset ss doesn't scale past 3 for another 914.200721ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9101 +Jun 6 16:11:53.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:11:54.005: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" +Jun 6 16:11:54.005: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 6 16:11:54.005: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 6 16:11:54.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:11:54.368: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Jun 6 16:11:54.369: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 6 16:11:54.369: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 6 16:11:54.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:11:54.743: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Jun 6 16:11:54.743: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 6 16:11:54.743: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 
6 16:11:54.751: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Jun 6 16:11:54.751: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Jun 6 16:11:54.751: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod +Jun 6 16:11:54.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 6 16:11:55.155: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 6 16:11:55.155: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 6 16:11:55.155: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 6 16:11:55.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 6 16:11:55.480: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 6 16:11:55.480: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 6 16:11:55.480: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 6 16:11:55.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 6 16:11:55.843: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 6 16:11:55.843: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 6 16:11:55.843: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' 
-> '/tmp/index.html' + +Jun 6 16:11:55.843: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 6 16:11:55.850: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 +Jun 6 16:12:05.868: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jun 6 16:12:05.868: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Jun 6 16:12:05.868: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Jun 6 16:12:05.910: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 6 16:12:05.910: INFO: ss-0 cncf-1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:22 +0000 UTC }] +Jun 6 16:12:05.910: INFO: ss-1 cncf-2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC }] +Jun 6 16:12:05.910: INFO: ss-2 cncf-2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC }] +Jun 6 16:12:05.910: INFO: +Jun 6 16:12:05.910: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 6 16:12:06.919: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 6 16:12:06.919: INFO: ss-0 cncf-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:22 +0000 UTC }] +Jun 6 16:12:06.919: INFO: ss-1 cncf-2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC }] +Jun 6 16:12:06.919: INFO: ss-2 cncf-2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC }] +Jun 6 16:12:06.919: INFO: +Jun 6 16:12:06.920: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 6 16:12:07.929: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 6 16:12:07.929: INFO: ss-0 cncf-1 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:23 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:22 +0000 UTC }] +Jun 6 16:12:07.929: INFO: ss-1 cncf-2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC }] +Jun 6 16:12:07.929: INFO: ss-2 cncf-2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC }] +Jun 6 16:12:07.929: INFO: +Jun 6 16:12:07.929: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 6 16:12:08.939: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 6 16:12:08.939: INFO: ss-0 cncf-1 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:22 +0000 UTC }] +Jun 6 16:12:08.939: INFO: 
ss-1 cncf-2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC }] +Jun 6 16:12:08.939: INFO: ss-2 cncf-2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC }] +Jun 6 16:12:08.939: INFO: +Jun 6 16:12:08.939: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 6 16:12:09.948: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 6 16:12:09.948: INFO: ss-0 cncf-1 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:22 +0000 UTC }] +Jun 6 16:12:09.948: INFO: ss-1 cncf-2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC }] +Jun 6 16:12:09.948: INFO: ss-2 cncf-2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC }] +Jun 6 16:12:09.948: INFO: +Jun 6 16:12:09.948: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 6 16:12:10.957: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 6 16:12:10.957: INFO: ss-1 cncf-2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC }] +Jun 6 16:12:10.957: INFO: +Jun 6 16:12:10.957: INFO: StatefulSet ss has not reached scale 0, at 1 +Jun 6 16:12:11.966: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 6 16:12:11.966: INFO: ss-1 cncf-2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC }] +Jun 6 16:12:11.966: INFO: +Jun 6 16:12:11.966: INFO: StatefulSet ss has not reached scale 0, at 1 +Jun 6 16:12:12.975: 
INFO: POD NODE PHASE GRACE CONDITIONS +Jun 6 16:12:12.975: INFO: ss-1 cncf-2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC }] +Jun 6 16:12:12.975: INFO: +Jun 6 16:12:12.975: INFO: StatefulSet ss has not reached scale 0, at 1 +Jun 6 16:12:13.985: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 6 16:12:13.985: INFO: ss-1 cncf-2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC }] +Jun 6 16:12:13.985: INFO: +Jun 6 16:12:13.985: INFO: StatefulSet ss has not reached scale 0, at 1 +Jun 6 16:12:14.993: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 6 16:12:14.993: INFO: ss-1 cncf-2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:11:43 +0000 UTC }] +Jun 6 16:12:14.993: INFO: +Jun 6 16:12:14.993: INFO: StatefulSet ss has not reached scale 0, at 1 +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of 
pods will run in namespacestatefulset-9101 +Jun 6 16:12:16.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:12:16.291: INFO: rc: 1 +Jun 6 16:12:16.291: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") + [] 0xc0022b39e0 exit status 1 true [0xc002cca550 0xc002cca578 0xc002cca598] [0xc002cca550 0xc002cca578 0xc002cca598] [0xc002cca570 0xc002cca590] [0x9c00a0 0x9c00a0] 0xc00285e780 }: +Command stdout: + +stderr: +error: unable to upgrade connection: container not found ("nginx") + +error: +exit status 1 + +Jun 6 16:12:26.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:12:26.406: INFO: rc: 1 +Jun 6 16:12:26.407: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc0029d61b0 exit status 1 true [0xc0023c6eb0 0xc0023c6f58 0xc0023c6fd0] [0xc0023c6eb0 0xc0023c6f58 0xc0023c6fd0] [0xc0023c6ed8 0xc0023c6fa0] [0x9c00a0 0x9c00a0] 0xc0020580c0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:12:36.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 
16:12:36.563: INFO: rc: 1 +Jun 6 16:12:36.563: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc0022b3e00 exit status 1 true [0xc002cca5a0 0xc002cca5b8 0xc002cca5d0] [0xc002cca5a0 0xc002cca5b8 0xc002cca5d0] [0xc002cca5b0 0xc002cca5c8] [0x9c00a0 0x9c00a0] 0xc00285ecc0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:12:46.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:12:47.271: INFO: rc: 1 +Jun 6 16:12:47.271: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc0029d6570 exit status 1 true [0xc0023c6fe0 0xc0023c7040 0xc0023c7098] [0xc0023c6fe0 0xc0023c7040 0xc0023c7098] [0xc0023c7030 0xc0023c7060] [0x9c00a0 0x9c00a0] 0xc002059440 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:12:57.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:12:57.402: INFO: rc: 1 +Jun 6 16:12:57.402: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] 
Error from server (NotFound): pods "ss-1" not found + [] 0xc002b4c1e0 exit status 1 true [0xc002cca5d8 0xc002cca5f0 0xc002cca608] [0xc002cca5d8 0xc002cca5f0 0xc002cca608] [0xc002cca5e8 0xc002cca600] [0x9c00a0 0x9c00a0] 0xc00285f4a0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:13:07.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:13:07.540: INFO: rc: 1 +Jun 6 16:13:07.540: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc002b4c540 exit status 1 true [0xc002cca610 0xc002cca628 0xc002cca640] [0xc002cca610 0xc002cca628 0xc002cca640] [0xc002cca620 0xc002cca638] [0x9c00a0 0x9c00a0] 0xc00285ff80 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:13:17.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:13:17.662: INFO: rc: 1 +Jun 6 16:13:17.662: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc002b4c8a0 exit status 1 true [0xc002cca648 0xc002cca660 0xc002cca680] [0xc002cca648 0xc002cca660 0xc002cca680] [0xc002cca658 0xc002cca670] [0x9c00a0 0x9c00a0] 0xc000f36720 }: +Command stdout: + +stderr: +Error from server (NotFound): 
pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:13:27.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:13:27.792: INFO: rc: 1 +Jun 6 16:13:27.793: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc002b4cc00 exit status 1 true [0xc002cca698 0xc002cca6b0 0xc002cca6c8] [0xc002cca698 0xc002cca6b0 0xc002cca6c8] [0xc002cca6a8 0xc002cca6c0] [0x9c00a0 0x9c00a0] 0xc000f36c00 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:13:37.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:13:37.929: INFO: rc: 1 +Jun 6 16:13:37.929: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc002b4cf90 exit status 1 true [0xc002cca6d0 0xc002cca6e8 0xc002cca700] [0xc002cca6d0 0xc002cca6e8 0xc002cca700] [0xc002cca6e0 0xc002cca6f8] [0x9c00a0 0x9c00a0] 0xc000f36f60 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:13:47.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:13:48.058: INFO: rc: 1 +Jun 6 
16:13:48.058: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc0022b37d0 exit status 1 true [0xc0023c6028 0xc0023c60a0 0xc0023c6160] [0xc0023c6028 0xc0023c60a0 0xc0023c6160] [0xc0023c6090 0xc0023c6100] [0x9c00a0 0x9c00a0] 0xc002058fc0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:13:58.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:13:58.194: INFO: rc: 1 +Jun 6 16:13:58.194: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc001e6f0e0 exit status 1 true [0xc002cca010 0xc002cca050 0xc002cca068] [0xc002cca010 0xc002cca050 0xc002cca068] [0xc002cca038 0xc002cca060] [0x9c00a0 0x9c00a0] 0xc00285e300 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:14:08.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:14:08.322: INFO: rc: 1 +Jun 6 16:14:08.322: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods 
"ss-1" not found + [] 0xc0022b3c50 exit status 1 true [0xc0023c6188 0xc0023c6270 0xc0023c62e8] [0xc0023c6188 0xc0023c6270 0xc0023c62e8] [0xc0023c6228 0xc0023c62b0] [0x9c00a0 0x9c00a0] 0xc002bbc300 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:14:18.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:14:18.445: INFO: rc: 1 +Jun 6 16:14:18.445: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc001e6f440 exit status 1 true [0xc002cca070 0xc002cca088 0xc002cca0c8] [0xc002cca070 0xc002cca088 0xc002cca0c8] [0xc002cca080 0xc002cca0a8] [0x9c00a0 0x9c00a0] 0xc00285e8a0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:14:28.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:14:28.572: INFO: rc: 1 +Jun 6 16:14:28.572: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc001e6f7d0 exit status 1 true [0xc002cca0e0 0xc002cca0f8 0xc002cca130] [0xc002cca0e0 0xc002cca0f8 0xc002cca130] [0xc002cca0f0 0xc002cca118] [0x9c00a0 0x9c00a0] 0xc00285ed80 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit 
status 1 + +Jun 6 16:14:38.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:14:38.710: INFO: rc: 1 +Jun 6 16:14:38.710: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc001e6fb90 exit status 1 true [0xc002cca138 0xc002cca150 0xc002cca190] [0xc002cca138 0xc002cca150 0xc002cca190] [0xc002cca148 0xc002cca170] [0x9c00a0 0x9c00a0] 0xc00285f620 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:14:48.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:14:48.832: INFO: rc: 1 +Jun 6 16:14:48.832: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc001e6ff20 exit status 1 true [0xc002cca1a8 0xc002cca1d8 0xc002cca218] [0xc002cca1a8 0xc002cca1d8 0xc002cca218] [0xc002cca1d0 0xc002cca1f8] [0x9c00a0 0x9c00a0] 0xc003034240 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:14:58.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:14:58.939: INFO: rc: 1 +Jun 6 16:14:58.939: INFO: Waiting 10s to 
retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc0016ba090 exit status 1 true [0xc0023c62f0 0xc0023c6380 0xc0023c63a8] [0xc0023c62f0 0xc0023c6380 0xc0023c63a8] [0xc0023c6338 0xc0023c63a0] [0x9c00a0 0x9c00a0] 0xc002bbca80 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:15:08.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:15:09.056: INFO: rc: 1 +Jun 6 16:15:09.056: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00197e300 exit status 1 true [0xc002cca230 0xc002cca248 0xc002cca288] [0xc002cca230 0xc002cca248 0xc002cca288] [0xc002cca240 0xc002cca280] [0x9c00a0 0x9c00a0] 0xc003034b40 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:15:19.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:15:19.192: INFO: rc: 1 +Jun 6 16:15:19.192: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc0016ba420 
exit status 1 true [0xc0023c63b0 0xc0023c63d0 0xc0023c6460] [0xc0023c63b0 0xc0023c63d0 0xc0023c6460] [0xc0023c63c0 0xc0023c6418] [0x9c00a0 0x9c00a0] 0xc002bbd1a0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:15:29.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:15:29.330: INFO: rc: 1 +Jun 6 16:15:29.330: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc0016ba7b0 exit status 1 true [0xc0023c6480 0xc0023c64d8 0xc0023c6528] [0xc0023c6480 0xc0023c64d8 0xc0023c6528] [0xc0023c64a8 0xc0023c6518] [0x9c00a0 0x9c00a0] 0xc002bbdb60 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:15:39.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:15:39.448: INFO: rc: 1 +Jun 6 16:15:39.449: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc0016bab10 exit status 1 true [0xc0023c6538 0xc0023c65e8 0xc0023c6610] [0xc0023c6538 0xc0023c65e8 0xc0023c6610] [0xc0023c65d8 0xc0023c6608] [0x9c00a0 0x9c00a0] 0xc002c84120 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:15:49.449: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:15:49.588: INFO: rc: 1 +Jun 6 16:15:49.588: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc001e6f110 exit status 1 true [0xc002cca028 0xc002cca058 0xc002cca070] [0xc002cca028 0xc002cca058 0xc002cca070] [0xc002cca050 0xc002cca068] [0x9c00a0 0x9c00a0] 0xc002bbc480 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:15:59.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:15:59.734: INFO: rc: 1 +Jun 6 16:15:59.734: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc0022b3800 exit status 1 true [0xc0023c6008 0xc0023c6090 0xc0023c6100] [0xc0023c6008 0xc0023c6090 0xc0023c6100] [0xc0023c6060 0xc0023c60b0] [0x9c00a0 0x9c00a0] 0xc00285e300 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:16:09.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:16:09.854: INFO: rc: 1 +Jun 6 16:16:09.854: INFO: Waiting 10s to retry failed RunHostCmd: error 
running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc0022b3c20 exit status 1 true [0xc0023c6160 0xc0023c6228 0xc0023c62b0] [0xc0023c6160 0xc0023c6228 0xc0023c62b0] [0xc0023c61d0 0xc0023c6280] [0x9c00a0 0x9c00a0] 0xc00285e8a0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:16:19.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:16:19.984: INFO: rc: 1 +Jun 6 16:16:19.984: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc001e6f4d0 exit status 1 true [0xc002cca078 0xc002cca090 0xc002cca0e0] [0xc002cca078 0xc002cca090 0xc002cca0e0] [0xc002cca088 0xc002cca0c8] [0x9c00a0 0x9c00a0] 0xc002bbcc00 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:16:29.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:16:30.109: INFO: rc: 1 +Jun 6 16:16:30.109: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc001e6f8c0 exit status 1 true [0xc002cca0e8 
0xc002cca100 0xc002cca138] [0xc002cca0e8 0xc002cca100 0xc002cca138] [0xc002cca0f8 0xc002cca130] [0x9c00a0 0x9c00a0] 0xc002bbd3e0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:16:40.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:16:40.236: INFO: rc: 1 +Jun 6 16:16:40.236: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00197e090 exit status 1 true [0xc0023c62e8 0xc0023c6338 0xc0023c63a0] [0xc0023c62e8 0xc0023c6338 0xc0023c63a0] [0xc0023c62f8 0xc0023c6390] [0x9c00a0 0x9c00a0] 0xc00285ed80 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:16:50.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:16:50.361: INFO: rc: 1 +Jun 6 16:16:50.361: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc001e6fc50 exit status 1 true [0xc002cca140 0xc002cca158 0xc002cca1a8] [0xc002cca140 0xc002cca158 0xc002cca1a8] [0xc002cca150 0xc002cca190] [0x9c00a0 0x9c00a0] 0xc002bbdce0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:17:00.362: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:17:00.483: INFO: rc: 1 +Jun 6 16:17:00.483: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc00197e450 exit status 1 true [0xc0023c63a8 0xc0023c63c0 0xc0023c6418] [0xc0023c63a8 0xc0023c63c0 0xc0023c6418] [0xc0023c63b8 0xc0023c63e0] [0x9c00a0 0x9c00a0] 0xc00285f620 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:17:10.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:17:10.598: INFO: rc: 1 +Jun 6 16:17:10.598: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found + [] 0xc0016ba030 exit status 1 true [0xc002cca1b8 0xc002cca1e0 0xc002cca230] [0xc002cca1b8 0xc002cca1e0 0xc002cca230] [0xc002cca1d8 0xc002cca218] [0x9c00a0 0x9c00a0] 0xc002058660 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 + +Jun 6 16:17:20.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-9101 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:17:20.738: INFO: rc: 1 +Jun 6 16:17:20.738: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true 
on ss-1: +Jun 6 16:17:20.738: INFO: Scaling statefulset ss to 0 +Jun 6 16:17:20.765: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Jun 6 16:17:20.771: INFO: Deleting all statefulset in ns statefulset-9101 +Jun 6 16:17:20.777: INFO: Scaling statefulset ss to 0 +Jun 6 16:17:20.800: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 6 16:17:20.806: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:17:20.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-9101" for this suite. +Jun 6 16:17:26.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:17:27.127: INFO: namespace statefulset-9101 deletion completed in 6.284541059s + +• [SLOW TEST:364.290 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + Burst scaling should run to completion even with unhealthy pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + Should 
recreate evicted statefulset [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:17:27.129: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace statefulset-4364 +[It] Should recreate evicted statefulset [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Looking for a node to schedule stateful set and pod +STEP: Creating pod with conflicting port in namespace statefulset-4364 +STEP: Creating statefulset with conflicting port in namespace statefulset-4364 +STEP: Waiting until pod test-pod will start running in namespace statefulset-4364 +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4364 +Jun 6 16:17:31.321: INFO: Observed stateful pod in namespace: statefulset-4364, name: ss-0, uid: 9560c84b-8876-11e9-9995-4ad9032ea524, status phase: Pending. Waiting for statefulset controller to delete. 
+Jun 6 16:17:31.501: INFO: Observed stateful pod in namespace: statefulset-4364, name: ss-0, uid: 9560c84b-8876-11e9-9995-4ad9032ea524, status phase: Failed. Waiting for statefulset controller to delete. +Jun 6 16:17:31.513: INFO: Observed stateful pod in namespace: statefulset-4364, name: ss-0, uid: 9560c84b-8876-11e9-9995-4ad9032ea524, status phase: Failed. Waiting for statefulset controller to delete. +Jun 6 16:17:31.519: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4364 +STEP: Removing pod with conflicting port in namespace statefulset-4364 +STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4364 and will be in running state +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Jun 6 16:17:35.587: INFO: Deleting all statefulset in ns statefulset-4364 +Jun 6 16:17:35.595: INFO: Scaling statefulset ss to 0 +Jun 6 16:17:55.644: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 6 16:17:55.655: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:17:55.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-4364" for this suite. 
+Jun 6 16:18:01.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:18:01.979: INFO: namespace statefulset-4364 deletion completed in 6.278890279s + +• [SLOW TEST:34.850 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + Should recreate evicted statefulset [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:18:01.983: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Jun 6 16:18:02.138: INFO: Waiting up to 5m0s 
for pod "pod-a836e579-8876-11e9-b3bf-0e7bbe1a64f6" in namespace "emptydir-6874" to be "success or failure" +Jun 6 16:18:02.144: INFO: Pod "pod-a836e579-8876-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.197777ms +Jun 6 16:18:04.153: INFO: Pod "pod-a836e579-8876-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014532811s +Jun 6 16:18:06.161: INFO: Pod "pod-a836e579-8876-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023187547s +STEP: Saw pod success +Jun 6 16:18:06.161: INFO: Pod "pod-a836e579-8876-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:18:06.169: INFO: Trying to get logs from node cncf-2 pod pod-a836e579-8876-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 16:18:06.215: INFO: Waiting for pod pod-a836e579-8876-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:18:06.221: INFO: Pod pod-a836e579-8876-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:18:06.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6874" for this suite. 
+Jun 6 16:18:12.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:18:13.218: INFO: namespace emptydir-6874 deletion completed in 6.990598448s + +• [SLOW TEST:11.236 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:18:13.220: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name projected-configmap-test-volume-map-aee4b3dd-8876-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume configMaps +Jun 6 16:18:13.356: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-aee66638-8876-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-8567" to be "success or failure" +Jun 6 16:18:13.364: INFO: Pod "pod-projected-configmaps-aee66638-8876-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.324886ms +Jun 6 16:18:15.372: INFO: Pod "pod-projected-configmaps-aee66638-8876-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016247415s +Jun 6 16:18:17.380: INFO: Pod "pod-projected-configmaps-aee66638-8876-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024369109s +STEP: Saw pod success +Jun 6 16:18:17.380: INFO: Pod "pod-projected-configmaps-aee66638-8876-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:18:17.387: INFO: Trying to get logs from node cncf-2 pod pod-projected-configmaps-aee66638-8876-11e9-b3bf-0e7bbe1a64f6 container projected-configmap-volume-test: +STEP: delete the pod +Jun 6 16:18:17.434: INFO: Waiting for pod pod-projected-configmaps-aee66638-8876-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:18:17.440: INFO: Pod pod-projected-configmaps-aee66638-8876-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:18:17.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8567" for this suite. 
+Jun 6 16:18:23.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:18:23.707: INFO: namespace projected-8567 deletion completed in 6.259099845s + +• [SLOW TEST:10.487 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:18:23.708: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating projection with secret that has name projected-secret-test-b52270ce-8876-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume secrets +Jun 6 16:18:23.829: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b5240c7f-8876-11e9-b3bf-0e7bbe1a64f6" in 
namespace "projected-5767" to be "success or failure" +Jun 6 16:18:23.838: INFO: Pod "pod-projected-secrets-b5240c7f-8876-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.137071ms +Jun 6 16:18:25.845: INFO: Pod "pod-projected-secrets-b5240c7f-8876-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015648011s +Jun 6 16:18:27.854: INFO: Pod "pod-projected-secrets-b5240c7f-8876-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024623985s +STEP: Saw pod success +Jun 6 16:18:27.854: INFO: Pod "pod-projected-secrets-b5240c7f-8876-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:18:27.861: INFO: Trying to get logs from node cncf-2 pod pod-projected-secrets-b5240c7f-8876-11e9-b3bf-0e7bbe1a64f6 container projected-secret-volume-test: +STEP: delete the pod +Jun 6 16:18:27.903: INFO: Waiting for pod pod-projected-secrets-b5240c7f-8876-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:18:27.909: INFO: Pod pod-projected-secrets-b5240c7f-8876-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:18:27.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5767" for this suite. 
+Jun 6 16:18:33.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:18:34.188: INFO: namespace projected-5767 deletion completed in 6.26424033s + +• [SLOW TEST:10.481 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should be submitted and removed [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:18:34.189: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should be submitted and removed [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +STEP: setting up watch +STEP: submitting the pod to kubernetes +Jun 6 16:18:34.335: INFO: observed the pod list +STEP: verifying the pod is in kubernetes +STEP: verifying pod creation 
was observed +STEP: deleting the pod gracefully +STEP: verifying the kubelet observed the termination notice +Jun 6 16:18:43.487: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed +STEP: verifying pod deletion was observed +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:18:43.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-2491" for this suite. +Jun 6 16:18:49.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:18:49.789: INFO: namespace pods-2491 deletion completed in 6.286177263s + +• [SLOW TEST:15.600 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be submitted and removed [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:18:49.789: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace 
+[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating a watch on configmaps with label A +STEP: creating a watch on configmaps with label B +STEP: creating a watch on configmaps with label A or B +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification +Jun 6 16:18:49.939: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-12,SelfLink:/api/v1/namespaces/watch-12/configmaps/e2e-watch-test-configmap-a,UID:c4b57af7-8876-11e9-9995-4ad9032ea524,ResourceVersion:3959340415,Generation:0,CreationTimestamp:2019-06-06 16:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 6 16:18:49.939: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-12,SelfLink:/api/v1/namespaces/watch-12/configmaps/e2e-watch-test-configmap-a,UID:c4b57af7-8876-11e9-9995-4ad9032ea524,ResourceVersion:3959340415,Generation:0,CreationTimestamp:2019-06-06 16:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +STEP: modifying configmap A and ensuring the correct watchers observe the notification +Jun 6 16:18:59.970: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-12,SelfLink:/api/v1/namespaces/watch-12/configmaps/e2e-watch-test-configmap-a,UID:c4b57af7-8876-11e9-9995-4ad9032ea524,ResourceVersion:3959342250,Generation:0,CreationTimestamp:2019-06-06 16:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +Jun 6 16:18:59.970: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-12,SelfLink:/api/v1/namespaces/watch-12/configmaps/e2e-watch-test-configmap-a,UID:c4b57af7-8876-11e9-9995-4ad9032ea524,ResourceVersion:3959342250,Generation:0,CreationTimestamp:2019-06-06 16:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification +Jun 6 16:19:09.991: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-12,SelfLink:/api/v1/namespaces/watch-12/configmaps/e2e-watch-test-configmap-a,UID:c4b57af7-8876-11e9-9995-4ad9032ea524,ResourceVersion:3959344043,Generation:0,CreationTimestamp:2019-06-06 16:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Jun 6 16:19:09.992: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-12,SelfLink:/api/v1/namespaces/watch-12/configmaps/e2e-watch-test-configmap-a,UID:c4b57af7-8876-11e9-9995-4ad9032ea524,ResourceVersion:3959344043,Generation:0,CreationTimestamp:2019-06-06 16:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +STEP: deleting configmap A and ensuring the correct watchers observe the notification +Jun 6 16:19:20.005: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-12,SelfLink:/api/v1/namespaces/watch-12/configmaps/e2e-watch-test-configmap-a,UID:c4b57af7-8876-11e9-9995-4ad9032ea524,ResourceVersion:3959345843,Generation:0,CreationTimestamp:2019-06-06 16:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Jun 6 16:19:20.006: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-12,SelfLink:/api/v1/namespaces/watch-12/configmaps/e2e-watch-test-configmap-a,UID:c4b57af7-8876-11e9-9995-4ad9032ea524,ResourceVersion:3959345843,Generation:0,CreationTimestamp:2019-06-06 16:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification +Jun 6 16:19:31.238: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-12,SelfLink:/api/v1/namespaces/watch-12/configmaps/e2e-watch-test-configmap-b,UID:dc998ae2-8876-11e9-9995-4ad9032ea524,ResourceVersion:3959347641,Generation:0,CreationTimestamp:2019-06-06 16:19:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 6 16:19:31.239: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-12,SelfLink:/api/v1/namespaces/watch-12/configmaps/e2e-watch-test-configmap-b,UID:dc998ae2-8876-11e9-9995-4ad9032ea524,ResourceVersion:3959347641,Generation:0,CreationTimestamp:2019-06-06 16:19:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +STEP: deleting configmap B and ensuring the correct watchers observe the notification +Jun 6 16:19:41.254: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-12,SelfLink:/api/v1/namespaces/watch-12/configmaps/e2e-watch-test-configmap-b,UID:dc998ae2-8876-11e9-9995-4ad9032ea524,ResourceVersion:3959349479,Generation:0,CreationTimestamp:2019-06-06 16:19:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 6 16:19:41.254: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-12,SelfLink:/api/v1/namespaces/watch-12/configmaps/e2e-watch-test-configmap-b,UID:dc998ae2-8876-11e9-9995-4ad9032ea524,ResourceVersion:3959349479,Generation:0,CreationTimestamp:2019-06-06 16:19:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:19:51.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-12" for this suite. 
+Jun 6 16:19:57.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:19:59.264: INFO: namespace watch-12 deletion completed in 7.999151493s + +• [SLOW TEST:69.475 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:19:59.264: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Jun 6 16:19:59.395: INFO: Waiting up to 5m0s for pod "pod-ee1a8ca7-8876-11e9-b3bf-0e7bbe1a64f6" in namespace "emptydir-8918" to be "success or failure" +Jun 6 16:19:59.403: INFO: Pod "pod-ee1a8ca7-8876-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.839025ms +Jun 6 16:20:01.411: INFO: Pod "pod-ee1a8ca7-8876-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01602967s +Jun 6 16:20:03.419: INFO: Pod "pod-ee1a8ca7-8876-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024378196s +STEP: Saw pod success +Jun 6 16:20:03.419: INFO: Pod "pod-ee1a8ca7-8876-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:20:03.427: INFO: Trying to get logs from node cncf-2 pod pod-ee1a8ca7-8876-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 16:20:03.470: INFO: Waiting for pod pod-ee1a8ca7-8876-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:20:03.475: INFO: Pod pod-ee1a8ca7-8876-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:20:03.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8918" for this suite. 
+Jun 6 16:20:09.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:20:09.720: INFO: namespace emptydir-8918 deletion completed in 6.237033825s + +• [SLOW TEST:10.457 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:20:09.721: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating projection with configMap that has name projected-configmap-test-upd-f4587954-8876-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating the pod +STEP: Updating configmap projected-configmap-test-upd-f4587954-8876-11e9-b3bf-0e7bbe1a64f6 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:20:15.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8977" for this suite. +Jun 6 16:20:38.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:20:38.285: INFO: namespace projected-8977 deletion completed in 22.307814244s + +• [SLOW TEST:28.565 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[k8s.io] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] KubeletManagedEtcHosts + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:20:38.285: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 
+STEP: Setting up the test +STEP: Creating hostNetwork=false pod +STEP: Creating hostNetwork=true pod +STEP: Running the test +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false +Jun 6 16:20:48.474: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7550 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 16:20:48.474: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 16:20:48.690: INFO: Exec stderr: "" +Jun 6 16:20:48.690: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7550 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 16:20:48.690: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 16:20:48.898: INFO: Exec stderr: "" +Jun 6 16:20:48.898: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7550 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 16:20:48.898: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 16:20:49.103: INFO: Exec stderr: "" +Jun 6 16:20:49.104: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7550 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 16:20:49.104: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 16:20:49.307: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount +Jun 6 16:20:49.307: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7550 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 16:20:49.307: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 16:20:49.519: INFO: Exec stderr: "" +Jun 6 
16:20:49.519: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7550 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 16:20:49.519: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 16:20:49.721: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true +Jun 6 16:20:49.721: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7550 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 16:20:49.721: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 16:20:50.002: INFO: Exec stderr: "" +Jun 6 16:20:50.002: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7550 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 16:20:50.002: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 16:20:50.231: INFO: Exec stderr: "" +Jun 6 16:20:50.231: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7550 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 16:20:50.231: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 16:20:50.445: INFO: Exec stderr: "" +Jun 6 16:20:50.445: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7550 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 16:20:50.445: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 16:20:50.698: INFO: Exec stderr: "" +[AfterEach] [k8s.io] KubeletManagedEtcHosts + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:20:50.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-kubelet-etc-hosts-7550" for this suite. +Jun 6 16:21:36.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:21:36.985: INFO: namespace e2e-kubelet-etc-hosts-7550 deletion completed in 46.274126088s + +• [SLOW TEST:58.700 seconds] +[k8s.io] KubeletManagedEtcHosts +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:21:36.985: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir volume type on node default medium +Jun 6 16:21:37.110: INFO: Waiting up to 5m0s for pod "pod-28599c9a-8877-11e9-b3bf-0e7bbe1a64f6" in namespace "emptydir-2373" to be "success or failure" +Jun 6 16:21:37.117: INFO: Pod "pod-28599c9a-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.673062ms +Jun 6 16:21:39.125: INFO: Pod "pod-28599c9a-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014766035s +Jun 6 16:21:41.134: INFO: Pod "pod-28599c9a-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024006424s +STEP: Saw pod success +Jun 6 16:21:41.134: INFO: Pod "pod-28599c9a-8877-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:21:41.142: INFO: Trying to get logs from node cncf-2 pod pod-28599c9a-8877-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 16:21:41.182: INFO: Waiting for pod pod-28599c9a-8877-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:21:41.188: INFO: Pod pod-28599c9a-8877-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:21:41.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2373" for this suite. 
+Jun 6 16:21:47.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:21:47.458: INFO: namespace emptydir-2373 deletion completed in 6.261298143s + +• [SLOW TEST:10.473 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:21:47.459: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] deployment should support proportional scaling [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 6 16:21:47.576: INFO: Creating deployment "nginx-deployment" +Jun 6 16:21:47.590: INFO: Waiting for observed generation 1 +Jun 6 16:21:49.888: INFO: Waiting for all 
required pods to come up +Jun 6 16:21:49.906: INFO: Pod name nginx: Found 10 pods out of 10 +STEP: ensuring each pod is running +Jun 6 16:21:59.942: INFO: Waiting for deployment "nginx-deployment" to complete +Jun 6 16:21:59.956: INFO: Updating deployment "nginx-deployment" with a non-existent image +Jun 6 16:21:59.978: INFO: Updating deployment nginx-deployment +Jun 6 16:21:59.978: INFO: Waiting for observed generation 2 +Jun 6 16:22:02.012: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Jun 6 16:22:02.025: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Jun 6 16:22:02.034: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas +Jun 6 16:22:02.052: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Jun 6 16:22:02.052: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Jun 6 16:22:02.057: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas +Jun 6 16:22:02.066: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas +Jun 6 16:22:02.066: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 +Jun 6 16:22:02.085: INFO: Updating deployment nginx-deployment +Jun 6 16:22:02.085: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas +Jun 6 16:22:02.097: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Jun 6 16:22:02.103: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Jun 6 16:22:04.124: INFO: Deployment "nginx-deployment": 
+&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-301,SelfLink:/apis/apps/v1/namespaces/deployment-301/deployments/nginx-deployment,UID:2e98ef97-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375164,Generation:3,CreationTimestamp:2019-06-06 16:21:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2019-06-06 16:22:02 +0000 UTC 2019-06-06 16:22:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-06-06 16:22:02 +0000 UTC 2019-06-06 16:21:47 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5f9595f595" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} + +Jun 6 16:22:04.135: INFO: New ReplicaSet "nginx-deployment-5f9595f595" of Deployment "nginx-deployment": +&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595,GenerateName:,Namespace:deployment-301,SelfLink:/apis/apps/v1/namespaces/deployment-301/replicasets/nginx-deployment-5f9595f595,UID:35fd3cd5-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375161,Generation:3,CreationTimestamp:2019-06-06 16:21:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5f9595f595,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2e98ef97-8877-11e9-9995-4ad9032ea524 0xc00083b717 0xc00083b718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 6 16:22:04.135: INFO: All old ReplicaSets of Deployment "nginx-deployment": +Jun 6 16:22:04.135: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8,GenerateName:,Namespace:deployment-301,SelfLink:/apis/apps/v1/namespaces/deployment-301/replicasets/nginx-deployment-6f478d8d8,UID:2e9b3b09-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375155,Generation:3,CreationTimestamp:2019-06-06 16:21:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2e98ef97-8877-11e9-9995-4ad9032ea524 0xc00083b7e7 0xc00083b7e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
6f478d8d8,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} +Jun 6 16:22:04.148: INFO: Pod "nginx-deployment-5f9595f595-2jp65" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-2jp65,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-5f9595f595-2jp65,UID:3746daab-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375317,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.0.146/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 35fd3cd5-8877-11e9-9995-4ad9032ea524 0xc002b1e137 0xc002b1e138}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1e1b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b1e1d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 
16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.149: INFO: Pod "nginx-deployment-5f9595f595-54l6h" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-54l6h,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-5f9595f595-54l6h,UID:36012d71-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959374809,Generation:0,CreationTimestamp:2019-06-06 16:22:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.1.72/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 35fd3cd5-8877-11e9-9995-4ad9032ea524 0xc002b1e2f7 0xc002b1e2f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1e370} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b1e390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:,StartTime:2019-06-06 16:22:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.149: INFO: Pod "nginx-deployment-5f9595f595-6qfkj" is not available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-6qfkj,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-5f9595f595-6qfkj,UID:374c8135-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375261,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 35fd3cd5-8877-11e9-9995-4ad9032ea524 0xc002b1e467 0xc002b1e468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1e4f0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b1e510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.149: INFO: Pod "nginx-deployment-5f9595f595-97crj" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-97crj,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-5f9595f595-97crj,UID:3604e3c3-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959374805,Generation:0,CreationTimestamp:2019-06-06 16:22:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.1.71/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 35fd3cd5-8877-11e9-9995-4ad9032ea524 0xc002b1e5f7 0xc002b1e5f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1e670} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b1e690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:,StartTime:2019-06-06 16:22:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.149: INFO: Pod "nginx-deployment-5f9595f595-d4j8r" is not available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-d4j8r,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-5f9595f595-d4j8r,UID:3749b7e6-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375314,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 35fd3cd5-8877-11e9-9995-4ad9032ea524 0xc002b1e767 0xc002b1e768}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1e7e0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b1e800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.149: INFO: Pod "nginx-deployment-5f9595f595-fqzmt" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-fqzmt,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-5f9595f595-fqzmt,UID:3749d7e4-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375147,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 35fd3cd5-8877-11e9-9995-4ad9032ea524 0xc002b1e8d7 0xc002b1e8d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1e950} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b1e970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.149: INFO: Pod "nginx-deployment-5f9595f595-g24k4" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-g24k4,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-5f9595f595-g24k4,UID:3604e4b8-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959374808,Generation:0,CreationTimestamp:2019-06-06 16:22:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.0.142/32,},OwnerReferences:[{apps/v1 
ReplicaSet nginx-deployment-5f9595f595 35fd3cd5-8877-11e9-9995-4ad9032ea524 0xc002b1ea07 0xc002b1ea08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1ea80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b1eaa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:,StartTime:2019-06-06 
16:22:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.149: INFO: Pod "nginx-deployment-5f9595f595-mp6f2" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-mp6f2,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-5f9595f595-mp6f2,UID:3747736a-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375229,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 35fd3cd5-8877-11e9-9995-4ad9032ea524 0xc002b1eb77 0xc002b1eb78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1ebf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b1ec10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.150: INFO: Pod "nginx-deployment-5f9595f595-mssth" is not available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-mssth,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-5f9595f595-mssth,UID:36194f9f-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959374818,Generation:0,CreationTimestamp:2019-06-06 16:22:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.0.144/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 35fd3cd5-8877-11e9-9995-4ad9032ea524 0xc002b1ecf7 0xc002b1ecf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1ed70} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b1ed90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:,StartTime:2019-06-06 16:22:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.150: INFO: Pod "nginx-deployment-5f9595f595-nxfjl" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-nxfjl,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-5f9595f595-nxfjl,UID:3749dc54-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375151,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 35fd3cd5-8877-11e9-9995-4ad9032ea524 0xc002b1ee67 0xc002b1ee68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1eef0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b1ef20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.150: INFO: Pod "nginx-deployment-5f9595f595-nzvpx" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-nzvpx,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-5f9595f595-nzvpx,UID:3749ac35-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375315,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 
35fd3cd5-8877-11e9-9995-4ad9032ea524 0xc002b1efa7 0xc002b1efa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1f030} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b1f050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:,StartTime:2019-06-06 16:22:02 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.150: INFO: Pod "nginx-deployment-5f9595f595-t7pkt" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-t7pkt,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-5f9595f595-t7pkt,UID:3744aa37-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375203,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 35fd3cd5-8877-11e9-9995-4ad9032ea524 0xc002b1f127 0xc002b1f128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1f1a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b1f1c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.150: INFO: Pod "nginx-deployment-5f9595f595-vs9q6" is not available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-vs9q6,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-5f9595f595-vs9q6,UID:361bc090-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959374815,Generation:0,CreationTimestamp:2019-06-06 16:22:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.0.143/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 35fd3cd5-8877-11e9-9995-4ad9032ea524 0xc002b1f2e7 0xc002b1f2e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1f360} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b1f380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:00 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:,StartTime:2019-06-06 16:22:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.151: INFO: Pod "nginx-deployment-6f478d8d8-4chjg" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-4chjg,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-4chjg,UID:3742a8be-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375310,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.0.145/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc002b1f467 0xc002b1f468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1f4e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b1f500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.151: INFO: Pod "nginx-deployment-6f478d8d8-5dwp6" is available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-5dwp6,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-5dwp6,UID:2ea58752-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959374003,Generation:0,CreationTimestamp:2019-06-06 16:21:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.1.70/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc002b1f5e7 0xc002b1f5e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1f650} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b1f670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:47 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:10.2.1.70,StartTime:2019-06-06 16:21:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-06 16:21:55 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f43381365e35a852e530db32d68ce1d4c197338f12cf04d347c0d559a22f28e5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.151: INFO: Pod "nginx-deployment-6f478d8d8-6rzv8" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-6rzv8,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-6rzv8,UID:3747a83d-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375236,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc002b1f747 0xc002b1f748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1f7b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b1f7d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.151: INFO: Pod "nginx-deployment-6f478d8d8-7z7hs" is available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-7z7hs,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-7z7hs,UID:2ea41ca0-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959373275,Generation:0,CreationTimestamp:2019-06-06 16:21:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.0.136/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc002b1f8a7 0xc002b1f8a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1f910} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b1f930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:47 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:10.2.0.136,StartTime:2019-06-06 16:21:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-06 16:21:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0594f8921bc77f4a25a4885f1b58733ccd1150d776a0b003907cf5802d2598b6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.151: INFO: Pod "nginx-deployment-6f478d8d8-9sw7v" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-9sw7v,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-9sw7v,UID:2e9d93c4-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959373110,Generation:0,CreationTimestamp:2019-06-06 16:21:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.1.67/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc002b1fa17 0xc002b1fa18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1fa80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b1faa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:47 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:10.2.1.67,StartTime:2019-06-06 16:21:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-06 16:21:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker://3a4232a954f7681cb8e4b46452d6e0ad7838899d56fcb7ac38a0477890701bd8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.151: INFO: Pod "nginx-deployment-6f478d8d8-c5x5f" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-c5x5f,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-c5x5f,UID:37478267-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375214,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc002b1fb77 0xc002b1fb78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1fbe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b1fc00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.151: INFO: Pod "nginx-deployment-6f478d8d8-h57sx" is not available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-h57sx,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-h57sx,UID:3744ddd4-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375213,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc002b1fcc7 0xc002b1fcc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1fd30} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b1fd50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.152: INFO: Pod "nginx-deployment-6f478d8d8-jk7nl" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-jk7nl,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-jk7nl,UID:374992d6-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375307,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc002b1fe17 0xc002b1fe18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1fe80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b1fea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.152: INFO: Pod "nginx-deployment-6f478d8d8-l2jjq" is not available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-l2jjq,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-l2jjq,UID:3749314b-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375293,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc002b1ff67 0xc002b1ff68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b1ffd0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b1fff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.152: INFO: Pod "nginx-deployment-6f478d8d8-l9fxl" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-l9fxl,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-l9fxl,UID:37475394-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375222,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc0026aa0b7 0xc0026aa0b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aa120} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aa140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.152: INFO: Pod "nginx-deployment-6f478d8d8-mcgnf" is not available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-mcgnf,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-mcgnf,UID:37495563-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375247,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc0026aa207 0xc0026aa208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aa270} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0026aa290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.152: INFO: Pod "nginx-deployment-6f478d8d8-n9lln" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-n9lln,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-n9lln,UID:3747a133-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375257,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc0026aa357 0xc0026aa358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aa3c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aa3e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.152: INFO: Pod "nginx-deployment-6f478d8d8-nbjml" is available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-nbjml,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-nbjml,UID:2ea0baaa-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959373544,Generation:0,CreationTimestamp:2019-06-06 16:21:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.1.68/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc0026aa4b7 0xc0026aa4b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aa530} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0026aa550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:47 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:10.2.1.68,StartTime:2019-06-06 16:21:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-06 16:21:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f92a6da570a2711a762c8c41d43ba22ec24831268a18eac75fab9ab9110887cd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.152: INFO: Pod "nginx-deployment-6f478d8d8-rct4j" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-rct4j,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-rct4j,UID:2ea0b49e-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959373653,Generation:0,CreationTimestamp:2019-06-06 16:21:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.0.139/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc0026aa637 0xc0026aa638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aa6a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aa6c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:47 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:10.2.0.139,StartTime:2019-06-06 16:21:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-06 16:21:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker://17aa4328f8a46851bb74f23e1d95c2a7f23de3946fde214eae001fa1da46bb98}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.152: INFO: Pod "nginx-deployment-6f478d8d8-tjlwv" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-tjlwv,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-tjlwv,UID:3749abad-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375304,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc0026aa797 0xc0026aa798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aa800} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aa820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.152: INFO: Pod "nginx-deployment-6f478d8d8-vqppv" is not available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-vqppv,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-vqppv,UID:37496f77-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375290,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc0026aa8f7 0xc0026aa8f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aa960} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0026aa980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.152: INFO: Pod "nginx-deployment-6f478d8d8-wf697" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-wf697,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-wf697,UID:2e9f1117-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959373240,Generation:0,CreationTimestamp:2019-06-06 16:21:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.0.140/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc0026aaa57 0xc0026aaa58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aaac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aaae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:47 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:10.2.0.140,StartTime:2019-06-06 16:21:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-06 16:21:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c9d8009c6de8afb59de3482fbacfba22e152ce38992b1de4bd815e28282323be}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.153: INFO: Pod 
"nginx-deployment-6f478d8d8-wqx9b" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-wqx9b,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-wqx9b,UID:2ea0b2e2-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959374027,Generation:0,CreationTimestamp:2019-06-06 16:21:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.0.138/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc0026aabc7 0xc0026aabc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aac30} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0026aac50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:47 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:10.2.0.138,StartTime:2019-06-06 16:21:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-06 16:21:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d6b2925e5aba5cc82d16bcbde2b7703825e54de46e850a26b3f2f764df4a85dd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.153: INFO: Pod "nginx-deployment-6f478d8d8-zmnfp" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-zmnfp,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-zmnfp,UID:2ea0da78-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959373647,Generation:0,CreationTimestamp:2019-06-06 16:21:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.1.69/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc0026aad37 0xc0026aad38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aada0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aadc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:21:47 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:10.2.1.69,StartTime:2019-06-06 16:21:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-06 16:21:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker://69c7fb19c464b0aea16235e25fcc20e11085ab8341178593ac62c23e9406f573}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 6 16:22:04.153: INFO: Pod "nginx-deployment-6f478d8d8-zspgb" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-zspgb,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-301,SelfLink:/api/v1/namespaces/deployment-301/pods/nginx-deployment-6f478d8d8-zspgb,UID:374474fa-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959375319,Generation:0,CreationTimestamp:2019-06-06 16:22:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.0.147/32,},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 2e9b3b09-8877-11e9-9995-4ad9032ea524 0xc0026aaea7 0xc0026aaea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jjrsj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjrsj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jjrsj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aaf10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aaf30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:22:02 +0000 UTC }],Message:,Reason:,HostIP:51.68.41.114,PodIP:,StartTime:2019-06-06 16:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:22:04.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-301" for this suite. 
+Jun 6 16:22:12.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:22:12.522: INFO: namespace deployment-301 deletion completed in 8.346367463s + +• [SLOW TEST:25.063 seconds] +[sig-apps] Deployment +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + deployment should support proportional scaling [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected combined + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:22:12.523: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-projected-all-test-volume-3d889d28-8877-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating secret with name secret-projected-all-test-volume-3d889cff-8877-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test Check all projections for projected volume plugin 
+Jun 6 16:22:12.674: INFO: Waiting up to 5m0s for pod "projected-volume-3d889c95-8877-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-5846" to be "success or failure" +Jun 6 16:22:12.683: INFO: Pod "projected-volume-3d889c95-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.89974ms +Jun 6 16:22:14.695: INFO: Pod "projected-volume-3d889c95-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020729608s +Jun 6 16:22:16.702: INFO: Pod "projected-volume-3d889c95-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028434043s +Jun 6 16:22:18.784: INFO: Pod "projected-volume-3d889c95-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110332008s +Jun 6 16:22:20.837: INFO: Pod "projected-volume-3d889c95-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.162744826s +STEP: Saw pod success +Jun 6 16:22:20.837: INFO: Pod "projected-volume-3d889c95-8877-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:22:20.851: INFO: Trying to get logs from node cncf-2 pod projected-volume-3d889c95-8877-11e9-b3bf-0e7bbe1a64f6 container projected-all-volume-test: +STEP: delete the pod +Jun 6 16:22:20.949: INFO: Waiting for pod projected-volume-3d889c95-8877-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:22:20.990: INFO: Pod projected-volume-3d889c95-8877-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected combined + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:22:20.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5846" for this suite. 
+Jun 6 16:22:27.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:22:27.295: INFO: namespace projected-5846 deletion completed in 6.286807822s + +• [SLOW TEST:14.773 seconds] +[sig-storage] Projected combined +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:22:27.297: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not conflict [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Cleaning up the secret +STEP: Cleaning up the configmap +STEP: Cleaning up the pod +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:22:31.833: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready +STEP: Destroying namespace "emptydir-wrapper-7278" for this suite. +Jun 6 16:22:37.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:22:38.106: INFO: namespace emptydir-wrapper-7278 deletion completed in 6.257617356s + +• [SLOW TEST:10.810 seconds] +[sig-storage] EmptyDir wrapper volumes +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + should not conflict [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:22:38.107: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 6 16:22:38.218: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4cc4cf1b-8877-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-5061" to be "success or failure" +Jun 6 16:22:38.225: INFO: Pod "downwardapi-volume-4cc4cf1b-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.589288ms +Jun 6 16:22:40.234: INFO: Pod "downwardapi-volume-4cc4cf1b-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016562946s +Jun 6 16:22:42.244: INFO: Pod "downwardapi-volume-4cc4cf1b-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026431175s +STEP: Saw pod success +Jun 6 16:22:42.244: INFO: Pod "downwardapi-volume-4cc4cf1b-8877-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:22:42.252: INFO: Trying to get logs from node cncf-2 pod downwardapi-volume-4cc4cf1b-8877-11e9-b3bf-0e7bbe1a64f6 container client-container: +STEP: delete the pod +Jun 6 16:22:42.285: INFO: Waiting for pod downwardapi-volume-4cc4cf1b-8877-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:22:42.291: INFO: Pod downwardapi-volume-4cc4cf1b-8877-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:22:42.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5061" for this suite. 
+Jun 6 16:22:48.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:22:48.580: INFO: namespace projected-5061 deletion completed in 6.281374443s + +• [SLOW TEST:10.473 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:22:48.581: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating a watch on configmaps with a certain label +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: changing the label value of the configmap +STEP: Expecting to observe a delete notification 
for the watched object +Jun 6 16:22:48.755: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8794,SelfLink:/api/v1/namespaces/watch-8794/configmaps/e2e-watch-test-label-changed,UID:53085ea2-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959383665,Generation:0,CreationTimestamp:2019-06-06 16:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 6 16:22:48.755: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8794,SelfLink:/api/v1/namespaces/watch-8794/configmaps/e2e-watch-test-label-changed,UID:53085ea2-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959383669,Generation:0,CreationTimestamp:2019-06-06 16:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +Jun 6 16:22:48.755: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8794,SelfLink:/api/v1/namespaces/watch-8794/configmaps/e2e-watch-test-label-changed,UID:53085ea2-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959383670,Generation:0,CreationTimestamp:2019-06-06 16:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +STEP: modifying the configmap a second time +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements +STEP: changing the label value of the configmap back +STEP: modifying the configmap a third time +STEP: deleting the configmap +STEP: Expecting to observe an add notification for the watched object when the label value was restored +Jun 6 16:22:58.833: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8794,SelfLink:/api/v1/namespaces/watch-8794/configmaps/e2e-watch-test-label-changed,UID:53085ea2-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959385408,Generation:0,CreationTimestamp:2019-06-06 16:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Jun 6 16:22:58.834: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8794,SelfLink:/api/v1/namespaces/watch-8794/configmaps/e2e-watch-test-label-changed,UID:53085ea2-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959385431,Generation:0,CreationTimestamp:2019-06-06 16:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} 
+Jun 6 16:22:58.834: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8794,SelfLink:/api/v1/namespaces/watch-8794/configmaps/e2e-watch-test-label-changed,UID:53085ea2-8877-11e9-9995-4ad9032ea524,ResourceVersion:3959385438,Generation:0,CreationTimestamp:2019-06-06 16:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:22:58.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-8794" for this suite. 
+Jun 6 16:23:06.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:23:07.139: INFO: namespace watch-8794 deletion completed in 8.294631906s + +• [SLOW TEST:18.559 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:23:07.139: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 +STEP: create the container to handle the HTTPGet hook request. 
+[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the pod with lifecycle hook +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Jun 6 16:23:15.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 6 16:23:15.389: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 6 16:23:17.390: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 6 16:23:17.398: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 6 16:23:19.390: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 6 16:23:19.398: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 6 16:23:21.390: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 6 16:23:21.398: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 6 16:23:23.390: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 6 16:23:23.400: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 6 16:23:25.390: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 6 16:23:25.397: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 6 16:23:27.390: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 6 16:23:27.398: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 6 16:23:29.390: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 6 16:23:29.397: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 6 16:23:31.390: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 6 16:23:31.398: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 6 16:23:33.390: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 6 16:23:33.397: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 6 16:23:35.390: INFO: Waiting for pod pod-with-poststart-exec-hook 
to disappear +Jun 6 16:23:35.399: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:23:35.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-2817" for this suite. +Jun 6 16:23:59.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:23:59.711: INFO: namespace container-lifecycle-hook-2817 deletion completed in 24.304523269s + +• [SLOW TEST:52.572 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when create a pod with lifecycle hook + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:23:59.712: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename 
downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 6 16:23:59.867: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d6f71da-8877-11e9-b3bf-0e7bbe1a64f6" in namespace "downward-api-272" to be "success or failure" +Jun 6 16:23:59.878: INFO: Pod "downwardapi-volume-7d6f71da-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.887219ms +Jun 6 16:24:01.887: INFO: Pod "downwardapi-volume-7d6f71da-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019387123s +Jun 6 16:24:03.897: INFO: Pod "downwardapi-volume-7d6f71da-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029289256s +STEP: Saw pod success +Jun 6 16:24:03.897: INFO: Pod "downwardapi-volume-7d6f71da-8877-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:24:03.903: INFO: Trying to get logs from node cncf-2 pod downwardapi-volume-7d6f71da-8877-11e9-b3bf-0e7bbe1a64f6 container client-container: +STEP: delete the pod +Jun 6 16:24:03.942: INFO: Waiting for pod downwardapi-volume-7d6f71da-8877-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:24:03.949: INFO: Pod downwardapi-volume-7d6f71da-8877-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:24:03.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-272" for this suite. +Jun 6 16:24:09.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:24:10.220: INFO: namespace downward-api-272 deletion completed in 6.262402203s + +• [SLOW TEST:10.508 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:24:10.220: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0777 on node default medium +Jun 6 16:24:10.342: INFO: Waiting up to 5m0s for pod "pod-83aeac79-8877-11e9-b3bf-0e7bbe1a64f6" in namespace "emptydir-4509" to be "success or failure" +Jun 6 16:24:10.349: INFO: Pod "pod-83aeac79-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.096311ms +Jun 6 16:24:12.359: INFO: Pod "pod-83aeac79-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016665456s +Jun 6 16:24:14.367: INFO: Pod "pod-83aeac79-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02517106s +STEP: Saw pod success +Jun 6 16:24:14.368: INFO: Pod "pod-83aeac79-8877-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:24:14.375: INFO: Trying to get logs from node cncf-2 pod pod-83aeac79-8877-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 16:24:14.409: INFO: Waiting for pod pod-83aeac79-8877-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:24:14.416: INFO: Pod pod-83aeac79-8877-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:24:14.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4509" for this suite. +Jun 6 16:24:20.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:24:20.679: INFO: namespace emptydir-4509 deletion completed in 6.255915185s + +• [SLOW TEST:10.459 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:24:20.680: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name s-test-opt-del-89ea9133-8877-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating secret with name s-test-opt-upd-89ea91e7-8877-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating the pod +STEP: Deleting secret s-test-opt-del-89ea9133-8877-11e9-b3bf-0e7bbe1a64f6 +STEP: Updating secret s-test-opt-upd-89ea91e7-8877-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating secret with name s-test-opt-create-89ea9204-8877-11e9-b3bf-0e7bbe1a64f6 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:24:31.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-734" for this suite. 
+Jun 6 16:24:53.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:24:53.311: INFO: namespace projected-734 deletion completed in 22.265960528s + +• [SLOW TEST:32.631 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:24:53.312: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 +[It] should be possible to delete [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:24:53.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-7818" for this suite. +Jun 6 16:24:59.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:24:59.718: INFO: namespace kubelet-test-7818 deletion completed in 6.271593811s + +• [SLOW TEST:6.407 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when scheduling a busybox command that always fails in a pod + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 + should be possible to delete [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:24:59.719: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, 
basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-upd-a1317a5b-8877-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating the pod +STEP: Waiting for pod with text data +STEP: Waiting for pod with binary data +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:25:05.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3674" for this suite. +Jun 6 16:26:03.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:26:04.208: INFO: namespace configmap-3674 deletion completed in 58.271179056s + +• [SLOW TEST:64.489 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Daemon set [Serial] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:26:04.210: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should run and stop complex daemon [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 6 16:26:04.349: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. +Jun 6 16:26:04.367: INFO: Number of nodes with available pods: 0 +Jun 6 16:26:04.367: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Change node label to blue, check that daemon pod is launched. 
+Jun 6 16:26:04.406: INFO: Number of nodes with available pods: 0 +Jun 6 16:26:04.406: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 16:26:05.413: INFO: Number of nodes with available pods: 0 +Jun 6 16:26:05.413: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 16:26:06.414: INFO: Number of nodes with available pods: 0 +Jun 6 16:26:06.414: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 16:26:07.415: INFO: Number of nodes with available pods: 1 +Jun 6 16:26:07.415: INFO: Number of running nodes: 1, number of available pods: 1 +STEP: Update the node label to green, and wait for daemons to be unscheduled +Jun 6 16:26:07.477: INFO: Number of nodes with available pods: 1 +Jun 6 16:26:07.477: INFO: Number of running nodes: 0, number of available pods: 1 +Jun 6 16:26:08.484: INFO: Number of nodes with available pods: 0 +Jun 6 16:26:08.484: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate +Jun 6 16:26:08.508: INFO: Number of nodes with available pods: 0 +Jun 6 16:26:08.509: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 16:26:09.517: INFO: Number of nodes with available pods: 0 +Jun 6 16:26:09.517: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 16:26:10.518: INFO: Number of nodes with available pods: 0 +Jun 6 16:26:10.518: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 16:26:11.517: INFO: Number of nodes with available pods: 0 +Jun 6 16:26:11.517: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 16:26:12.515: INFO: Number of nodes with available pods: 0 +Jun 6 16:26:12.515: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 16:26:13.516: INFO: Number of nodes with available pods: 0 +Jun 6 16:26:13.516: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 16:26:14.521: INFO: Number of nodes with available pods: 1 +Jun 6 16:26:14.521: INFO: Number of 
running nodes: 1, number of available pods: 1 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9776, will wait for the garbage collector to delete the pods +Jun 6 16:26:14.607: INFO: Deleting DaemonSet.extensions daemon-set took: 15.12525ms +Jun 6 16:26:15.008: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.557347ms +Jun 6 16:26:20.722: INFO: Number of nodes with available pods: 0 +Jun 6 16:26:20.722: INFO: Number of running nodes: 0, number of available pods: 0 +Jun 6 16:26:20.729: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9776/daemonsets","resourceVersion":"3959419388"},"items":null} + +Jun 6 16:26:20.734: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9776/pods","resourceVersion":"3959419388"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:26:20.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-9776" for this suite. 
+Jun 6 16:26:26.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:26:27.056: INFO: namespace daemonsets-9776 deletion completed in 6.272383549s + +• [SLOW TEST:22.846 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should run and stop complex daemon [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:26:27.058: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap configmap-6761/configmap-test-d53f104d-8877-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume configMaps +Jun 6 16:26:27.195: INFO: Waiting up to 5m0s for pod "pod-configmaps-d540e8e6-8877-11e9-b3bf-0e7bbe1a64f6" in namespace "configmap-6761" to be "success or failure" +Jun 6 16:26:27.201: INFO: Pod 
"pod-configmaps-d540e8e6-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135377ms +Jun 6 16:26:29.210: INFO: Pod "pod-configmaps-d540e8e6-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014482987s +Jun 6 16:26:31.230: INFO: Pod "pod-configmaps-d540e8e6-8877-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035042941s +STEP: Saw pod success +Jun 6 16:26:31.230: INFO: Pod "pod-configmaps-d540e8e6-8877-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:26:31.241: INFO: Trying to get logs from node cncf-2 pod pod-configmaps-d540e8e6-8877-11e9-b3bf-0e7bbe1a64f6 container env-test: +STEP: delete the pod +Jun 6 16:26:31.285: INFO: Waiting for pod pod-configmaps-d540e8e6-8877-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:26:31.298: INFO: Pod pod-configmaps-d540e8e6-8877-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:26:31.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6761" for this suite. 
+Jun 6 16:26:37.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:26:37.551: INFO: namespace configmap-6761 deletion completed in 6.242231335s + +• [SLOW TEST:10.493 seconds] +[sig-node] ConfigMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32 + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:26:37.552: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +Jun 6 16:26:37.652: INFO: PodSpec: initContainers in 
spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:26:42.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-2762" for this suite. +Jun 6 16:26:48.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:26:49.101: INFO: namespace init-container-2762 deletion completed in 6.306013427s + +• [SLOW TEST:11.550 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:26:49.103: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace statefulset-2635 +[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Initializing watcher for selector baz=blah,foo=bar +STEP: Creating stateful set ss in namespace statefulset-2635 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2635 +Jun 6 16:26:49.240: INFO: Found 0 stateful pods, waiting for 1 +Jun 6 16:26:59.248: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod +Jun 6 16:26:59.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 6 16:26:59.598: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 6 16:26:59.598: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 6 16:26:59.598: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 6 16:26:59.607: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Jun 6 16:27:09.616: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jun 6 16:27:09.616: INFO: Waiting for statefulset status.replicas 
updated to 0 +Jun 6 16:27:09.651: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999221s +Jun 6 16:27:10.661: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993537195s +Jun 6 16:27:11.669: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.984033509s +Jun 6 16:27:12.679: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.975256532s +Jun 6 16:27:13.693: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.965335402s +Jun 6 16:27:14.701: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.951831201s +Jun 6 16:27:15.710: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.943270418s +Jun 6 16:27:16.717: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.934780434s +Jun 6 16:27:17.728: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.92724633s +Jun 6 16:27:18.741: INFO: Verifying statefulset ss doesn't scale past 1 for another 916.223495ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2635 +Jun 6 16:27:19.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:27:20.073: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" +Jun 6 16:27:20.073: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 6 16:27:20.073: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 6 16:27:20.083: INFO: Found 1 stateful pods, waiting for 3 +Jun 6 16:27:30.093: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Jun 6 16:27:30.093: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Jun 6 16:27:30.093: INFO: Waiting for pod ss-2 to 
enter Running - Ready=true, currently Running - Ready=true +STEP: Verifying that stateful set ss was scaled up in order +STEP: Scale down will halt with unhealthy stateful pod +Jun 6 16:27:30.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 6 16:27:30.478: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 6 16:27:30.478: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 6 16:27:30.478: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 6 16:27:30.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 6 16:27:30.888: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 6 16:27:30.888: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 6 16:27:30.888: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 6 16:27:30.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 6 16:27:31.250: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 6 16:27:31.250: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 6 16:27:31.250: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 6 16:27:31.250: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 6 16:27:31.257: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 +Jun 6 
16:27:41.273: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jun 6 16:27:41.273: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Jun 6 16:27:41.273: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Jun 6 16:27:41.307: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999265s +Jun 6 16:27:42.318: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98970047s +Jun 6 16:27:43.327: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979128159s +Jun 6 16:27:44.337: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.969935588s +Jun 6 16:27:45.348: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.959424227s +Jun 6 16:27:46.357: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.948916883s +Jun 6 16:27:47.368: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.940283587s +Jun 6 16:27:48.390: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.929304691s +Jun 6 16:27:49.406: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.900603903s +Jun 6 16:27:50.417: INFO: Verifying statefulset ss doesn't scale past 3 for another 890.784646ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-2635 +Jun 6 16:27:51.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:27:51.781: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" +Jun 6 16:27:51.781: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 6 16:27:51.781: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 6 16:27:51.781: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:27:52.184: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" +Jun 6 16:27:52.184: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 6 16:27:52.184: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 6 16:27:52.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:27:52.476: INFO: rc: 126 +Jun 6 16:27:52.476: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] cannot exec in a stopped state: unknown + command terminated with exit code 126 + [] 0xc0024af7a0 exit status 126 true [0xc002cca0f8 0xc002cca130 0xc002cca148] [0xc002cca0f8 0xc002cca130 0xc002cca148] [0xc002cca118 0xc002cca140] [0x9c00a0 0x9c00a0] 0xc0023f8b40 }: +Command stdout: +cannot exec in a stopped state: unknown + +stderr: +command terminated with exit code 126 + +error: +exit status 126 + +Jun 6 16:28:02.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:28:02.595: INFO: rc: 1 +Jun 6 16:28:02.595: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 
0xc0024afb00 exit status 1 true [0xc002cca150 0xc002cca190 0xc002cca1d0] [0xc002cca150 0xc002cca190 0xc002cca1d0] [0xc002cca170 0xc002cca1b8] [0x9c00a0 0x9c00a0] 0xc0023f8f00 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:28:12.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:28:12.729: INFO: rc: 1 +Jun 6 16:28:12.729: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0020550b0 exit status 1 true [0xc0029c2070 0xc0029c2088 0xc0029c20a0] [0xc0029c2070 0xc0029c2088 0xc0029c20a0] [0xc0029c2080 0xc0029c2098] [0x9c00a0 0x9c00a0] 0xc0031f09c0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:28:22.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:28:22.867: INFO: rc: 1 +Jun 6 16:28:22.867: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc002055410 exit status 1 true [0xc0029c20a8 0xc0029c20c0 0xc0029c20d8] [0xc0029c20a8 0xc0029c20c0 0xc0029c20d8] [0xc0029c20b8 0xc0029c20d0] [0x9c00a0 0x9c00a0] 0xc0031f0d80 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 
16:28:32.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:28:32.993: INFO: rc: 1 +Jun 6 16:28:32.994: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0024afec0 exit status 1 true [0xc002cca1d8 0xc002cca218 0xc002cca240] [0xc002cca1d8 0xc002cca218 0xc002cca240] [0xc002cca1f8 0xc002cca238] [0x9c00a0 0x9c00a0] 0xc0023f93e0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:28:42.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:28:43.123: INFO: rc: 1 +Jun 6 16:28:43.123: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0020557a0 exit status 1 true [0xc0029c20e0 0xc0029c20f8 0xc0029c2110] [0xc0029c20e0 0xc0029c20f8 0xc0029c2110] [0xc0029c20f0 0xc0029c2108] [0x9c00a0 0x9c00a0] 0xc0031f11a0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:28:53.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:28:53.268: INFO: rc: 1 +Jun 6 16:28:53.268: INFO: Waiting 10s to retry failed 
RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc002055b30 exit status 1 true [0xc0029c2118 0xc0029c2130 0xc0029c2148] [0xc0029c2118 0xc0029c2130 0xc0029c2148] [0xc0029c2128 0xc0029c2140] [0x9c00a0 0x9c00a0] 0xc0031f1620 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:29:03.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:29:03.391: INFO: rc: 1 +Jun 6 16:29:03.391: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0009be330 exit status 1 true [0xc002cca248 0xc002cca288 0xc002cca2d8] [0xc002cca248 0xc002cca288 0xc002cca2d8] [0xc002cca280 0xc002cca2c0] [0x9c00a0 0x9c00a0] 0xc0023f9740 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:29:13.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:29:13.519: INFO: rc: 1 +Jun 6 16:29:13.519: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0009be6c0 exit status 1 
true [0xc002cca2e0 0xc002cca2f8 0xc002cca338] [0xc002cca2e0 0xc002cca2f8 0xc002cca338] [0xc002cca2f0 0xc002cca318] [0x9c00a0 0x9c00a0] 0xc0023f9b60 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:29:23.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:29:23.651: INFO: rc: 1 +Jun 6 16:29:23.651: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0009bea50 exit status 1 true [0xc002cca358 0xc002cca380 0xc002cca398] [0xc002cca358 0xc002cca380 0xc002cca398] [0xc002cca378 0xc002cca390] [0x9c00a0 0x9c00a0] 0xc0023f9ec0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:29:33.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:29:33.787: INFO: rc: 1 +Jun 6 16:29:33.787: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0009bf0b0 exit status 1 true [0xc002cca3a0 0xc002cca3c0 0xc002cca3f8] [0xc002cca3a0 0xc002cca3c0 0xc002cca3f8] [0xc002cca3b8 0xc002cca3e8] [0x9c00a0 0x9c00a0] 0xc001bdcba0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:29:43.788: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:29:43.919: INFO: rc: 1 +Jun 6 16:29:43.919: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0009bf6b0 exit status 1 true [0xc002cca418 0xc002cca458 0xc002cca470] [0xc002cca418 0xc002cca458 0xc002cca470] [0xc002cca450 0xc002cca468] [0x9c00a0 0x9c00a0] 0xc001bdd200 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:29:53.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:29:54.045: INFO: rc: 1 +Jun 6 16:29:54.045: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0024ae390 exit status 1 true [0xc002cca028 0xc002cca058 0xc002cca070] [0xc002cca028 0xc002cca058 0xc002cca070] [0xc002cca050 0xc002cca068] [0x9c00a0 0x9c00a0] 0xc0023f83c0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:30:04.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:30:04.155: INFO: rc: 1 +Jun 6 16:30:04.155: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0024ae720 exit status 1 true [0xc002cca078 0xc002cca090 0xc002cca0e0] [0xc002cca078 0xc002cca090 0xc002cca0e0] [0xc002cca088 0xc002cca0c8] [0x9c00a0 0x9c00a0] 0xc0023f8780 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:30:14.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:30:14.285: INFO: rc: 1 +Jun 6 16:30:14.285: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0009be450 exit status 1 true [0xc0029c2010 0xc0029c2028 0xc0029c2040] [0xc0029c2010 0xc0029c2028 0xc0029c2040] [0xc0029c2020 0xc0029c2038] [0x9c00a0 0x9c00a0] 0xc001bdcd80 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:30:24.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:30:24.405: INFO: rc: 1 +Jun 6 16:30:24.406: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0024aea80 exit status 1 true [0xc002cca0e8 
0xc002cca100 0xc002cca138] [0xc002cca0e8 0xc002cca100 0xc002cca138] [0xc002cca0f8 0xc002cca130] [0x9c00a0 0x9c00a0] 0xc0023f8c60 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:30:34.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:30:34.527: INFO: rc: 1 +Jun 6 16:30:34.527: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0024af290 exit status 1 true [0xc002cca140 0xc002cca158 0xc002cca1a8] [0xc002cca140 0xc002cca158 0xc002cca1a8] [0xc002cca150 0xc002cca190] [0x9c00a0 0x9c00a0] 0xc0023f8fc0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:30:44.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:30:44.652: INFO: rc: 1 +Jun 6 16:30:44.652: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0009be810 exit status 1 true [0xc0029c2048 0xc0029c2060 0xc0029c2078] [0xc0029c2048 0xc0029c2060 0xc0029c2078] [0xc0029c2058 0xc0029c2070] [0x9c00a0 0x9c00a0] 0xc001bdd260 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:30:54.653: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:30:54.768: INFO: rc: 1 +Jun 6 16:30:54.768: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0024af650 exit status 1 true [0xc002cca1b8 0xc002cca1e0 0xc002cca230] [0xc002cca1b8 0xc002cca1e0 0xc002cca230] [0xc002cca1d8 0xc002cca218] [0x9c00a0 0x9c00a0] 0xc0023f94a0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:31:04.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:31:04.888: INFO: rc: 1 +Jun 6 16:31:04.888: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0009bebd0 exit status 1 true [0xc0029c2080 0xc0029c2098 0xc0029c20b0] [0xc0029c2080 0xc0029c2098 0xc0029c20b0] [0xc0029c2090 0xc0029c20a8] [0x9c00a0 0x9c00a0] 0xc001bdd860 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:31:14.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:31:15.043: INFO: rc: 1 +Jun 6 16:31:15.043: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0009bf4a0 exit status 1 true [0xc0029c20b8 0xc0029c20d0 0xc0029c20e8] [0xc0029c20b8 0xc0029c20d0 0xc0029c20e8] [0xc0029c20c8 0xc0029c20e0] [0x9c00a0 0x9c00a0] 0xc001bddc20 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:31:25.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:31:25.168: INFO: rc: 1 +Jun 6 16:31:25.168: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0024afa40 exit status 1 true [0xc002cca238 0xc002cca268 0xc002cca2a0] [0xc002cca238 0xc002cca268 0xc002cca2a0] [0xc002cca248 0xc002cca288] [0x9c00a0 0x9c00a0] 0xc0023f98c0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:31:35.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:31:35.293: INFO: rc: 1 +Jun 6 16:31:35.293: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0009bf890 exit status 1 true [0xc0029c20f0 
0xc0029c2108 0xc0029c2120] [0xc0029c20f0 0xc0029c2108 0xc0029c2120] [0xc0029c2100 0xc0029c2118] [0x9c00a0 0x9c00a0] 0xc0031f0000 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:31:45.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:31:45.427: INFO: rc: 1 +Jun 6 16:31:45.427: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0009bfc80 exit status 1 true [0xc0029c2128 0xc0029c2140 0xc0029c2158] [0xc0029c2128 0xc0029c2140 0xc0029c2158] [0xc0029c2138 0xc0029c2150] [0x9c00a0 0x9c00a0] 0xc0031f0420 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:31:55.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:31:55.577: INFO: rc: 1 +Jun 6 16:31:55.577: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0024ae3c0 exit status 1 true [0xc002cca028 0xc002cca058 0xc002cca070] [0xc002cca028 0xc002cca058 0xc002cca070] [0xc002cca050 0xc002cca068] [0x9c00a0 0x9c00a0] 0xc001bdcd80 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:32:05.577: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:32:05.711: INFO: rc: 1 +Jun 6 16:32:05.711: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0009be420 exit status 1 true [0xc0029c2010 0xc0029c2028 0xc0029c2040] [0xc0029c2010 0xc0029c2028 0xc0029c2040] [0xc0029c2020 0xc0029c2038] [0x9c00a0 0x9c00a0] 0xc0023f83c0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:32:15.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:32:15.841: INFO: rc: 1 +Jun 6 16:32:15.841: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0024ae750 exit status 1 true [0xc002cca078 0xc002cca090 0xc002cca0e0] [0xc002cca078 0xc002cca090 0xc002cca0e0] [0xc002cca088 0xc002cca0c8] [0x9c00a0 0x9c00a0] 0xc001bdd260 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:32:25.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:32:25.980: INFO: rc: 1 +Jun 6 16:32:25.980: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0024aeae0 exit status 1 true [0xc002cca0e8 0xc002cca100 0xc002cca138] [0xc002cca0e8 0xc002cca100 0xc002cca138] [0xc002cca0f8 0xc002cca130] [0x9c00a0 0x9c00a0] 0xc001bdd860 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:32:35.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:32:36.102: INFO: rc: 1 +Jun 6 16:32:36.102: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0024af320 exit status 1 true [0xc002cca140 0xc002cca158 0xc002cca1a8] [0xc002cca140 0xc002cca158 0xc002cca1a8] [0xc002cca150 0xc002cca190] [0x9c00a0 0x9c00a0] 0xc001bddc20 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:32:46.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:32:46.238: INFO: rc: 1 +Jun 6 16:32:46.239: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0024af6e0 exit status 1 true [0xc002cca1b8 
0xc002cca1e0 0xc002cca230] [0xc002cca1b8 0xc002cca1e0 0xc002cca230] [0xc002cca1d8 0xc002cca218] [0x9c00a0 0x9c00a0] 0xc0031f0000 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 6 16:32:56.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-2635 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 16:32:56.372: INFO: rc: 1 +Jun 6 16:32:56.372: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: +Jun 6 16:32:56.372: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Jun 6 16:32:56.402: INFO: Deleting all statefulset in ns statefulset-2635 +Jun 6 16:32:56.409: INFO: Scaling statefulset ss to 0 +Jun 6 16:32:56.432: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 6 16:32:56.439: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:32:56.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-2635" for this suite. 
+Jun 6 16:33:02.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:33:02.716: INFO: namespace statefulset-2635 deletion completed in 6.243466653s + +• [SLOW TEST:373.613 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:33:02.717: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 6 16:33:02.835: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1129fc2-8878-11e9-b3bf-0e7bbe1a64f6" in namespace "downward-api-6948" to be "success or failure" +Jun 6 16:33:02.843: INFO: Pod "downwardapi-volume-c1129fc2-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.31022ms +Jun 6 16:33:04.853: INFO: Pod "downwardapi-volume-c1129fc2-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017662908s +Jun 6 16:33:06.861: INFO: Pod "downwardapi-volume-c1129fc2-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025714263s +STEP: Saw pod success +Jun 6 16:33:06.861: INFO: Pod "downwardapi-volume-c1129fc2-8878-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:33:06.867: INFO: Trying to get logs from node cncf-1 pod downwardapi-volume-c1129fc2-8878-11e9-b3bf-0e7bbe1a64f6 container client-container: +STEP: delete the pod +Jun 6 16:33:06.916: INFO: Waiting for pod downwardapi-volume-c1129fc2-8878-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:33:06.933: INFO: Pod downwardapi-volume-c1129fc2-8878-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:33:06.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6948" for this suite. 
+Jun 6 16:33:12.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:33:13.227: INFO: namespace downward-api-6948 deletion completed in 6.287898747s + +• [SLOW TEST:10.511 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:33:13.239: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-c758dae9-8878-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume configMaps +Jun 6 16:33:13.372: INFO: Waiting up to 5m0s for pod "pod-configmaps-c75a845d-8878-11e9-b3bf-0e7bbe1a64f6" in 
namespace "configmap-4171" to be "success or failure" +Jun 6 16:33:13.382: INFO: Pod "pod-configmaps-c75a845d-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.474113ms +Jun 6 16:33:15.391: INFO: Pod "pod-configmaps-c75a845d-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018363484s +Jun 6 16:33:17.399: INFO: Pod "pod-configmaps-c75a845d-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026748953s +STEP: Saw pod success +Jun 6 16:33:17.399: INFO: Pod "pod-configmaps-c75a845d-8878-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:33:17.406: INFO: Trying to get logs from node cncf-2 pod pod-configmaps-c75a845d-8878-11e9-b3bf-0e7bbe1a64f6 container configmap-volume-test: +STEP: delete the pod +Jun 6 16:33:17.449: INFO: Waiting for pod pod-configmaps-c75a845d-8878-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:33:17.455: INFO: Pod pod-configmaps-c75a845d-8878-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:33:17.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-4171" for this suite. 
+Jun 6 16:33:23.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:33:23.721: INFO: namespace configmap-4171 deletion completed in 6.257214958s + +• [SLOW TEST:10.485 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:33:23.722: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 +Jun 6 16:33:23.829: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jun 6 16:33:23.847: INFO: Waiting for terminating namespaces to be deleted... 
+Jun 6 16:33:23.852: INFO: +Logging pods the kubelet thinks is on node cncf-1 before test +Jun 6 16:33:23.864: INFO: wormhole-fr2gz from kube-system started at 2019-06-06 09:17:01 +0000 UTC (1 container statuses recorded) +Jun 6 16:33:23.864: INFO: Container wormhole ready: true, restart count 0 +Jun 6 16:33:23.864: INFO: sonobuoy-e2e-job-f3c5b85dde7b4d05 from heptio-sonobuoy started at 2019-06-06 15:39:28 +0000 UTC (2 container statuses recorded) +Jun 6 16:33:23.864: INFO: Container e2e ready: true, restart count 0 +Jun 6 16:33:23.864: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 6 16:33:23.864: INFO: sonobuoy from heptio-sonobuoy started at 2019-06-06 15:39:26 +0000 UTC (1 container statuses recorded) +Jun 6 16:33:23.864: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jun 6 16:33:23.864: INFO: canal-xbh5x from kube-system started at 2019-06-06 09:16:42 +0000 UTC (2 container statuses recorded) +Jun 6 16:33:23.864: INFO: Container calico-node ready: true, restart count 0 +Jun 6 16:33:23.864: INFO: Container kube-flannel ready: true, restart count 2 +Jun 6 16:33:23.864: INFO: kube-proxy-v8kjv from kube-system started at 2019-06-06 09:16:42 +0000 UTC (1 container statuses recorded) +Jun 6 16:33:23.864: INFO: Container kube-proxy ready: true, restart count 0 +Jun 6 16:33:23.864: INFO: kube-dns-868d878686-p5pfd from kube-system started at 2019-06-06 15:38:23 +0000 UTC (3 container statuses recorded) +Jun 6 16:33:23.864: INFO: Container dnsmasq ready: true, restart count 0 +Jun 6 16:33:23.864: INFO: Container kubedns ready: true, restart count 0 +Jun 6 16:33:23.864: INFO: Container sidecar ready: true, restart count 0 +Jun 6 16:33:23.864: INFO: sonobuoy-systemd-logs-daemon-set-100b28c213194052-nczqr from heptio-sonobuoy started at 2019-06-06 15:39:28 +0000 UTC (2 container statuses recorded) +Jun 6 16:33:23.864: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 6 16:33:23.864: INFO: Container systemd-logs ready: true, 
restart count 0 +Jun 6 16:33:23.864: INFO: +Logging pods the kubelet thinks is on node cncf-2 before test +Jun 6 16:33:23.885: INFO: sonobuoy-systemd-logs-daemon-set-100b28c213194052-z92mc from heptio-sonobuoy started at 2019-06-06 15:39:28 +0000 UTC (2 container statuses recorded) +Jun 6 16:33:23.885: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 6 16:33:23.885: INFO: Container systemd-logs ready: true, restart count 0 +Jun 6 16:33:23.885: INFO: wormhole-bsxhq from kube-system started at 2019-06-06 09:16:11 +0000 UTC (1 container statuses recorded) +Jun 6 16:33:23.885: INFO: Container wormhole ready: true, restart count 0 +Jun 6 16:33:23.885: INFO: kube-dns-autoscaler-6bfccfcbd4-cqwj2 from kube-system started at 2019-06-06 09:16:13 +0000 UTC (1 container statuses recorded) +Jun 6 16:33:23.885: INFO: Container autoscaler ready: true, restart count 0 +Jun 6 16:33:23.885: INFO: kube-dns-868d878686-vl7zl from kube-system started at 2019-06-06 09:16:13 +0000 UTC (3 container statuses recorded) +Jun 6 16:33:23.885: INFO: Container dnsmasq ready: true, restart count 0 +Jun 6 16:33:23.885: INFO: Container kubedns ready: true, restart count 0 +Jun 6 16:33:23.885: INFO: Container sidecar ready: true, restart count 0 +Jun 6 16:33:23.885: INFO: kube-proxy-7cmmv from kube-system started at 2019-06-06 09:15:52 +0000 UTC (1 container statuses recorded) +Jun 6 16:33:23.885: INFO: Container kube-proxy ready: true, restart count 0 +Jun 6 16:33:23.885: INFO: canal-t4msm from kube-system started at 2019-06-06 09:15:52 +0000 UTC (2 container statuses recorded) +Jun 6 16:33:23.885: INFO: Container calico-node ready: true, restart count 0 +Jun 6 16:33:23.885: INFO: Container kube-flannel ready: true, restart count 2 +Jun 6 16:33:23.885: INFO: metrics-server-7c89fd4f7b-bnwzl from kube-system started at 2019-06-06 09:16:13 +0000 UTC (1 container statuses recorded) +Jun 6 16:33:23.885: INFO: Container metrics-server ready: true, restart count 0 +[It] validates that 
NodeSelector is respected if matching [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-d00bed13-8878-11e9-b3bf-0e7bbe1a64f6 42 +STEP: Trying to relaunch the pod, now with labels. +STEP: removing the label kubernetes.io/e2e-d00bed13-8878-11e9-b3bf-0e7bbe1a64f6 off the node cncf-1 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-d00bed13-8878-11e9-b3bf-0e7bbe1a64f6 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:33:32.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-6989" for this suite. 
+Jun 6 16:33:40.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:33:40.341: INFO: namespace sched-pred-6989 deletion completed in 8.25687705s +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 + +• [SLOW TEST:16.619 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 + validates that NodeSelector is respected if matching [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:33:40.341: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should rollback without unnecessary restarts [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 6 16:33:40.494: INFO: Create a RollingUpdate DaemonSet +Jun 6 16:33:40.511: INFO: Check that daemon pods launch on every node of the cluster +Jun 6 16:33:40.530: INFO: Number of nodes with available pods: 0 +Jun 6 16:33:40.530: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 16:33:41.548: INFO: Number of nodes with available pods: 0 +Jun 6 16:33:41.548: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 16:33:42.545: INFO: Number of nodes with available pods: 0 +Jun 6 16:33:42.545: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 16:33:43.922: INFO: Number of nodes with available pods: 2 +Jun 6 16:33:43.923: INFO: Number of running nodes: 2, number of available pods: 2 +Jun 6 16:33:43.923: INFO: Update the DaemonSet to trigger a rollout +Jun 6 16:33:43.943: INFO: Updating DaemonSet daemon-set +Jun 6 16:33:50.968: INFO: Roll back the DaemonSet before rollout is complete +Jun 6 16:33:50.990: INFO: Updating DaemonSet daemon-set +Jun 6 16:33:50.990: INFO: Make sure DaemonSet rollback is complete +Jun 6 16:33:51.000: INFO: Wrong image for pod: daemon-set-4zs9x. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. +Jun 6 16:33:51.000: INFO: Pod daemon-set-4zs9x is not available +Jun 6 16:33:52.025: INFO: Wrong image for pod: daemon-set-4zs9x. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. +Jun 6 16:33:52.025: INFO: Pod daemon-set-4zs9x is not available +Jun 6 16:33:53.021: INFO: Wrong image for pod: daemon-set-4zs9x. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
+Jun 6 16:33:53.021: INFO: Pod daemon-set-4zs9x is not available +Jun 6 16:33:54.020: INFO: Pod daemon-set-9h8jj is not available +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4164, will wait for the garbage collector to delete the pods +Jun 6 16:33:54.117: INFO: Deleting DaemonSet.extensions daemon-set took: 16.864632ms +Jun 6 16:33:54.517: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.388051ms +Jun 6 16:33:56.925: INFO: Number of nodes with available pods: 0 +Jun 6 16:33:56.925: INFO: Number of running nodes: 0, number of available pods: 0 +Jun 6 16:33:56.934: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4164/daemonsets","resourceVersion":"3959502634"},"items":null} + +Jun 6 16:33:56.942: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4164/pods","resourceVersion":"3959502635"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:33:56.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-4164" for this suite. 
+Jun 6 16:34:02.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:34:03.226: INFO: namespace daemonsets-4164 deletion completed in 6.252253435s + +• [SLOW TEST:22.885 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should rollback without unnecessary restarts [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:34:03.230: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 6 16:34:03.410: 
INFO: Waiting up to 5m0s for pod "downwardapi-volume-e52afa5d-8878-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-9264" to be "success or failure" +Jun 6 16:34:03.422: INFO: Pod "downwardapi-volume-e52afa5d-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.753903ms +Jun 6 16:34:05.430: INFO: Pod "downwardapi-volume-e52afa5d-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020181631s +Jun 6 16:34:07.438: INFO: Pod "downwardapi-volume-e52afa5d-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027811211s +STEP: Saw pod success +Jun 6 16:34:07.438: INFO: Pod "downwardapi-volume-e52afa5d-8878-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:34:07.445: INFO: Trying to get logs from node cncf-2 pod downwardapi-volume-e52afa5d-8878-11e9-b3bf-0e7bbe1a64f6 container client-container: +STEP: delete the pod +Jun 6 16:34:07.482: INFO: Waiting for pod downwardapi-volume-e52afa5d-8878-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:34:07.487: INFO: Pod downwardapi-volume-e52afa5d-8878-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:34:07.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9264" for this suite. 
+Jun 6 16:34:13.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:34:13.755: INFO: namespace projected-9264 deletion completed in 6.261126655s + +• [SLOW TEST:10.525 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:34:13.756: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating projection with secret that has name projected-secret-test-map-eb691b08-8878-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume secrets +Jun 6 16:34:13.880: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-eb6acba5-8878-11e9-b3bf-0e7bbe1a64f6" in 
namespace "projected-9256" to be "success or failure" +Jun 6 16:34:13.890: INFO: Pod "pod-projected-secrets-eb6acba5-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.107536ms +Jun 6 16:34:15.898: INFO: Pod "pod-projected-secrets-eb6acba5-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018484252s +Jun 6 16:34:17.908: INFO: Pod "pod-projected-secrets-eb6acba5-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028378771s +STEP: Saw pod success +Jun 6 16:34:17.909: INFO: Pod "pod-projected-secrets-eb6acba5-8878-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:34:17.918: INFO: Trying to get logs from node cncf-1 pod pod-projected-secrets-eb6acba5-8878-11e9-b3bf-0e7bbe1a64f6 container projected-secret-volume-test: +STEP: delete the pod +Jun 6 16:34:17.961: INFO: Waiting for pod pod-projected-secrets-eb6acba5-8878-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:34:17.970: INFO: Pod pod-projected-secrets-eb6acba5-8878-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:34:17.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9256" for this suite. 
+Jun 6 16:34:24.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:34:24.241: INFO: namespace projected-9256 deletion completed in 6.26119628s + +• [SLOW TEST:10.486 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-storage] HostPath + should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] HostPath + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:34:24.242: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename hostpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] HostPath + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 +[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test hostPath mode +Jun 6 16:34:24.362: INFO: Waiting up to 5m0s for pod 
"pod-host-path-test" in namespace "hostpath-2029" to be "success or failure" +Jun 6 16:34:24.369: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.817714ms +Jun 6 16:34:26.378: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015593488s +Jun 6 16:34:28.386: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023074996s +STEP: Saw pod success +Jun 6 16:34:28.386: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" +Jun 6 16:34:28.392: INFO: Trying to get logs from node cncf-2 pod pod-host-path-test container test-container-1: +STEP: delete the pod +Jun 6 16:34:28.426: INFO: Waiting for pod pod-host-path-test to disappear +Jun 6 16:34:28.432: INFO: Pod pod-host-path-test no longer exists +[AfterEach] [sig-storage] HostPath + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:34:28.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "hostpath-2029" for this suite. 
+Jun 6 16:34:34.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:34:34.702: INFO: namespace hostpath-2029 deletion completed in 6.260950443s + +• [SLOW TEST:10.460 seconds] +[sig-storage] HostPath +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 + should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:34:34.703: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0644 on node default medium +Jun 6 16:34:34.817: INFO: Waiting up to 5m0s for pod "pod-f7e5d970-8878-11e9-b3bf-0e7bbe1a64f6" in namespace "emptydir-4806" to be "success or failure" +Jun 6 16:34:34.823: INFO: Pod "pod-f7e5d970-8878-11e9-b3bf-0e7bbe1a64f6": 
Phase="Pending", Reason="", readiness=false. Elapsed: 5.988907ms +Jun 6 16:34:36.834: INFO: Pod "pod-f7e5d970-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016576388s +Jun 6 16:34:38.843: INFO: Pod "pod-f7e5d970-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025246745s +STEP: Saw pod success +Jun 6 16:34:38.843: INFO: Pod "pod-f7e5d970-8878-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:34:38.850: INFO: Trying to get logs from node cncf-1 pod pod-f7e5d970-8878-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 16:34:38.886: INFO: Waiting for pod pod-f7e5d970-8878-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:34:38.893: INFO: Pod pod-f7e5d970-8878-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:34:38.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4806" for this suite. 
+Jun 6 16:34:44.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:34:45.164: INFO: namespace emptydir-4806 deletion completed in 6.261938832s + +• [SLOW TEST:10.461 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:34:45.165: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0777 on node default medium +Jun 6 16:34:45.275: INFO: Waiting up to 5m0s for pod "pod-fe222832-8878-11e9-b3bf-0e7bbe1a64f6" in namespace "emptydir-4662" to be "success or failure" +Jun 6 16:34:45.282: INFO: Pod "pod-fe222832-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", 
Reason="", readiness=false. Elapsed: 7.100779ms +Jun 6 16:34:47.291: INFO: Pod "pod-fe222832-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015900125s +Jun 6 16:34:49.299: INFO: Pod "pod-fe222832-8878-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024461032s +STEP: Saw pod success +Jun 6 16:34:49.299: INFO: Pod "pod-fe222832-8878-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:34:49.305: INFO: Trying to get logs from node cncf-2 pod pod-fe222832-8878-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 16:34:49.337: INFO: Waiting for pod pod-fe222832-8878-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:34:49.343: INFO: Pod pod-fe222832-8878-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:34:49.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4662" for this suite. 
+Jun 6 16:34:55.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:34:55.622: INFO: namespace emptydir-4662 deletion completed in 6.272224713s + +• [SLOW TEST:10.457 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Docker Containers + should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:34:55.622: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test override command +Jun 6 16:34:55.742: INFO: Waiting up to 5m0s for pod "client-containers-045f081d-8879-11e9-b3bf-0e7bbe1a64f6" in namespace "containers-7742" to be "success or failure" +Jun 6 16:34:55.748: INFO: Pod 
"client-containers-045f081d-8879-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.842996ms +Jun 6 16:34:57.759: INFO: Pod "client-containers-045f081d-8879-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017315502s +Jun 6 16:34:59.769: INFO: Pod "client-containers-045f081d-8879-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027241695s +STEP: Saw pod success +Jun 6 16:34:59.769: INFO: Pod "client-containers-045f081d-8879-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:34:59.776: INFO: Trying to get logs from node cncf-1 pod client-containers-045f081d-8879-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 16:34:59.825: INFO: Waiting for pod client-containers-045f081d-8879-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:34:59.831: INFO: Pod client-containers-045f081d-8879-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:34:59.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-7742" for this suite. 
+Jun 6 16:35:05.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:35:06.081: INFO: namespace containers-7742 deletion completed in 6.238537688s + +• [SLOW TEST:10.459 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:35:06.083: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount an API token into pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: getting the auto-created API token +STEP: reading a file in the container +Jun 6 16:35:10.758: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8136 pod-service-account-0aecbc04-8879-11e9-b3bf-0e7bbe1a64f6 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container +Jun 6 16:35:11.121: 
INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8136 pod-service-account-0aecbc04-8879-11e9-b3bf-0e7bbe1a64f6 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container +Jun 6 16:35:11.474: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8136 pod-service-account-0aecbc04-8879-11e9-b3bf-0e7bbe1a64f6 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +[AfterEach] [sig-auth] ServiceAccounts + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:35:11.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-8136" for this suite. +Jun 6 16:35:17.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:35:18.113: INFO: namespace svcaccounts-8136 deletion completed in 6.272002118s + +• [SLOW TEST:12.030 seconds] +[sig-auth] ServiceAccounts +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 + should mount an API token into pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] [sig-node] Events + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] [sig-node] Events + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: 
Creating a kubernetes client +Jun 6 16:35:18.116: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: retrieving the pod +Jun 6 16:35:22.270: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-11c7dc7d-8879-11e9-b3bf-0e7bbe1a64f6,GenerateName:,Namespace:events-1929,SelfLink:/api/v1/namespaces/events-1929/pods/send-events-11c7dc7d-8879-11e9-b3bf-0e7bbe1a64f6,UID:11c86209-8879-11e9-9995-4ad9032ea524,ResourceVersion:3959517984,Generation:0,CreationTimestamp:2019-06-06 16:35:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 224200083,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.1.95/32,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-m2gnk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m2gnk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-m2gnk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e900c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e900e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:35:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:35:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:35:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:35:18 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:10.2.1.95,StartTime:2019-06-06 16:35:18 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-06-06 16:35:20 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://3d1e578998d48b0ba181b3b6391433496bd08e3e461ccf6b67b21c0f916e25e4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} + +STEP: checking for scheduler event about the pod +Jun 6 16:35:24.278: INFO: Saw scheduler event for our pod. +STEP: checking for kubelet event about the pod +Jun 6 16:35:26.287: INFO: Saw kubelet event for our pod. 
+STEP: deleting the pod +[AfterEach] [k8s.io] [sig-node] Events + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:35:26.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-1929" for this suite. +Jun 6 16:36:02.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:36:02.601: INFO: namespace events-1929 deletion completed in 36.277170159s + +• [SLOW TEST:44.485 seconds] +[k8s.io] [sig-node] Events +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Guestbook application + should create and stop a working application [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:36:02.601: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should create and stop a working application [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating all guestbook components +Jun 6 16:36:02.712: INFO: apiVersion: v1 +kind: Service +metadata: + name: redis-slave + labels: + app: redis + role: slave + tier: backend +spec: + ports: + - port: 6379 + selector: + app: redis + role: slave + tier: backend + +Jun 6 16:36:02.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 create -f - --namespace=kubectl-3872' +Jun 6 16:36:03.170: INFO: stderr: "" +Jun 6 16:36:03.170: INFO: stdout: "service/redis-slave created\n" +Jun 6 16:36:03.170: INFO: apiVersion: v1 +kind: Service +metadata: + name: redis-master + labels: + app: redis + role: master + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: redis + role: master + tier: backend + +Jun 6 16:36:03.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 create -f - --namespace=kubectl-3872' +Jun 6 16:36:03.426: INFO: stderr: "" +Jun 6 16:36:03.426: INFO: stdout: "service/redis-master created\n" +Jun 6 16:36:03.426: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Jun 6 16:36:03.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 create -f - --namespace=kubectl-3872' +Jun 6 16:36:03.690: INFO: stderr: "" +Jun 6 16:36:03.690: INFO: stdout: "service/frontend created\n" +Jun 6 16:36:03.691: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: php-redis + image: gcr.io/google-samples/gb-frontend:v6 + resources: + requests: + cpu: 100m + memory: 100Mi + env: + - name: GET_HOSTS_FROM + value: dns + # If your cluster config does not include a dns service, then to + # instead access environment variables to find service host + # info, comment out the 'value: dns' line above, and uncomment the + # line below: + # value: env + ports: + - containerPort: 80 + +Jun 6 16:36:03.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 create -f - --namespace=kubectl-3872' +Jun 6 16:36:03.945: INFO: stderr: "" +Jun 6 16:36:03.945: INFO: stdout: "deployment.apps/frontend created\n" +Jun 6 16:36:03.945: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: redis-master +spec: + replicas: 1 + selector: + matchLabels: + app: redis + role: master + tier: backend + template: + metadata: + labels: + app: redis + role: master + tier: backend + spec: + containers: + - name: master + image: gcr.io/kubernetes-e2e-test-images/redis:1.0 + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Jun 6 16:36:03.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 create -f - --namespace=kubectl-3872' +Jun 6 16:36:04.183: INFO: stderr: "" +Jun 6 16:36:04.183: INFO: stdout: "deployment.apps/redis-master created\n" +Jun 6 16:36:04.184: INFO: 
apiVersion: apps/v1 +kind: Deployment +metadata: + name: redis-slave +spec: + replicas: 2 + selector: + matchLabels: + app: redis + role: slave + tier: backend + template: + metadata: + labels: + app: redis + role: slave + tier: backend + spec: + containers: + - name: slave + image: gcr.io/google-samples/gb-redisslave:v3 + resources: + requests: + cpu: 100m + memory: 100Mi + env: + - name: GET_HOSTS_FROM + value: dns + # If your cluster config does not include a dns service, then to + # instead access an environment variable to find the master + # service's host, comment out the 'value: dns' line above, and + # uncomment the line below: + # value: env + ports: + - containerPort: 6379 + +Jun 6 16:36:04.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 create -f - --namespace=kubectl-3872' +Jun 6 16:36:04.454: INFO: stderr: "" +Jun 6 16:36:04.454: INFO: stdout: "deployment.apps/redis-slave created\n" +STEP: validating guestbook app +Jun 6 16:36:04.454: INFO: Waiting for all frontend pods to be Running. +Jun 6 16:36:09.505: INFO: Waiting for frontend to serve content. +Jun 6 16:36:09.543: INFO: Trying to add a new entry to the guestbook. +Jun 6 16:36:09.582: INFO: Verifying that added entry can be retrieved. +STEP: using delete to clean up resources +Jun 6 16:36:09.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete --grace-period=0 --force -f - --namespace=kubectl-3872' +Jun 6 16:36:09.779: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Jun 6 16:36:09.779: INFO: stdout: "service \"redis-slave\" force deleted\n" +STEP: using delete to clean up resources +Jun 6 16:36:09.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete --grace-period=0 --force -f - --namespace=kubectl-3872' +Jun 6 16:36:09.939: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 6 16:36:09.939: INFO: stdout: "service \"redis-master\" force deleted\n" +STEP: using delete to clean up resources +Jun 6 16:36:09.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete --grace-period=0 --force -f - --namespace=kubectl-3872' +Jun 6 16:36:10.117: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 6 16:36:10.117: INFO: stdout: "service \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Jun 6 16:36:10.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete --grace-period=0 --force -f - --namespace=kubectl-3872' +Jun 6 16:36:10.250: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 6 16:36:10.250: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Jun 6 16:36:10.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete --grace-period=0 --force -f - --namespace=kubectl-3872' +Jun 6 16:36:10.390: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Jun 6 16:36:10.390: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" +STEP: using delete to clean up resources +Jun 6 16:36:10.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete --grace-period=0 --force -f - --namespace=kubectl-3872' +Jun 6 16:36:10.532: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 6 16:36:10.532: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:36:10.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3872" for this suite. +Jun 6 16:36:52.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:36:52.796: INFO: namespace kubectl-3872 deletion completed in 42.257313033s + +• [SLOW TEST:50.195 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Guestbook application + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create and stop a working application [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl describe + should check if kubectl describe prints relevant information for rc and pods 
[Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:36:52.797: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 6 16:36:52.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 version --client' +Jun 6 16:36:52.981: INFO: stderr: "" +Jun 6 16:36:52.981: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"14\", GitVersion:\"v1.14.2\", GitCommit:\"66049e3b21efe110454d67df4fa62b08ea79a19b\", GitTreeState:\"clean\", BuildDate:\"2019-05-16T16:23:09Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +Jun 6 16:36:52.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 create -f - --namespace=kubectl-738' +Jun 6 16:36:53.276: INFO: stderr: "" +Jun 6 16:36:53.276: INFO: stdout: "replicationcontroller/redis-master created\n" +Jun 6 16:36:53.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 create -f - --namespace=kubectl-738' +Jun 6 16:36:53.526: INFO: stderr: "" +Jun 6 16:36:53.527: INFO: stdout: "service/redis-master created\n" +STEP: 
Waiting for Redis master to start. +Jun 6 16:36:54.544: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 16:36:54.544: INFO: Found 0 / 1 +Jun 6 16:36:55.535: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 16:36:55.535: INFO: Found 0 / 1 +Jun 6 16:36:56.539: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 16:36:56.539: INFO: Found 1 / 1 +Jun 6 16:36:56.539: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Jun 6 16:36:56.548: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 16:36:56.549: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Jun 6 16:36:56.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 describe pod redis-master-zl584 --namespace=kubectl-738' +Jun 6 16:36:56.710: INFO: stderr: "" +Jun 6 16:36:56.710: INFO: stdout: "Name: redis-master-zl584\nNamespace: kubectl-738\nPriority: 0\nPriorityClassName: \nNode: cncf-2/51.68.41.114\nStart Time: Thu, 06 Jun 2019 16:36:53 +0000\nLabels: app=redis\n role=master\nAnnotations: cni.projectcalico.org/podIP: 10.2.0.176/32\nStatus: Running\nIP: 10.2.0.176\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://727bda516840ece8c6e57713a2c001d6426144a85f81af1304ab3bb934d0bd28\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 06 Jun 2019 16:36:55 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-b7kj2 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-b7kj2:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-b7kj2\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: 
\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-738/redis-master-zl584 to cncf-2\n Normal Pulling 2s kubelet, cncf-2 Pulling image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\"\n Normal Pulled 1s kubelet, cncf-2 Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\"\n Normal Created 1s kubelet, cncf-2 Created container redis-master\n Normal Started 1s kubelet, cncf-2 Started container redis-master\n" +Jun 6 16:36:56.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 describe rc redis-master --namespace=kubectl-738' +Jun 6 16:36:56.909: INFO: stderr: "" +Jun 6 16:36:56.909: INFO: stdout: "Name: redis-master\nNamespace: kubectl-738\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: redis-master-zl584\n" +Jun 6 16:36:56.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 describe service redis-master --namespace=kubectl-738' +Jun 6 16:36:57.062: INFO: stderr: "" +Jun 6 16:36:57.062: INFO: stdout: "Name: redis-master\nNamespace: kubectl-738\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.3.240.206\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.2.0.176:6379\nSession Affinity: None\nEvents: \n" +Jun 6 16:36:57.071: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-489975799 describe node cncf-1' +Jun 6 16:36:57.275: INFO: stderr: "" +Jun 6 16:36:57.275: INFO: stdout: "Name: cncf-1\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=0ff9e048-50af-4b2e-bc61-72611d23fca7\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=GRA5\n failure-domain.beta.kubernetes.io/zone=nova\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=cncf-1\n kubernetes.io/os=linux\nAnnotations: flannel.alpha.coreos.com/backend-data: {\"VtepMAC\":\"16:88:88:40:45:13\"}\n flannel.alpha.coreos.com/backend-type: vxlan\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 51.68.79.184\n node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4IPIPTunnelAddr: 10.2.1.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Thu, 06 Jun 2019 09:16:41 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 06 Jun 2019 16:36:25 +0000 Thu, 06 Jun 2019 09:16:41 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 06 Jun 2019 16:36:25 +0000 Thu, 06 Jun 2019 09:16:41 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 06 Jun 2019 16:36:25 +0000 Thu, 06 Jun 2019 09:16:41 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 06 Jun 2019 16:36:25 +0000 Thu, 06 Jun 2019 09:17:01 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 51.68.79.184\n Hostname: cncf-1\nCapacity:\n attachable-volumes-cinder: 256\n cpu: 2\n ephemeral-storage: 48375392Ki\n hugepages-2Mi: 0\n memory: 6968948Ki\n pods: 110\nAllocatable:\n attachable-volumes-cinder: 256\n cpu: 1900m\n ephemeral-storage: 48375392Ki\n hugepages-2Mi: 0\n memory: 5408372Ki\n pods: 110\nSystem 
Info:\n Machine ID: 2a9e99d566574dc4b7b55cd7dcbc3a09\n System UUID: 2A9E99D5-6657-4DC4-B7B5-5CD7DCBC3A09\n Boot ID: 83ea49f5-058a-471a-ae15-f060810d42bc\n Kernel Version: 4.14.96-coreos-r1\n OS Image: Container Linux by CoreOS 1967.6.0 (Rhyolite)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.6.1\n Kubelet Version: v1.14.2\n Kube-Proxy Version: v1.14.2\nPodCIDR: 10.2.1.0/24\nProviderID: openstack:///2a9e99d5-6657-4dc4-b7b5-5cd7dcbc3a09\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n heptio-sonobuoy sonobuoy 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57m\n heptio-sonobuoy sonobuoy-e2e-job-f3c5b85dde7b4d05 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57m\n heptio-sonobuoy sonobuoy-systemd-logs-daemon-set-100b28c213194052-nczqr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57m\n kube-system canal-xbh5x 250m (13%) 0 (0%) 0 (0%) 0 (0%) 7h20m\n kube-system kube-dns-868d878686-p5pfd 260m (13%) 0 (0%) 110Mi (2%) 170Mi (3%) 58m\n kube-system kube-proxy-v8kjv 100m (5%) 0 (0%) 200Mi (3%) 200Mi (3%) 7h20m\n kube-system wormhole-fr2gz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7h19m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 610m (32%) 0 (0%)\n memory 310Mi (5%) 370Mi (7%)\n ephemeral-storage 0 (0%) 0 (0%)\n attachable-volumes-cinder 0 0\nEvents: \n" +Jun 6 16:36:57.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 describe namespace kubectl-738' +Jun 6 16:36:57.464: INFO: stderr: "" +Jun 6 16:36:57.464: INFO: stdout: "Name: kubectl-738\nLabels: e2e-framework=kubectl\n e2e-run=46805cbb-8871-11e9-b3bf-0e7bbe1a64f6\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" +[AfterEach] [sig-cli] Kubectl client + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:36:57.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-738" for this suite. +Jun 6 16:37:19.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:37:19.737: INFO: namespace kubectl-738 deletion completed in 22.265059226s + +• [SLOW TEST:26.940 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl describe + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:37:19.738: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward api env vars +Jun 6 16:37:19.866: INFO: Waiting up to 5m0s for pod "downward-api-5a465b41-8879-11e9-b3bf-0e7bbe1a64f6" in namespace "downward-api-4054" to be "success or failure" +Jun 6 16:37:19.879: INFO: Pod "downward-api-5a465b41-8879-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.699036ms +Jun 6 16:37:21.887: INFO: Pod "downward-api-5a465b41-8879-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021234106s +Jun 6 16:37:23.896: INFO: Pod "downward-api-5a465b41-8879-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030271971s +STEP: Saw pod success +Jun 6 16:37:23.896: INFO: Pod "downward-api-5a465b41-8879-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:37:23.906: INFO: Trying to get logs from node cncf-1 pod downward-api-5a465b41-8879-11e9-b3bf-0e7bbe1a64f6 container dapi-container: +STEP: delete the pod +Jun 6 16:37:23.950: INFO: Waiting for pod downward-api-5a465b41-8879-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:37:23.957: INFO: Pod downward-api-5a465b41-8879-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:37:23.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4054" for this suite. 
+Jun 6 16:37:31.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:37:31.254: INFO: namespace downward-api-4054 deletion completed in 7.289181613s + +• [SLOW TEST:11.516 seconds] +[sig-node] Downward API +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:37:31.255: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 6 16:37:31.361: INFO: Waiting up to 5m0s 
for pod "downwardapi-volume-6120f838-8879-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-7617" to be "success or failure" +Jun 6 16:37:31.366: INFO: Pod "downwardapi-volume-6120f838-8879-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488738ms +Jun 6 16:37:33.374: INFO: Pod "downwardapi-volume-6120f838-8879-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012898728s +STEP: Saw pod success +Jun 6 16:37:33.374: INFO: Pod "downwardapi-volume-6120f838-8879-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:37:33.380: INFO: Trying to get logs from node cncf-2 pod downwardapi-volume-6120f838-8879-11e9-b3bf-0e7bbe1a64f6 container client-container: +STEP: delete the pod +Jun 6 16:37:33.420: INFO: Waiting for pod downwardapi-volume-6120f838-8879-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:37:33.426: INFO: Pod downwardapi-volume-6120f838-8879-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:37:33.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7617" for this suite. 
+Jun 6 16:37:39.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:37:39.704: INFO: namespace projected-7617 deletion completed in 6.271743305s + +• [SLOW TEST:8.449 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Networking + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:37:39.705: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Performing setup for networking test in namespace pod-network-test-4523 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Jun 6 16:37:39.807: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +STEP: Creating test pods +Jun 
6 16:38:04.133: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.2.0.178:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4523 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 16:38:04.134: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 16:38:04.342: INFO: Found all expected endpoints: [netserver-0] +Jun 6 16:38:04.354: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.2.1.100:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4523 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 16:38:04.354: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 16:38:04.566: INFO: Found all expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:38:04.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-4523" for this suite. 
+Jun 6 16:38:26.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:38:26.861: INFO: namespace pod-network-test-4523 deletion completed in 22.284635775s + +• [SLOW TEST:47.156 seconds] +[sig-network] Networking +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 + Granular Checks: Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:38:26.861: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 +Jun 6 16:38:26.960: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jun 6 16:38:26.978: INFO: Waiting for terminating namespaces 
to be deleted... +Jun 6 16:38:26.985: INFO: +Logging pods the kubelet thinks is on node cncf-1 before test +Jun 6 16:38:27.010: INFO: kube-proxy-v8kjv from kube-system started at 2019-06-06 09:16:42 +0000 UTC (1 container statuses recorded) +Jun 6 16:38:27.010: INFO: Container kube-proxy ready: true, restart count 0 +Jun 6 16:38:27.010: INFO: kube-dns-868d878686-p5pfd from kube-system started at 2019-06-06 15:38:23 +0000 UTC (3 container statuses recorded) +Jun 6 16:38:27.010: INFO: Container dnsmasq ready: true, restart count 0 +Jun 6 16:38:27.010: INFO: Container kubedns ready: true, restart count 0 +Jun 6 16:38:27.010: INFO: Container sidecar ready: true, restart count 0 +Jun 6 16:38:27.010: INFO: sonobuoy-systemd-logs-daemon-set-100b28c213194052-nczqr from heptio-sonobuoy started at 2019-06-06 15:39:28 +0000 UTC (2 container statuses recorded) +Jun 6 16:38:27.010: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 6 16:38:27.010: INFO: Container systemd-logs ready: true, restart count 0 +Jun 6 16:38:27.010: INFO: wormhole-fr2gz from kube-system started at 2019-06-06 09:17:01 +0000 UTC (1 container statuses recorded) +Jun 6 16:38:27.010: INFO: Container wormhole ready: true, restart count 0 +Jun 6 16:38:27.010: INFO: sonobuoy-e2e-job-f3c5b85dde7b4d05 from heptio-sonobuoy started at 2019-06-06 15:39:28 +0000 UTC (2 container statuses recorded) +Jun 6 16:38:27.010: INFO: Container e2e ready: true, restart count 0 +Jun 6 16:38:27.010: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 6 16:38:27.010: INFO: sonobuoy from heptio-sonobuoy started at 2019-06-06 15:39:26 +0000 UTC (1 container statuses recorded) +Jun 6 16:38:27.010: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jun 6 16:38:27.010: INFO: canal-xbh5x from kube-system started at 2019-06-06 09:16:42 +0000 UTC (2 container statuses recorded) +Jun 6 16:38:27.010: INFO: Container calico-node ready: true, restart count 0 +Jun 6 16:38:27.011: INFO: Container 
kube-flannel ready: true, restart count 2 +Jun 6 16:38:27.011: INFO: +Logging pods the kubelet thinks is on node cncf-2 before test +Jun 6 16:38:27.035: INFO: sonobuoy-systemd-logs-daemon-set-100b28c213194052-z92mc from heptio-sonobuoy started at 2019-06-06 15:39:28 +0000 UTC (2 container statuses recorded) +Jun 6 16:38:27.036: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 6 16:38:27.036: INFO: Container systemd-logs ready: true, restart count 0 +Jun 6 16:38:27.036: INFO: kube-proxy-7cmmv from kube-system started at 2019-06-06 09:15:52 +0000 UTC (1 container statuses recorded) +Jun 6 16:38:27.036: INFO: Container kube-proxy ready: true, restart count 0 +Jun 6 16:38:27.036: INFO: canal-t4msm from kube-system started at 2019-06-06 09:15:52 +0000 UTC (2 container statuses recorded) +Jun 6 16:38:27.036: INFO: Container calico-node ready: true, restart count 0 +Jun 6 16:38:27.036: INFO: Container kube-flannel ready: true, restart count 2 +Jun 6 16:38:27.036: INFO: wormhole-bsxhq from kube-system started at 2019-06-06 09:16:11 +0000 UTC (1 container statuses recorded) +Jun 6 16:38:27.036: INFO: Container wormhole ready: true, restart count 0 +Jun 6 16:38:27.036: INFO: kube-dns-autoscaler-6bfccfcbd4-cqwj2 from kube-system started at 2019-06-06 09:16:13 +0000 UTC (1 container statuses recorded) +Jun 6 16:38:27.036: INFO: Container autoscaler ready: true, restart count 0 +Jun 6 16:38:27.037: INFO: kube-dns-868d878686-vl7zl from kube-system started at 2019-06-06 09:16:13 +0000 UTC (3 container statuses recorded) +Jun 6 16:38:27.037: INFO: Container dnsmasq ready: true, restart count 0 +Jun 6 16:38:27.037: INFO: Container kubedns ready: true, restart count 0 +Jun 6 16:38:27.037: INFO: Container sidecar ready: true, restart count 0 +Jun 6 16:38:27.037: INFO: metrics-server-7c89fd4f7b-bnwzl from kube-system started at 2019-06-06 09:16:13 +0000 UTC (1 container statuses recorded) +Jun 6 16:38:27.037: INFO: Container metrics-server ready: true, restart count 0 
+[It] validates that NodeSelector is respected if not matching [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Trying to schedule Pod with nonempty NodeSelector. +STEP: Considering event: +Type = [Warning], Name = [restricted-pod.15a5a9674c02ba4c], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:38:28.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-4380" for this suite. +Jun 6 16:38:34.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:38:34.371: INFO: namespace sched-pred-4380 deletion completed in 6.27520347s +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 + +• [SLOW TEST:7.511 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 + validates that NodeSelector is respected if not matching [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run default + should create an rc or deployment from an image [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:38:34.372: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl run default + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 +[It] should create an rc or deployment from an image [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 6 16:38:34.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3682' +Jun 6 16:38:34.635: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Jun 6 16:38:34.635: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" +STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created +[AfterEach] [k8s.io] Kubectl run default + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1324 +Jun 6 16:38:36.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete deployment e2e-test-nginx-deployment --namespace=kubectl-3682' +Jun 6 16:38:36.851: INFO: stderr: "" +Jun 6 16:38:36.851: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:38:36.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3682" for this suite. 
+Jun 6 16:40:42.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:40:43.142: INFO: namespace kubectl-3682 deletion completed in 2m6.282413648s + +• [SLOW TEST:128.770 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl run default + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create an rc or deployment from an image [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:40:43.142: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 6 16:40:43.254: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Jun 6 16:40:43.274: INFO: Pod name sample-pod: Found 0 pods out of 1 +Jun 6 16:40:48.282: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Jun 6 16:40:48.282: INFO: Creating deployment "test-rolling-update-deployment" +Jun 6 16:40:48.298: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Jun 6 16:40:48.310: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created +Jun 6 16:40:50.329: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Jun 6 16:40:50.337: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695436048, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695436048, loc:(*time.Location)(0x8a140e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695436048, loc:(*time.Location)(0x8a140e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695436048, loc:(*time.Location)(0x8a140e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67599b4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 6 16:40:52.351: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) 
+[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Jun 6 16:40:52.378: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-290,SelfLink:/apis/apps/v1/namespaces/deployment-290/deployments/test-rolling-update-deployment,UID:d6834d5d-8879-11e9-9995-4ad9032ea524,ResourceVersion:3959578047,Generation:1,CreationTimestamp:2019-06-06 16:40:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-06-06 16:40:48 +0000 UTC 2019-06-06 16:40:48 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-06-06 16:40:51 +0000 UTC 2019-06-06 16:40:48 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-67599b4d9" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} + +Jun 6 16:40:52.387: INFO: New ReplicaSet "test-rolling-update-deployment-67599b4d9" of Deployment "test-rolling-update-deployment": +&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-67599b4d9,GenerateName:,Namespace:deployment-290,SelfLink:/apis/apps/v1/namespaces/deployment-290/replicasets/test-rolling-update-deployment-67599b4d9,UID:d6881814-8879-11e9-9995-4ad9032ea524,ResourceVersion:3959578031,Generation:1,CreationTimestamp:2019-06-06 16:40:48 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 67599b4d9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment d6834d5d-8879-11e9-9995-4ad9032ea524 0xc000c73fa0 0xc000c73fa1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67599b4d9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 67599b4d9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} +Jun 6 16:40:52.387: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Jun 6 16:40:52.388: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-290,SelfLink:/apis/apps/v1/namespaces/deployment-290/replicasets/test-rolling-update-controller,UID:d383cbf2-8879-11e9-9995-4ad9032ea524,ResourceVersion:3959578046,Generation:2,CreationTimestamp:2019-06-06 16:40:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment d6834d5d-8879-11e9-9995-4ad9032ea524 0xc000c73ec7 0xc000c73ec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 6 16:40:52.393: INFO: Pod "test-rolling-update-deployment-67599b4d9-q5tgw" is available: 
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-67599b4d9-q5tgw,GenerateName:test-rolling-update-deployment-67599b4d9-,Namespace:deployment-290,SelfLink:/api/v1/namespaces/deployment-290/pods/test-rolling-update-deployment-67599b4d9-q5tgw,UID:d68957e0-8879-11e9-9995-4ad9032ea524,ResourceVersion:3959578030,Generation:0,CreationTimestamp:2019-06-06 16:40:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 67599b4d9,},Annotations:map[string]string{cni.projectcalico.org/podIP: 10.2.1.102/32,},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-67599b4d9 d6881814-8879-11e9-9995-4ad9032ea524 0xc0002fc2c0 0xc0002fc2c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-v6kdl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-v6kdl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-v6kdl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:cncf-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0002fc7f0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0002fc840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:40:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:40:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:40:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-06 16:40:48 +0000 UTC }],Message:,Reason:,HostIP:51.68.79.184,PodIP:10.2.1.102,StartTime:2019-06-06 16:40:48 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-06-06 16:40:50 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://1841d5fdc06bc2a16eb7f811390f03f6e5717bd53e06aeb9cbaec9f10dafd3e0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:40:52.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-290" for this suite. 
+Jun 6 16:40:58.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:40:58.663: INFO: namespace deployment-290 deletion completed in 6.263853916s + +• [SLOW TEST:15.521 seconds] +[sig-apps] Deployment +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSS +------------------------------ +[k8s.io] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:40:58.663: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 6 16:40:58.754: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: creating the pod +STEP: submitting the pod to kubernetes +[AfterEach] [k8s.io] Pods + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:41:03.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4087" for this suite. +Jun 6 16:41:43.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:41:43.453: INFO: namespace pods-4087 deletion completed in 40.342348189s + +• [SLOW TEST:44.790 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Service endpoints latency + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:41:43.457: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename svc-latency +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be very high [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating replication controller svc-latency-rc in namespace 
svc-latency-4814 +I0606 16:41:43.571865 15 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4814, replica count: 1 +I0606 16:41:44.622471 15 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0606 16:41:45.623078 15 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0606 16:41:46.623593 15 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jun 6 16:41:46.757: INFO: Created: latency-svc-lq7g5 +Jun 6 16:41:46.759: INFO: Got endpoints: latency-svc-lq7g5 [35.434183ms] +Jun 6 16:41:46.781: INFO: Created: latency-svc-6wdl6 +Jun 6 16:41:46.787: INFO: Created: latency-svc-7hjvw +Jun 6 16:41:46.789: INFO: Got endpoints: latency-svc-6wdl6 [29.219437ms] +Jun 6 16:41:46.793: INFO: Created: latency-svc-gd565 +Jun 6 16:41:46.794: INFO: Got endpoints: latency-svc-7hjvw [34.359722ms] +Jun 6 16:41:46.805: INFO: Got endpoints: latency-svc-gd565 [45.47011ms] +Jun 6 16:41:46.806: INFO: Created: latency-svc-hsfdp +Jun 6 16:41:46.808: INFO: Got endpoints: latency-svc-hsfdp [48.046639ms] +Jun 6 16:41:46.811: INFO: Created: latency-svc-9f5pz +Jun 6 16:41:46.814: INFO: Created: latency-svc-bm7xg +Jun 6 16:41:46.819: INFO: Got endpoints: latency-svc-9f5pz [59.654168ms] +Jun 6 16:41:46.822: INFO: Got endpoints: latency-svc-bm7xg [62.203447ms] +Jun 6 16:41:46.823: INFO: Created: latency-svc-5pz94 +Jun 6 16:41:46.830: INFO: Created: latency-svc-xvp62 +Jun 6 16:41:46.832: INFO: Got endpoints: latency-svc-5pz94 [72.22465ms] +Jun 6 16:41:46.836: INFO: Got endpoints: latency-svc-xvp62 [74.38364ms] +Jun 6 16:41:46.841: INFO: Created: latency-svc-cbdlz +Jun 6 16:41:46.844: INFO: Created: latency-svc-4htch +Jun 6 16:41:46.846: INFO: Got endpoints: 
latency-svc-cbdlz [86.162134ms] +Jun 6 16:41:46.851: INFO: Got endpoints: latency-svc-4htch [89.748353ms] +Jun 6 16:41:46.852: INFO: Created: latency-svc-z4j9d +Jun 6 16:41:46.858: INFO: Got endpoints: latency-svc-z4j9d [25.462377ms] +Jun 6 16:41:46.858: INFO: Created: latency-svc-fhfld +Jun 6 16:41:46.881: INFO: Created: latency-svc-665z4 +Jun 6 16:41:46.881: INFO: Got endpoints: latency-svc-fhfld [120.972358ms] +Jun 6 16:41:46.882: INFO: Created: latency-svc-t2qhd +Jun 6 16:41:46.897: INFO: Got endpoints: latency-svc-t2qhd [136.814964ms] +Jun 6 16:41:46.897: INFO: Created: latency-svc-v89t4 +Jun 6 16:41:46.897: INFO: Got endpoints: latency-svc-665z4 [136.284891ms] +Jun 6 16:41:46.901: INFO: Created: latency-svc-97pzd +Jun 6 16:41:46.905: INFO: Got endpoints: latency-svc-v89t4 [146.002049ms] +Jun 6 16:41:46.919: INFO: Created: latency-svc-q9th5 +Jun 6 16:41:46.919: INFO: Got endpoints: latency-svc-97pzd [158.643108ms] +Jun 6 16:41:46.925: INFO: Created: latency-svc-9j62p +Jun 6 16:41:46.927: INFO: Got endpoints: latency-svc-q9th5 [137.794965ms] +Jun 6 16:41:46.934: INFO: Created: latency-svc-99qwp +Jun 6 16:41:46.934: INFO: Got endpoints: latency-svc-9j62p [139.851913ms] +Jun 6 16:41:46.945: INFO: Got endpoints: latency-svc-99qwp [139.868957ms] +Jun 6 16:41:46.946: INFO: Created: latency-svc-gpqnf +Jun 6 16:41:46.952: INFO: Got endpoints: latency-svc-gpqnf [143.461915ms] +Jun 6 16:41:46.953: INFO: Created: latency-svc-nn466 +Jun 6 16:41:46.960: INFO: Got endpoints: latency-svc-nn466 [140.584652ms] +Jun 6 16:41:46.960: INFO: Created: latency-svc-t2vvk +Jun 6 16:41:46.966: INFO: Created: latency-svc-9lxwr +Jun 6 16:41:46.966: INFO: Got endpoints: latency-svc-t2vvk [144.021811ms] +Jun 6 16:41:46.972: INFO: Created: latency-svc-rl96l +Jun 6 16:41:46.972: INFO: Got endpoints: latency-svc-9lxwr [135.812084ms] +Jun 6 16:41:46.977: INFO: Created: latency-svc-k9l87 +Jun 6 16:41:46.979: INFO: Got endpoints: latency-svc-rl96l [132.191106ms] +Jun 6 16:41:46.985: INFO: Got 
endpoints: latency-svc-k9l87 [133.595772ms] +Jun 6 16:41:46.985: INFO: Created: latency-svc-wl7c2 +Jun 6 16:41:46.994: INFO: Got endpoints: latency-svc-wl7c2 [136.655059ms] +Jun 6 16:41:46.994: INFO: Created: latency-svc-sqlzg +Jun 6 16:41:46.996: INFO: Created: latency-svc-85mn8 +Jun 6 16:41:47.000: INFO: Got endpoints: latency-svc-sqlzg [119.374933ms] +Jun 6 16:41:47.007: INFO: Got endpoints: latency-svc-85mn8 [109.483143ms] +Jun 6 16:41:47.007: INFO: Created: latency-svc-4qc2v +Jun 6 16:41:47.010: INFO: Created: latency-svc-hskg5 +Jun 6 16:41:47.013: INFO: Got endpoints: latency-svc-4qc2v [114.7503ms] +Jun 6 16:41:47.015: INFO: Created: latency-svc-qf96q +Jun 6 16:41:47.021: INFO: Got endpoints: latency-svc-qf96q [101.433845ms] +Jun 6 16:41:47.022: INFO: Got endpoints: latency-svc-hskg5 [116.121642ms] +Jun 6 16:41:47.026: INFO: Created: latency-svc-rdlt9 +Jun 6 16:41:47.034: INFO: Created: latency-svc-hnvvt +Jun 6 16:41:47.034: INFO: Got endpoints: latency-svc-rdlt9 [107.163418ms] +Jun 6 16:41:47.040: INFO: Created: latency-svc-6f9w7 +Jun 6 16:41:47.040: INFO: Got endpoints: latency-svc-hnvvt [105.965921ms] +Jun 6 16:41:47.044: INFO: Created: latency-svc-dl2g7 +Jun 6 16:41:47.047: INFO: Got endpoints: latency-svc-6f9w7 [101.792048ms] +Jun 6 16:41:47.049: INFO: Created: latency-svc-tqlrd +Jun 6 16:41:47.052: INFO: Got endpoints: latency-svc-dl2g7 [99.823306ms] +Jun 6 16:41:47.058: INFO: Created: latency-svc-svswt +Jun 6 16:41:47.064: INFO: Created: latency-svc-tqddr +Jun 6 16:41:47.069: INFO: Created: latency-svc-ngnwr +Jun 6 16:41:47.076: INFO: Created: latency-svc-vqgxw +Jun 6 16:41:47.081: INFO: Created: latency-svc-dbfgl +Jun 6 16:41:47.088: INFO: Created: latency-svc-npgsk +Jun 6 16:41:47.095: INFO: Created: latency-svc-grcww +Jun 6 16:41:47.102: INFO: Created: latency-svc-z94km +Jun 6 16:41:47.106: INFO: Got endpoints: latency-svc-tqlrd [146.217296ms] +Jun 6 16:41:47.108: INFO: Created: latency-svc-nqjsp +Jun 6 16:41:47.118: INFO: Created: latency-svc-pxzvt 
+Jun 6 16:41:47.125: INFO: Created: latency-svc-c6ggg +Jun 6 16:41:47.133: INFO: Created: latency-svc-jzs2v +Jun 6 16:41:47.139: INFO: Created: latency-svc-8n8xc +Jun 6 16:41:47.147: INFO: Created: latency-svc-4gnqk +Jun 6 16:41:47.152: INFO: Created: latency-svc-624xk +Jun 6 16:41:47.155: INFO: Got endpoints: latency-svc-svswt [188.910225ms] +Jun 6 16:41:47.175: INFO: Created: latency-svc-bspxh +Jun 6 16:41:47.208: INFO: Got endpoints: latency-svc-tqddr [235.72573ms] +Jun 6 16:41:47.230: INFO: Created: latency-svc-brdj4 +Jun 6 16:41:47.258: INFO: Got endpoints: latency-svc-ngnwr [279.156605ms] +Jun 6 16:41:47.282: INFO: Created: latency-svc-zg4kq +Jun 6 16:41:47.305: INFO: Got endpoints: latency-svc-vqgxw [320.224713ms] +Jun 6 16:41:47.327: INFO: Created: latency-svc-bzslf +Jun 6 16:41:47.356: INFO: Got endpoints: latency-svc-dbfgl [361.271328ms] +Jun 6 16:41:47.376: INFO: Created: latency-svc-tjnbc +Jun 6 16:41:47.423: INFO: Got endpoints: latency-svc-npgsk [422.069305ms] +Jun 6 16:41:47.444: INFO: Created: latency-svc-vhmp6 +Jun 6 16:41:47.456: INFO: Got endpoints: latency-svc-grcww [448.990428ms] +Jun 6 16:41:47.475: INFO: Created: latency-svc-mll4l +Jun 6 16:41:47.508: INFO: Got endpoints: latency-svc-z94km [494.187489ms] +Jun 6 16:41:47.534: INFO: Created: latency-svc-9nmjn +Jun 6 16:41:47.557: INFO: Got endpoints: latency-svc-nqjsp [535.130136ms] +Jun 6 16:41:47.581: INFO: Created: latency-svc-f9mwt +Jun 6 16:41:47.607: INFO: Got endpoints: latency-svc-pxzvt [585.385004ms] +Jun 6 16:41:47.628: INFO: Created: latency-svc-87nn6 +Jun 6 16:41:47.658: INFO: Got endpoints: latency-svc-c6ggg [623.799235ms] +Jun 6 16:41:47.678: INFO: Created: latency-svc-cqxw8 +Jun 6 16:41:47.714: INFO: Got endpoints: latency-svc-jzs2v [673.83102ms] +Jun 6 16:41:47.735: INFO: Created: latency-svc-vklgg +Jun 6 16:41:47.757: INFO: Got endpoints: latency-svc-8n8xc [710.082569ms] +Jun 6 16:41:47.789: INFO: Created: latency-svc-wg84b +Jun 6 16:41:47.804: INFO: Got endpoints: 
latency-svc-4gnqk [752.542466ms] +Jun 6 16:41:47.828: INFO: Created: latency-svc-vxcvt +Jun 6 16:41:47.859: INFO: Got endpoints: latency-svc-624xk [753.026054ms] +Jun 6 16:41:47.895: INFO: Created: latency-svc-c88cb +Jun 6 16:41:47.909: INFO: Got endpoints: latency-svc-bspxh [753.178586ms] +Jun 6 16:41:47.938: INFO: Created: latency-svc-4sq4f +Jun 6 16:41:47.956: INFO: Got endpoints: latency-svc-brdj4 [748.160682ms] +Jun 6 16:41:47.976: INFO: Created: latency-svc-wcg5q +Jun 6 16:41:48.007: INFO: Got endpoints: latency-svc-zg4kq [748.819263ms] +Jun 6 16:41:48.028: INFO: Created: latency-svc-gwrmh +Jun 6 16:41:48.055: INFO: Got endpoints: latency-svc-bzslf [749.763821ms] +Jun 6 16:41:48.401: INFO: Created: latency-svc-5fdwr +Jun 6 16:41:48.407: INFO: Got endpoints: latency-svc-vhmp6 [983.986706ms] +Jun 6 16:41:48.407: INFO: Got endpoints: latency-svc-f9mwt [849.822003ms] +Jun 6 16:41:48.407: INFO: Got endpoints: latency-svc-mll4l [951.288207ms] +Jun 6 16:41:48.407: INFO: Got endpoints: latency-svc-9nmjn [899.787687ms] +Jun 6 16:41:48.408: INFO: Got endpoints: latency-svc-tjnbc [1.051997337s] +Jun 6 16:41:48.412: INFO: Got endpoints: latency-svc-87nn6 [805.408129ms] +Jun 6 16:41:48.415: INFO: Got endpoints: latency-svc-cqxw8 [757.23838ms] +Jun 6 16:41:48.427: INFO: Created: latency-svc-5k8sx +Jun 6 16:41:48.433: INFO: Created: latency-svc-r9xwv +Jun 6 16:41:48.440: INFO: Created: latency-svc-d2x7c +Jun 6 16:41:48.447: INFO: Created: latency-svc-mh6qx +Jun 6 16:41:48.453: INFO: Got endpoints: latency-svc-vklgg [738.844398ms] +Jun 6 16:41:48.454: INFO: Created: latency-svc-sj54h +Jun 6 16:41:48.459: INFO: Created: latency-svc-t748w +Jun 6 16:41:48.465: INFO: Created: latency-svc-thcfx +Jun 6 16:41:48.471: INFO: Created: latency-svc-5zdqw +Jun 6 16:41:48.586: INFO: Got endpoints: latency-svc-wg84b [828.468235ms] +Jun 6 16:41:48.588: INFO: Got endpoints: latency-svc-vxcvt [783.354346ms] +Jun 6 16:41:48.608: INFO: Created: latency-svc-h7wxg +Jun 6 16:41:48.609: INFO: Got 
endpoints: latency-svc-c88cb [748.905753ms] +Jun 6 16:41:48.613: INFO: Created: latency-svc-8k5nb +Jun 6 16:41:48.627: INFO: Created: latency-svc-fjpxp +Jun 6 16:41:48.656: INFO: Got endpoints: latency-svc-4sq4f [747.077369ms] +Jun 6 16:41:48.682: INFO: Created: latency-svc-tcmcx +Jun 6 16:41:48.708: INFO: Got endpoints: latency-svc-wcg5q [751.7391ms] +Jun 6 16:41:48.730: INFO: Created: latency-svc-cdxx7 +Jun 6 16:41:48.757: INFO: Got endpoints: latency-svc-gwrmh [749.584632ms] +Jun 6 16:41:48.777: INFO: Created: latency-svc-jgm2f +Jun 6 16:41:48.806: INFO: Got endpoints: latency-svc-5fdwr [749.921203ms] +Jun 6 16:41:48.827: INFO: Created: latency-svc-lcb8j +Jun 6 16:41:48.855: INFO: Got endpoints: latency-svc-5k8sx [447.945207ms] +Jun 6 16:41:48.877: INFO: Created: latency-svc-w88vb +Jun 6 16:41:48.907: INFO: Got endpoints: latency-svc-r9xwv [500.04306ms] +Jun 6 16:41:48.930: INFO: Created: latency-svc-px4qd +Jun 6 16:41:49.003: INFO: Got endpoints: latency-svc-d2x7c [596.040992ms] +Jun 6 16:41:49.019: INFO: Got endpoints: latency-svc-mh6qx [610.732309ms] +Jun 6 16:41:49.029: INFO: Created: latency-svc-tfpzb +Jun 6 16:41:49.044: INFO: Created: latency-svc-n27db +Jun 6 16:41:49.058: INFO: Got endpoints: latency-svc-sj54h [650.059456ms] +Jun 6 16:41:49.085: INFO: Created: latency-svc-dkn68 +Jun 6 16:41:49.104: INFO: Got endpoints: latency-svc-t748w [691.083355ms] +Jun 6 16:41:49.120: INFO: Created: latency-svc-nqlxb +Jun 6 16:41:49.157: INFO: Got endpoints: latency-svc-thcfx [742.200151ms] +Jun 6 16:41:49.181: INFO: Created: latency-svc-wprd5 +Jun 6 16:41:49.208: INFO: Got endpoints: latency-svc-5zdqw [754.842747ms] +Jun 6 16:41:49.235: INFO: Created: latency-svc-cm7mt +Jun 6 16:41:49.258: INFO: Got endpoints: latency-svc-h7wxg [672.406542ms] +Jun 6 16:41:49.279: INFO: Created: latency-svc-ps7dh +Jun 6 16:41:49.312: INFO: Got endpoints: latency-svc-8k5nb [723.912839ms] +Jun 6 16:41:49.333: INFO: Created: latency-svc-2kh2n +Jun 6 16:41:49.356: INFO: Got endpoints: 
latency-svc-fjpxp [747.532301ms] +Jun 6 16:41:49.379: INFO: Created: latency-svc-vm9fn +Jun 6 16:41:49.408: INFO: Got endpoints: latency-svc-tcmcx [751.647938ms] +Jun 6 16:41:49.430: INFO: Created: latency-svc-zrfpq +Jun 6 16:41:49.459: INFO: Got endpoints: latency-svc-cdxx7 [751.314301ms] +Jun 6 16:41:49.482: INFO: Created: latency-svc-ndcmt +Jun 6 16:41:49.506: INFO: Got endpoints: latency-svc-jgm2f [749.296285ms] +Jun 6 16:41:49.528: INFO: Created: latency-svc-5mwn2 +Jun 6 16:41:49.555: INFO: Got endpoints: latency-svc-lcb8j [749.77297ms] +Jun 6 16:41:49.579: INFO: Created: latency-svc-gmwjn +Jun 6 16:41:49.610: INFO: Got endpoints: latency-svc-w88vb [755.263085ms] +Jun 6 16:41:49.634: INFO: Created: latency-svc-76msr +Jun 6 16:41:49.656: INFO: Got endpoints: latency-svc-px4qd [749.158881ms] +Jun 6 16:41:49.684: INFO: Created: latency-svc-4xd9l +Jun 6 16:41:49.708: INFO: Got endpoints: latency-svc-tfpzb [703.90426ms] +Jun 6 16:41:49.729: INFO: Created: latency-svc-f8w79 +Jun 6 16:41:49.756: INFO: Got endpoints: latency-svc-n27db [737.24934ms] +Jun 6 16:41:49.779: INFO: Created: latency-svc-f2rc4 +Jun 6 16:41:49.815: INFO: Got endpoints: latency-svc-dkn68 [756.426334ms] +Jun 6 16:41:49.847: INFO: Created: latency-svc-95596 +Jun 6 16:41:49.862: INFO: Got endpoints: latency-svc-nqlxb [758.582634ms] +Jun 6 16:41:49.904: INFO: Created: latency-svc-ms2np +Jun 6 16:41:49.905: INFO: Got endpoints: latency-svc-wprd5 [748.014468ms] +Jun 6 16:41:49.925: INFO: Created: latency-svc-bqp2n +Jun 6 16:41:49.957: INFO: Got endpoints: latency-svc-cm7mt [748.621077ms] +Jun 6 16:41:49.978: INFO: Created: latency-svc-f9n9t +Jun 6 16:41:50.008: INFO: Got endpoints: latency-svc-ps7dh [749.053873ms] +Jun 6 16:41:50.034: INFO: Created: latency-svc-rl88w +Jun 6 16:41:50.056: INFO: Got endpoints: latency-svc-2kh2n [743.765682ms] +Jun 6 16:41:50.078: INFO: Created: latency-svc-nwshx +Jun 6 16:41:50.111: INFO: Got endpoints: latency-svc-vm9fn [754.856431ms] +Jun 6 16:41:50.145: INFO: 
Created: latency-svc-qphvc +Jun 6 16:41:50.158: INFO: Got endpoints: latency-svc-zrfpq [750.365268ms] +Jun 6 16:41:50.177: INFO: Created: latency-svc-jxbvb +Jun 6 16:41:50.208: INFO: Got endpoints: latency-svc-ndcmt [748.360938ms] +Jun 6 16:41:50.233: INFO: Created: latency-svc-pqzml +Jun 6 16:41:50.257: INFO: Got endpoints: latency-svc-5mwn2 [750.373487ms] +Jun 6 16:41:50.277: INFO: Created: latency-svc-9g8rm +Jun 6 16:41:50.309: INFO: Got endpoints: latency-svc-gmwjn [751.308058ms] +Jun 6 16:41:50.331: INFO: Created: latency-svc-q488v +Jun 6 16:41:50.370: INFO: Got endpoints: latency-svc-76msr [759.458346ms] +Jun 6 16:41:50.389: INFO: Created: latency-svc-hp5tr +Jun 6 16:41:50.405: INFO: Got endpoints: latency-svc-4xd9l [747.980178ms] +Jun 6 16:41:50.426: INFO: Created: latency-svc-9jvps +Jun 6 16:41:50.455: INFO: Got endpoints: latency-svc-f8w79 [747.684605ms] +Jun 6 16:41:50.481: INFO: Created: latency-svc-drvsh +Jun 6 16:41:50.506: INFO: Got endpoints: latency-svc-f2rc4 [750.223421ms] +Jun 6 16:41:50.528: INFO: Created: latency-svc-d8vnt +Jun 6 16:41:50.557: INFO: Got endpoints: latency-svc-95596 [742.14005ms] +Jun 6 16:41:50.579: INFO: Created: latency-svc-gbxnb +Jun 6 16:41:50.608: INFO: Got endpoints: latency-svc-ms2np [744.349111ms] +Jun 6 16:41:50.626: INFO: Created: latency-svc-tngph +Jun 6 16:41:50.656: INFO: Got endpoints: latency-svc-bqp2n [750.830758ms] +Jun 6 16:41:50.679: INFO: Created: latency-svc-sxc54 +Jun 6 16:41:50.705: INFO: Got endpoints: latency-svc-f9n9t [747.41466ms] +Jun 6 16:41:50.729: INFO: Created: latency-svc-8jnj9 +Jun 6 16:41:50.761: INFO: Got endpoints: latency-svc-rl88w [753.298048ms] +Jun 6 16:41:50.781: INFO: Created: latency-svc-hkm92 +Jun 6 16:41:50.808: INFO: Got endpoints: latency-svc-nwshx [752.104779ms] +Jun 6 16:41:50.830: INFO: Created: latency-svc-8ntlx +Jun 6 16:41:50.856: INFO: Got endpoints: latency-svc-qphvc [744.912303ms] +Jun 6 16:41:50.879: INFO: Created: latency-svc-wjfqx +Jun 6 16:41:50.908: INFO: Got 
endpoints: latency-svc-jxbvb [749.731549ms] +Jun 6 16:41:50.932: INFO: Created: latency-svc-dwmnd +Jun 6 16:41:50.957: INFO: Got endpoints: latency-svc-pqzml [749.651529ms] +Jun 6 16:41:50.980: INFO: Created: latency-svc-zbnfr +Jun 6 16:41:51.011: INFO: Got endpoints: latency-svc-9g8rm [754.65542ms] +Jun 6 16:41:51.036: INFO: Created: latency-svc-f9sqm +Jun 6 16:41:51.061: INFO: Got endpoints: latency-svc-q488v [751.716974ms] +Jun 6 16:41:51.083: INFO: Created: latency-svc-psx8s +Jun 6 16:41:51.106: INFO: Got endpoints: latency-svc-hp5tr [736.356579ms] +Jun 6 16:41:51.130: INFO: Created: latency-svc-dkssk +Jun 6 16:41:51.160: INFO: Got endpoints: latency-svc-9jvps [754.398573ms] +Jun 6 16:41:51.181: INFO: Created: latency-svc-v6jcj +Jun 6 16:41:51.213: INFO: Got endpoints: latency-svc-drvsh [757.232359ms] +Jun 6 16:41:51.234: INFO: Created: latency-svc-bvd8j +Jun 6 16:41:51.256: INFO: Got endpoints: latency-svc-d8vnt [749.349168ms] +Jun 6 16:41:51.280: INFO: Created: latency-svc-f558z +Jun 6 16:41:51.307: INFO: Got endpoints: latency-svc-gbxnb [749.407443ms] +Jun 6 16:41:51.327: INFO: Created: latency-svc-js7mh +Jun 6 16:41:51.355: INFO: Got endpoints: latency-svc-tngph [746.552689ms] +Jun 6 16:41:51.377: INFO: Created: latency-svc-sjjwc +Jun 6 16:41:51.409: INFO: Got endpoints: latency-svc-sxc54 [752.029859ms] +Jun 6 16:41:51.428: INFO: Created: latency-svc-pt9pj +Jun 6 16:41:51.456: INFO: Got endpoints: latency-svc-8jnj9 [751.541887ms] +Jun 6 16:41:51.480: INFO: Created: latency-svc-ss8zk +Jun 6 16:41:51.507: INFO: Got endpoints: latency-svc-hkm92 [745.879856ms] +Jun 6 16:41:51.535: INFO: Created: latency-svc-2r5jr +Jun 6 16:41:51.555: INFO: Got endpoints: latency-svc-8ntlx [746.926763ms] +Jun 6 16:41:51.577: INFO: Created: latency-svc-g4j6h +Jun 6 16:41:51.607: INFO: Got endpoints: latency-svc-wjfqx [750.30847ms] +Jun 6 16:41:51.625: INFO: Created: latency-svc-nnd8j +Jun 6 16:41:51.657: INFO: Got endpoints: latency-svc-dwmnd [748.808411ms] +Jun 6 16:41:51.678: 
INFO: Created: latency-svc-lzxcr +Jun 6 16:41:51.705: INFO: Got endpoints: latency-svc-zbnfr [747.107289ms] +Jun 6 16:41:51.727: INFO: Created: latency-svc-mfxks +Jun 6 16:41:51.755: INFO: Got endpoints: latency-svc-f9sqm [743.508859ms] +Jun 6 16:41:51.775: INFO: Created: latency-svc-pldkn +Jun 6 16:41:51.804: INFO: Got endpoints: latency-svc-psx8s [743.163286ms] +Jun 6 16:41:51.828: INFO: Created: latency-svc-mt9vg +Jun 6 16:41:51.857: INFO: Got endpoints: latency-svc-dkssk [750.448575ms] +Jun 6 16:41:51.879: INFO: Created: latency-svc-ctlk6 +Jun 6 16:41:51.912: INFO: Got endpoints: latency-svc-v6jcj [752.375368ms] +Jun 6 16:41:51.936: INFO: Created: latency-svc-s4n9w +Jun 6 16:41:51.955: INFO: Got endpoints: latency-svc-bvd8j [742.216369ms] +Jun 6 16:41:51.975: INFO: Created: latency-svc-f7cvs +Jun 6 16:41:52.007: INFO: Got endpoints: latency-svc-f558z [751.04401ms] +Jun 6 16:41:52.025: INFO: Created: latency-svc-9kglr +Jun 6 16:41:52.055: INFO: Got endpoints: latency-svc-js7mh [748.45573ms] +Jun 6 16:41:52.073: INFO: Created: latency-svc-wbw6k +Jun 6 16:41:52.107: INFO: Got endpoints: latency-svc-sjjwc [751.719656ms] +Jun 6 16:41:52.127: INFO: Created: latency-svc-j2p9v +Jun 6 16:41:52.156: INFO: Got endpoints: latency-svc-pt9pj [747.08744ms] +Jun 6 16:41:52.179: INFO: Created: latency-svc-257rc +Jun 6 16:41:52.204: INFO: Got endpoints: latency-svc-ss8zk [747.837042ms] +Jun 6 16:41:52.223: INFO: Created: latency-svc-v7tmf +Jun 6 16:41:52.260: INFO: Got endpoints: latency-svc-2r5jr [752.476204ms] +Jun 6 16:41:52.280: INFO: Created: latency-svc-8d9zb +Jun 6 16:41:52.311: INFO: Got endpoints: latency-svc-g4j6h [755.988113ms] +Jun 6 16:41:52.335: INFO: Created: latency-svc-27p67 +Jun 6 16:41:52.357: INFO: Got endpoints: latency-svc-nnd8j [750.073156ms] +Jun 6 16:41:52.379: INFO: Created: latency-svc-rwjqw +Jun 6 16:41:52.406: INFO: Got endpoints: latency-svc-lzxcr [748.485183ms] +Jun 6 16:41:52.425: INFO: Created: latency-svc-cdw4p +Jun 6 16:41:52.459: INFO: Got 
endpoints: latency-svc-mfxks [754.552126ms] +Jun 6 16:41:52.478: INFO: Created: latency-svc-7vkpx +Jun 6 16:41:52.507: INFO: Got endpoints: latency-svc-pldkn [751.367132ms] +Jun 6 16:41:52.527: INFO: Created: latency-svc-sdc7z +Jun 6 16:41:52.572: INFO: Got endpoints: latency-svc-mt9vg [768.078074ms] +Jun 6 16:41:52.600: INFO: Created: latency-svc-qjvkl +Jun 6 16:41:52.613: INFO: Got endpoints: latency-svc-ctlk6 [755.894542ms] +Jun 6 16:41:52.631: INFO: Created: latency-svc-4mbtw +Jun 6 16:41:52.661: INFO: Got endpoints: latency-svc-s4n9w [748.50593ms] +Jun 6 16:41:52.681: INFO: Created: latency-svc-6hnsp +Jun 6 16:41:52.706: INFO: Got endpoints: latency-svc-f7cvs [750.393401ms] +Jun 6 16:41:52.723: INFO: Created: latency-svc-wmd9s +Jun 6 16:41:52.755: INFO: Got endpoints: latency-svc-9kglr [748.19872ms] +Jun 6 16:41:52.777: INFO: Created: latency-svc-v4mf4 +Jun 6 16:41:52.811: INFO: Got endpoints: latency-svc-wbw6k [756.070058ms] +Jun 6 16:41:52.832: INFO: Created: latency-svc-rvkhr +Jun 6 16:41:52.858: INFO: Got endpoints: latency-svc-j2p9v [751.425584ms] +Jun 6 16:41:52.880: INFO: Created: latency-svc-8jrn4 +Jun 6 16:41:52.912: INFO: Got endpoints: latency-svc-257rc [756.46859ms] +Jun 6 16:41:52.934: INFO: Created: latency-svc-vbp95 +Jun 6 16:41:52.956: INFO: Got endpoints: latency-svc-v7tmf [751.409066ms] +Jun 6 16:41:52.982: INFO: Created: latency-svc-sb87z +Jun 6 16:41:53.004: INFO: Got endpoints: latency-svc-8d9zb [744.280891ms] +Jun 6 16:41:53.028: INFO: Created: latency-svc-fhs9q +Jun 6 16:41:53.055: INFO: Got endpoints: latency-svc-27p67 [743.922473ms] +Jun 6 16:41:53.076: INFO: Created: latency-svc-xgf4j +Jun 6 16:41:53.106: INFO: Got endpoints: latency-svc-rwjqw [749.386604ms] +Jun 6 16:41:53.130: INFO: Created: latency-svc-q5cxw +Jun 6 16:41:53.156: INFO: Got endpoints: latency-svc-cdw4p [749.956873ms] +Jun 6 16:41:53.175: INFO: Created: latency-svc-tjhg4 +Jun 6 16:41:53.210: INFO: Got endpoints: latency-svc-7vkpx [750.746161ms] +Jun 6 16:41:53.232: 
INFO: Created: latency-svc-w4k9f +Jun 6 16:41:53.257: INFO: Got endpoints: latency-svc-sdc7z [749.751969ms] +Jun 6 16:41:53.275: INFO: Created: latency-svc-glb2f +Jun 6 16:41:53.307: INFO: Got endpoints: latency-svc-qjvkl [734.75738ms] +Jun 6 16:41:53.329: INFO: Created: latency-svc-6ftt9 +Jun 6 16:41:53.356: INFO: Got endpoints: latency-svc-4mbtw [742.519254ms] +Jun 6 16:41:53.376: INFO: Created: latency-svc-tnndq +Jun 6 16:41:53.405: INFO: Got endpoints: latency-svc-6hnsp [744.369944ms] +Jun 6 16:41:53.422: INFO: Created: latency-svc-j7rkv +Jun 6 16:41:53.454: INFO: Got endpoints: latency-svc-wmd9s [748.723272ms] +Jun 6 16:41:53.477: INFO: Created: latency-svc-5d6l4 +Jun 6 16:41:53.508: INFO: Got endpoints: latency-svc-v4mf4 [752.149115ms] +Jun 6 16:41:53.532: INFO: Created: latency-svc-bfzb2 +Jun 6 16:41:53.556: INFO: Got endpoints: latency-svc-rvkhr [744.107148ms] +Jun 6 16:41:53.576: INFO: Created: latency-svc-h6c5l +Jun 6 16:41:53.606: INFO: Got endpoints: latency-svc-8jrn4 [747.975656ms] +Jun 6 16:41:53.629: INFO: Created: latency-svc-7qdwv +Jun 6 16:41:53.655: INFO: Got endpoints: latency-svc-vbp95 [742.167021ms] +Jun 6 16:41:53.676: INFO: Created: latency-svc-5g4jr +Jun 6 16:41:53.705: INFO: Got endpoints: latency-svc-sb87z [748.697361ms] +Jun 6 16:41:53.730: INFO: Created: latency-svc-k24zk +Jun 6 16:41:53.758: INFO: Got endpoints: latency-svc-fhs9q [754.108137ms] +Jun 6 16:41:53.782: INFO: Created: latency-svc-nns6s +Jun 6 16:41:53.808: INFO: Got endpoints: latency-svc-xgf4j [752.424282ms] +Jun 6 16:41:53.827: INFO: Created: latency-svc-s42bg +Jun 6 16:41:53.856: INFO: Got endpoints: latency-svc-q5cxw [749.127722ms] +Jun 6 16:41:53.877: INFO: Created: latency-svc-4w82l +Jun 6 16:41:53.906: INFO: Got endpoints: latency-svc-tjhg4 [749.590251ms] +Jun 6 16:41:53.964: INFO: Created: latency-svc-7n8xx +Jun 6 16:41:53.974: INFO: Got endpoints: latency-svc-w4k9f [764.080499ms] +Jun 6 16:41:54.013: INFO: Got endpoints: latency-svc-glb2f [755.971452ms] +Jun 6 
16:41:54.020: INFO: Created: latency-svc-wtcj7 +Jun 6 16:41:54.032: INFO: Created: latency-svc-bm7mm +Jun 6 16:41:54.062: INFO: Got endpoints: latency-svc-6ftt9 [754.294373ms] +Jun 6 16:41:54.080: INFO: Created: latency-svc-2rdlt +Jun 6 16:41:54.106: INFO: Got endpoints: latency-svc-tnndq [750.032712ms] +Jun 6 16:41:54.129: INFO: Created: latency-svc-khs2b +Jun 6 16:41:54.195: INFO: Got endpoints: latency-svc-j7rkv [789.795808ms] +Jun 6 16:41:54.206: INFO: Got endpoints: latency-svc-5d6l4 [751.854183ms] +Jun 6 16:41:54.220: INFO: Created: latency-svc-2p722 +Jun 6 16:41:54.225: INFO: Created: latency-svc-hpp7l +Jun 6 16:41:54.255: INFO: Got endpoints: latency-svc-bfzb2 [747.32059ms] +Jun 6 16:41:54.277: INFO: Created: latency-svc-l7ckw +Jun 6 16:41:54.310: INFO: Got endpoints: latency-svc-h6c5l [754.702188ms] +Jun 6 16:41:54.332: INFO: Created: latency-svc-224xw +Jun 6 16:41:54.359: INFO: Got endpoints: latency-svc-7qdwv [752.359074ms] +Jun 6 16:41:54.381: INFO: Created: latency-svc-k8tw4 +Jun 6 16:41:54.406: INFO: Got endpoints: latency-svc-5g4jr [751.219929ms] +Jun 6 16:41:54.430: INFO: Created: latency-svc-dtldm +Jun 6 16:41:54.455: INFO: Got endpoints: latency-svc-k24zk [749.821544ms] +Jun 6 16:41:54.473: INFO: Created: latency-svc-mr2z7 +Jun 6 16:41:54.505: INFO: Got endpoints: latency-svc-nns6s [746.620416ms] +Jun 6 16:41:54.526: INFO: Created: latency-svc-9b2rf +Jun 6 16:41:54.555: INFO: Got endpoints: latency-svc-s42bg [747.229566ms] +Jun 6 16:41:54.573: INFO: Created: latency-svc-r729w +Jun 6 16:41:54.607: INFO: Got endpoints: latency-svc-4w82l [751.250954ms] +Jun 6 16:41:54.661: INFO: Got endpoints: latency-svc-7n8xx [755.701394ms] +Jun 6 16:41:54.706: INFO: Got endpoints: latency-svc-wtcj7 [731.222124ms] +Jun 6 16:41:54.755: INFO: Got endpoints: latency-svc-bm7mm [741.668484ms] +Jun 6 16:41:54.809: INFO: Got endpoints: latency-svc-2rdlt [747.111791ms] +Jun 6 16:41:54.855: INFO: Got endpoints: latency-svc-khs2b [748.773028ms] +Jun 6 16:41:54.912: INFO: Got 
endpoints: latency-svc-2p722 [716.78592ms] +Jun 6 16:41:54.956: INFO: Got endpoints: latency-svc-hpp7l [749.050216ms] +Jun 6 16:41:55.006: INFO: Got endpoints: latency-svc-l7ckw [750.825722ms] +Jun 6 16:41:55.061: INFO: Got endpoints: latency-svc-224xw [750.139261ms] +Jun 6 16:41:55.112: INFO: Got endpoints: latency-svc-k8tw4 [752.39288ms] +Jun 6 16:41:55.160: INFO: Got endpoints: latency-svc-dtldm [753.196484ms] +Jun 6 16:41:55.206: INFO: Got endpoints: latency-svc-mr2z7 [750.599645ms] +Jun 6 16:41:55.257: INFO: Got endpoints: latency-svc-9b2rf [751.299394ms] +Jun 6 16:41:55.307: INFO: Got endpoints: latency-svc-r729w [751.393992ms] +Jun 6 16:41:55.307: INFO: Latencies: [25.462377ms 29.219437ms 34.359722ms 45.47011ms 48.046639ms 59.654168ms 62.203447ms 72.22465ms 74.38364ms 86.162134ms 89.748353ms 99.823306ms 101.433845ms 101.792048ms 105.965921ms 107.163418ms 109.483143ms 114.7503ms 116.121642ms 119.374933ms 120.972358ms 132.191106ms 133.595772ms 135.812084ms 136.284891ms 136.655059ms 136.814964ms 137.794965ms 139.851913ms 139.868957ms 140.584652ms 143.461915ms 144.021811ms 146.002049ms 146.217296ms 158.643108ms 188.910225ms 235.72573ms 279.156605ms 320.224713ms 361.271328ms 422.069305ms 447.945207ms 448.990428ms 494.187489ms 500.04306ms 535.130136ms 585.385004ms 596.040992ms 610.732309ms 623.799235ms 650.059456ms 672.406542ms 673.83102ms 691.083355ms 703.90426ms 710.082569ms 716.78592ms 723.912839ms 731.222124ms 734.75738ms 736.356579ms 737.24934ms 738.844398ms 741.668484ms 742.14005ms 742.167021ms 742.200151ms 742.216369ms 742.519254ms 743.163286ms 743.508859ms 743.765682ms 743.922473ms 744.107148ms 744.280891ms 744.349111ms 744.369944ms 744.912303ms 745.879856ms 746.552689ms 746.620416ms 746.926763ms 747.077369ms 747.08744ms 747.107289ms 747.111791ms 747.229566ms 747.32059ms 747.41466ms 747.532301ms 747.684605ms 747.837042ms 747.975656ms 747.980178ms 748.014468ms 748.160682ms 748.19872ms 748.360938ms 748.45573ms 748.485183ms 748.50593ms 748.621077ms 
748.697361ms 748.723272ms 748.773028ms 748.808411ms 748.819263ms 748.905753ms 749.050216ms 749.053873ms 749.127722ms 749.158881ms 749.296285ms 749.349168ms 749.386604ms 749.407443ms 749.584632ms 749.590251ms 749.651529ms 749.731549ms 749.751969ms 749.763821ms 749.77297ms 749.821544ms 749.921203ms 749.956873ms 750.032712ms 750.073156ms 750.139261ms 750.223421ms 750.30847ms 750.365268ms 750.373487ms 750.393401ms 750.448575ms 750.599645ms 750.746161ms 750.825722ms 750.830758ms 751.04401ms 751.219929ms 751.250954ms 751.299394ms 751.308058ms 751.314301ms 751.367132ms 751.393992ms 751.409066ms 751.425584ms 751.541887ms 751.647938ms 751.716974ms 751.719656ms 751.7391ms 751.854183ms 752.029859ms 752.104779ms 752.149115ms 752.359074ms 752.375368ms 752.39288ms 752.424282ms 752.476204ms 752.542466ms 753.026054ms 753.178586ms 753.196484ms 753.298048ms 754.108137ms 754.294373ms 754.398573ms 754.552126ms 754.65542ms 754.702188ms 754.842747ms 754.856431ms 755.263085ms 755.701394ms 755.894542ms 755.971452ms 755.988113ms 756.070058ms 756.426334ms 756.46859ms 757.232359ms 757.23838ms 758.582634ms 759.458346ms 764.080499ms 768.078074ms 783.354346ms 789.795808ms 805.408129ms 828.468235ms 849.822003ms 899.787687ms 951.288207ms 983.986706ms 1.051997337s] +Jun 6 16:41:55.307: INFO: 50 %ile: 748.485183ms +Jun 6 16:41:55.307: INFO: 90 %ile: 755.971452ms +Jun 6 16:41:55.307: INFO: 99 %ile: 983.986706ms +Jun 6 16:41:55.307: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:41:55.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svc-latency-4814" for this suite. 
+Jun 6 16:42:13.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:42:13.557: INFO: namespace svc-latency-4814 deletion completed in 18.242497957s + +• [SLOW TEST:30.101 seconds] +[sig-network] Service endpoints latency +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should not be very high [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Networking + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:42:13.560: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Performing setup for networking test in namespace pod-network-test-2635 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Jun 6 16:42:13.661: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +STEP: Creating test pods +Jun 6 16:42:35.809: INFO: 
ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.2.0.182 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2635 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 16:42:35.809: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 16:42:37.031: INFO: Found all expected endpoints: [netserver-0] +Jun 6 16:42:37.039: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.2.1.104 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2635 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 6 16:42:37.039: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +Jun 6 16:42:38.253: INFO: Found all expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:42:38.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-2635" for this suite. 
+Jun 6 16:43:02.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:43:02.500: INFO: namespace pod-network-test-2635 deletion completed in 24.239708717s + +• [SLOW TEST:48.940 seconds] +[sig-network] Networking +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 + Granular Checks: Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:43:02.504: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted 
+STEP: Gathering metrics +W0606 16:43:08.659091 15 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. +Jun 6 16:43:08.659: INFO: For apiserver_request_total: +For apiserver_request_latencies_summary: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:43:08.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-9249" for this suite. 
+Jun 6 16:43:14.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:43:14.950: INFO: namespace gc-9249 deletion completed in 6.285000974s + +• [SLOW TEST:12.447 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[k8s.io] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:43:14.951: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod liveness-exec in namespace container-probe-703 +Jun 6 16:43:19.069: INFO: Started pod 
liveness-exec in namespace container-probe-703 +STEP: checking the pod's current state and verifying that restartCount is present +Jun 6 16:43:19.077: INFO: Initial restart count of pod liveness-exec is 0 +Jun 6 16:44:11.324: INFO: Restart count of pod container-probe-703/liveness-exec is now 1 (52.247119157s elapsed) +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:44:11.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-703" for this suite. +Jun 6 16:44:17.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:44:17.620: INFO: namespace container-probe-703 deletion completed in 6.263431883s + +• [SLOW TEST:62.669 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:44:17.621: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api 
object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0644 on node default medium +Jun 6 16:44:17.747: INFO: Waiting up to 5m0s for pod "pod-5358add4-887a-11e9-b3bf-0e7bbe1a64f6" in namespace "emptydir-7011" to be "success or failure" +Jun 6 16:44:17.757: INFO: Pod "pod-5358add4-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.798124ms +Jun 6 16:44:19.769: INFO: Pod "pod-5358add4-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021304269s +Jun 6 16:44:21.776: INFO: Pod "pod-5358add4-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028745029s +STEP: Saw pod success +Jun 6 16:44:21.776: INFO: Pod "pod-5358add4-887a-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:44:21.783: INFO: Trying to get logs from node cncf-1 pod pod-5358add4-887a-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 16:44:21.836: INFO: Waiting for pod pod-5358add4-887a-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:44:21.844: INFO: Pod pod-5358add4-887a-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:44:21.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7011" for this suite. 
+Jun 6 16:44:27.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:44:28.148: INFO: namespace emptydir-7011 deletion completed in 6.295620204s + +• [SLOW TEST:10.527 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl replace + should update a single-container pod's image [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:44:28.149: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl replace + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1619 +[It] should update a single-container pod's image [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 6 16:44:28.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-5315' +Jun 6 16:44:28.428: INFO: stderr: "" +Jun 6 16:44:28.428: INFO: stdout: "pod/e2e-test-nginx-pod created\n" +STEP: verifying the pod e2e-test-nginx-pod is running +STEP: verifying the pod e2e-test-nginx-pod was created +Jun 6 16:44:33.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pod e2e-test-nginx-pod --namespace=kubectl-5315 -o json' +Jun 6 16:44:33.614: INFO: stderr: "" +Jun 6 16:44:33.614: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/podIP\": \"10.2.0.191/32\"\n },\n \"creationTimestamp\": \"2019-06-06T16:44:28Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-5315\",\n \"resourceVersion\": \"3959619876\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5315/pods/e2e-test-nginx-pod\",\n \"uid\": \"59b5c2bb-887a-11e9-9995-4ad9032ea524\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"Always\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-2v5fp\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"cncf-2\",\n \"priority\": 0,\n \"restartPolicy\": 
\"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-2v5fp\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-2v5fp\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-06-06T16:44:28Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-06-06T16:44:31Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-06-06T16:44:31Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-06-06T16:44:28Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://85e5e8ae424c05252f9f6dcfb6a861d96870ee233017af8afd5133b5212b5bb5\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-06-06T16:44:30Z\"\n }\n }\n }\n ],\n \"hostIP\": \"51.68.41.114\",\n \"phase\": \"Running\",\n \"podIP\": \"10.2.0.191\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2019-06-06T16:44:28Z\"\n }\n}\n" +STEP: replace the image in the pod +Jun 6 16:44:33.614: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-489975799 replace -f - --namespace=kubectl-5315' +Jun 6 16:44:33.867: INFO: stderr: "" +Jun 6 16:44:33.867: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" +STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 +[AfterEach] [k8s.io] Kubectl replace + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1624 +Jun 6 16:44:33.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete pods e2e-test-nginx-pod --namespace=kubectl-5315' +Jun 6 16:44:36.932: INFO: stderr: "" +Jun 6 16:44:36.932: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:44:36.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5315" for this suite. 
+Jun 6 16:44:42.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:44:43.243: INFO: namespace kubectl-5315 deletion completed in 6.303364182s + +• [SLOW TEST:15.094 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl replace + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should update a single-container pod's image [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:44:43.244: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name cm-test-opt-del-62a06df0-887a-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating configMap with name cm-test-opt-upd-62a06e65-887a-11e9-b3bf-0e7bbe1a64f6 +STEP: 
Creating the pod +STEP: Deleting configmap cm-test-opt-del-62a06df0-887a-11e9-b3bf-0e7bbe1a64f6 +STEP: Updating configmap cm-test-opt-upd-62a06e65-887a-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating configMap with name cm-test-opt-create-62a06eaa-887a-11e9-b3bf-0e7bbe1a64f6 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:44:53.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8792" for this suite. +Jun 6 16:45:15.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:45:15.896: INFO: namespace configmap-8792 deletion completed in 22.263819552s + +• [SLOW TEST:32.652 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:45:15.896: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api 
object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:45:20.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-736" for this suite. +Jun 6 16:46:00.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:46:00.322: INFO: namespace kubelet-test-736 deletion completed in 40.261286977s + +• [SLOW TEST:44.426 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when scheduling a busybox Pod with hostAliases + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl expose + should create services for rc [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] 
Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:46:00.323: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should create services for rc [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating Redis RC +Jun 6 16:46:00.424: INFO: namespace kubectl-8005 +Jun 6 16:46:00.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 create -f - --namespace=kubectl-8005' +Jun 6 16:46:00.654: INFO: stderr: "" +Jun 6 16:46:00.655: INFO: stdout: "replicationcontroller/redis-master created\n" +STEP: Waiting for Redis master to start. +Jun 6 16:46:01.665: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 16:46:01.665: INFO: Found 0 / 1 +Jun 6 16:46:02.664: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 16:46:02.664: INFO: Found 0 / 1 +Jun 6 16:46:03.664: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 16:46:03.664: INFO: Found 1 / 1 +Jun 6 16:46:03.664: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Jun 6 16:46:03.671: INFO: Selector matched 1 pods for map[app:redis] +Jun 6 16:46:03.672: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Jun 6 16:46:03.672: INFO: wait on redis-master startup in kubectl-8005 +Jun 6 16:46:03.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 logs redis-master-xfrsm redis-master --namespace=kubectl-8005' +Jun 6 16:46:03.997: INFO: stderr: "" +Jun 6 16:46:03.997: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 06 Jun 16:46:02.773 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Jun 16:46:02.773 # Server started, Redis version 3.2.12\n1:M 06 Jun 16:46:02.773 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 Jun 16:46:02.773 * The server is now ready to accept connections on port 6379\n" +STEP: exposing RC +Jun 6 16:46:03.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8005' +Jun 6 16:46:04.179: INFO: stderr: "" +Jun 6 16:46:04.179: INFO: stdout: "service/rm2 exposed\n" +Jun 6 16:46:04.188: INFO: Service rm2 in namespace kubectl-8005 found. 
+STEP: exposing service +Jun 6 16:46:06.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8005' +Jun 6 16:46:06.359: INFO: stderr: "" +Jun 6 16:46:06.359: INFO: stdout: "service/rm3 exposed\n" +Jun 6 16:46:06.370: INFO: Service rm3 in namespace kubectl-8005 found. +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:46:08.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8005" for this suite. +Jun 6 16:46:30.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:46:30.668: INFO: namespace kubectl-8005 deletion completed in 22.276637976s + +• [SLOW TEST:30.345 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl expose + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create services for rc [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Subpath + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:46:30.669: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod pod-subpath-test-projected-tjjx +STEP: Creating a pod to test atomic-volume-subpath +Jun 6 16:46:30.826: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-tjjx" in namespace "subpath-6090" to be "success or failure" +Jun 6 16:46:30.833: INFO: Pod "pod-subpath-test-projected-tjjx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.483384ms +Jun 6 16:46:32.841: INFO: Pod "pod-subpath-test-projected-tjjx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014785539s +Jun 6 16:46:34.850: INFO: Pod "pod-subpath-test-projected-tjjx": Phase="Running", Reason="", readiness=true. Elapsed: 4.023608222s +Jun 6 16:46:36.858: INFO: Pod "pod-subpath-test-projected-tjjx": Phase="Running", Reason="", readiness=true. Elapsed: 6.031540605s +Jun 6 16:46:38.866: INFO: Pod "pod-subpath-test-projected-tjjx": Phase="Running", Reason="", readiness=true. Elapsed: 8.039405336s +Jun 6 16:46:40.873: INFO: Pod "pod-subpath-test-projected-tjjx": Phase="Running", Reason="", readiness=true. Elapsed: 10.047161827s +Jun 6 16:46:42.882: INFO: Pod "pod-subpath-test-projected-tjjx": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.055487665s +Jun 6 16:46:44.890: INFO: Pod "pod-subpath-test-projected-tjjx": Phase="Running", Reason="", readiness=true. Elapsed: 14.063953828s +Jun 6 16:46:46.909: INFO: Pod "pod-subpath-test-projected-tjjx": Phase="Running", Reason="", readiness=true. Elapsed: 16.083210759s +Jun 6 16:46:48.919: INFO: Pod "pod-subpath-test-projected-tjjx": Phase="Running", Reason="", readiness=true. Elapsed: 18.092553992s +Jun 6 16:46:50.928: INFO: Pod "pod-subpath-test-projected-tjjx": Phase="Running", Reason="", readiness=true. Elapsed: 20.10227511s +Jun 6 16:46:52.938: INFO: Pod "pod-subpath-test-projected-tjjx": Phase="Running", Reason="", readiness=true. Elapsed: 22.111296479s +Jun 6 16:46:54.946: INFO: Pod "pod-subpath-test-projected-tjjx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.12015697s +STEP: Saw pod success +Jun 6 16:46:54.947: INFO: Pod "pod-subpath-test-projected-tjjx" satisfied condition "success or failure" +Jun 6 16:46:54.954: INFO: Trying to get logs from node cncf-2 pod pod-subpath-test-projected-tjjx container test-container-subpath-projected-tjjx: +STEP: delete the pod +Jun 6 16:46:55.002: INFO: Waiting for pod pod-subpath-test-projected-tjjx to disappear +Jun 6 16:46:55.010: INFO: Pod pod-subpath-test-projected-tjjx no longer exists +STEP: Deleting pod pod-subpath-test-projected-tjjx +Jun 6 16:46:55.010: INFO: Deleting pod "pod-subpath-test-projected-tjjx" in namespace "subpath-6090" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:46:55.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-6090" for this suite. 
+Jun 6 16:47:01.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:47:01.284: INFO: namespace subpath-6090 deletion completed in 6.254684529s + +• [SLOW TEST:30.615 seconds] +[sig-storage] Subpath +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run deployment + should create a deployment from an image [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:47:01.284: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl run deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1455 +[It] should create a deployment 
from an image [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 6 16:47:01.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=kubectl-2331' +Jun 6 16:47:01.537: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Jun 6 16:47:01.537: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" +STEP: verifying the deployment e2e-test-nginx-deployment was created +STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created +[AfterEach] [k8s.io] Kubectl run deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 +Jun 6 16:47:03.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete deployment e2e-test-nginx-deployment --namespace=kubectl-2331' +Jun 6 16:47:03.711: INFO: stderr: "" +Jun 6 16:47:03.711: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:47:03.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2331" for this suite. 
+Jun 6 16:47:09.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:47:09.994: INFO: namespace kubectl-2331 deletion completed in 6.273787687s + +• [SLOW TEST:8.710 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl run deployment + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create a deployment from an image [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:47:09.995: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating projection with secret that has name 
projected-secret-test-ba15930e-887a-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume secrets +Jun 6 16:47:10.118: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ba171652-887a-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-8625" to be "success or failure" +Jun 6 16:47:10.128: INFO: Pod "pod-projected-secrets-ba171652-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.571165ms +Jun 6 16:47:12.138: INFO: Pod "pod-projected-secrets-ba171652-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019378876s +Jun 6 16:47:14.149: INFO: Pod "pod-projected-secrets-ba171652-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030442161s +STEP: Saw pod success +Jun 6 16:47:14.149: INFO: Pod "pod-projected-secrets-ba171652-887a-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:47:14.157: INFO: Trying to get logs from node cncf-2 pod pod-projected-secrets-ba171652-887a-11e9-b3bf-0e7bbe1a64f6 container projected-secret-volume-test: +STEP: delete the pod +Jun 6 16:47:14.198: INFO: Waiting for pod pod-projected-secrets-ba171652-887a-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:47:14.207: INFO: Pod pod-projected-secrets-ba171652-887a-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:47:14.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8625" for this suite. 
+Jun 6 16:47:20.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:47:20.484: INFO: namespace projected-8625 deletion completed in 6.267559842s + +• [SLOW TEST:10.489 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:47:20.485: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod pod-subpath-test-downwardapi-bpl9 +STEP: Creating a pod to test atomic-volume-subpath +Jun 6 
16:47:20.626: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-bpl9" in namespace "subpath-1399" to be "success or failure" +Jun 6 16:47:20.635: INFO: Pod "pod-subpath-test-downwardapi-bpl9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.051885ms +Jun 6 16:47:22.646: INFO: Pod "pod-subpath-test-downwardapi-bpl9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019984947s +Jun 6 16:47:24.656: INFO: Pod "pod-subpath-test-downwardapi-bpl9": Phase="Running", Reason="", readiness=true. Elapsed: 4.029907515s +Jun 6 16:47:26.665: INFO: Pod "pod-subpath-test-downwardapi-bpl9": Phase="Running", Reason="", readiness=true. Elapsed: 6.039152604s +Jun 6 16:47:28.680: INFO: Pod "pod-subpath-test-downwardapi-bpl9": Phase="Running", Reason="", readiness=true. Elapsed: 8.053768252s +Jun 6 16:47:30.690: INFO: Pod "pod-subpath-test-downwardapi-bpl9": Phase="Running", Reason="", readiness=true. Elapsed: 10.063732293s +Jun 6 16:47:32.698: INFO: Pod "pod-subpath-test-downwardapi-bpl9": Phase="Running", Reason="", readiness=true. Elapsed: 12.072416214s +Jun 6 16:47:34.707: INFO: Pod "pod-subpath-test-downwardapi-bpl9": Phase="Running", Reason="", readiness=true. Elapsed: 14.080887066s +Jun 6 16:47:36.715: INFO: Pod "pod-subpath-test-downwardapi-bpl9": Phase="Running", Reason="", readiness=true. Elapsed: 16.089313136s +Jun 6 16:47:38.724: INFO: Pod "pod-subpath-test-downwardapi-bpl9": Phase="Running", Reason="", readiness=true. Elapsed: 18.097489516s +Jun 6 16:47:40.732: INFO: Pod "pod-subpath-test-downwardapi-bpl9": Phase="Running", Reason="", readiness=true. Elapsed: 20.106439375s +Jun 6 16:47:42.738: INFO: Pod "pod-subpath-test-downwardapi-bpl9": Phase="Running", Reason="", readiness=true. Elapsed: 22.112388258s +Jun 6 16:47:44.747: INFO: Pod "pod-subpath-test-downwardapi-bpl9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.12127408s +STEP: Saw pod success +Jun 6 16:47:44.748: INFO: Pod "pod-subpath-test-downwardapi-bpl9" satisfied condition "success or failure" +Jun 6 16:47:44.753: INFO: Trying to get logs from node cncf-1 pod pod-subpath-test-downwardapi-bpl9 container test-container-subpath-downwardapi-bpl9: +STEP: delete the pod +Jun 6 16:47:44.797: INFO: Waiting for pod pod-subpath-test-downwardapi-bpl9 to disappear +Jun 6 16:47:44.803: INFO: Pod pod-subpath-test-downwardapi-bpl9 no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-bpl9 +Jun 6 16:47:44.803: INFO: Deleting pod "pod-subpath-test-downwardapi-bpl9" in namespace "subpath-1399" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:47:44.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-1399" for this suite. +Jun 6 16:47:50.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:47:51.102: INFO: namespace subpath-1399 deletion completed in 6.285868043s + +• [SLOW TEST:30.617 seconds] +[sig-storage] Subpath +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:47:51.102: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-map-d2986837-887a-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume configMaps +Jun 6 16:47:51.242: INFO: Waiting up to 5m0s for pod "pod-configmaps-d29a68a8-887a-11e9-b3bf-0e7bbe1a64f6" in namespace "configmap-8256" to be "success or failure" +Jun 6 16:47:51.250: INFO: Pod "pod-configmaps-d29a68a8-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.662821ms +Jun 6 16:47:53.259: INFO: Pod "pod-configmaps-d29a68a8-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.01724354s +STEP: Saw pod success +Jun 6 16:47:53.259: INFO: Pod "pod-configmaps-d29a68a8-887a-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:47:53.266: INFO: Trying to get logs from node cncf-2 pod pod-configmaps-d29a68a8-887a-11e9-b3bf-0e7bbe1a64f6 container configmap-volume-test: +STEP: delete the pod +Jun 6 16:47:53.304: INFO: Waiting for pod pod-configmaps-d29a68a8-887a-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:47:53.310: INFO: Pod pod-configmaps-d29a68a8-887a-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:47:53.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8256" for this suite. +Jun 6 16:47:59.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:47:59.571: INFO: namespace configmap-8256 deletion completed in 6.255041306s + +• [SLOW TEST:8.469 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Garbage collector + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:47:59.573: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete pods created by rc when not orphaning [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the rc +STEP: delete the rc +STEP: wait for all pods to be garbage collected +STEP: Gathering metrics +W0606 16:48:09.747597 15 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. +Jun 6 16:48:09.747: INFO: For apiserver_request_total: +For apiserver_request_latencies_summary: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:48:09.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-4329" for this suite. +Jun 6 16:48:15.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:48:16.020: INFO: namespace gc-4329 deletion completed in 6.265083413s + +• [SLOW TEST:16.447 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should delete pods created by rc when not orphaning [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:48:16.020: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name projected-configmap-test-volume-map-e172a1ac-887a-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume configMaps +Jun 6 16:48:16.156: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e1745ee2-887a-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-5850" to be "success or failure" +Jun 6 16:48:16.168: INFO: Pod "pod-projected-configmaps-e1745ee2-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.61009ms +Jun 6 16:48:18.176: INFO: Pod "pod-projected-configmaps-e1745ee2-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020239234s +Jun 6 16:48:20.185: INFO: Pod "pod-projected-configmaps-e1745ee2-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029215111s +STEP: Saw pod success +Jun 6 16:48:20.185: INFO: Pod "pod-projected-configmaps-e1745ee2-887a-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:48:20.194: INFO: Trying to get logs from node cncf-1 pod pod-projected-configmaps-e1745ee2-887a-11e9-b3bf-0e7bbe1a64f6 container projected-configmap-volume-test: +STEP: delete the pod +Jun 6 16:48:20.242: INFO: Waiting for pod pod-projected-configmaps-e1745ee2-887a-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:48:20.248: INFO: Pod pod-projected-configmaps-e1745ee2-887a-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:48:20.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5850" for this suite. 
+Jun 6 16:48:26.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:48:26.490: INFO: namespace projected-5850 deletion completed in 6.233297764s + +• [SLOW TEST:10.470 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSS +------------------------------ +[k8s.io] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:48:26.492: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test substitution in container's args +Jun 6 16:48:26.583: INFO: Waiting up to 5m0s for pod "var-expansion-e7aba01e-887a-11e9-b3bf-0e7bbe1a64f6" in namespace "var-expansion-2615" to be "success or failure" +Jun 6 16:48:26.589: INFO: Pod 
"var-expansion-e7aba01e-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.992861ms +Jun 6 16:48:28.596: INFO: Pod "var-expansion-e7aba01e-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012585818s +Jun 6 16:48:30.604: INFO: Pod "var-expansion-e7aba01e-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021027262s +STEP: Saw pod success +Jun 6 16:48:30.604: INFO: Pod "var-expansion-e7aba01e-887a-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:48:30.611: INFO: Trying to get logs from node cncf-2 pod var-expansion-e7aba01e-887a-11e9-b3bf-0e7bbe1a64f6 container dapi-container: +STEP: delete the pod +Jun 6 16:48:30.643: INFO: Waiting for pod var-expansion-e7aba01e-887a-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:48:30.650: INFO: Pod var-expansion-e7aba01e-887a-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:48:30.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-2615" for this suite. 
+Jun 6 16:48:36.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:48:36.926: INFO: namespace var-expansion-2615 deletion completed in 6.269013136s + +• [SLOW TEST:10.434 seconds] +[k8s.io] Variable Expansion +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:48:36.927: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Jun 6 16:48:37.037: INFO: Waiting up to 5m0s for pod "pod-ede607eb-887a-11e9-b3bf-0e7bbe1a64f6" in namespace "emptydir-9927" to be "success or failure" +Jun 6 16:48:37.046: INFO: Pod "pod-ede607eb-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", 
readiness=false. Elapsed: 8.62859ms +Jun 6 16:48:39.054: INFO: Pod "pod-ede607eb-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017546191s +Jun 6 16:48:41.062: INFO: Pod "pod-ede607eb-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025433152s +STEP: Saw pod success +Jun 6 16:48:41.062: INFO: Pod "pod-ede607eb-887a-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:48:41.068: INFO: Trying to get logs from node cncf-1 pod pod-ede607eb-887a-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 16:48:41.096: INFO: Waiting for pod pod-ede607eb-887a-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:48:41.101: INFO: Pod pod-ede607eb-887a-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:48:41.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9927" for this suite. 
+Jun 6 16:48:47.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:48:47.394: INFO: namespace emptydir-9927 deletion completed in 6.286347199s + +• [SLOW TEST:10.468 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[k8s.io] Docker Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:48:47.394: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test override all +Jun 6 16:48:47.509: INFO: Waiting up to 5m0s for pod "client-containers-f42403b2-887a-11e9-b3bf-0e7bbe1a64f6" in namespace "containers-4994" to be "success or failure" +Jun 6 16:48:47.517: INFO: Pod "client-containers-f42403b2-887a-11e9-b3bf-0e7bbe1a64f6": 
Phase="Pending", Reason="", readiness=false. Elapsed: 8.200377ms +Jun 6 16:48:49.527: INFO: Pod "client-containers-f42403b2-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018301795s +Jun 6 16:48:51.536: INFO: Pod "client-containers-f42403b2-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027391375s +STEP: Saw pod success +Jun 6 16:48:51.536: INFO: Pod "client-containers-f42403b2-887a-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:48:51.544: INFO: Trying to get logs from node cncf-2 pod client-containers-f42403b2-887a-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 16:48:51.586: INFO: Waiting for pod client-containers-f42403b2-887a-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:48:51.592: INFO: Pod client-containers-f42403b2-887a-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:48:51.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-4994" for this suite. 
+Jun 6 16:48:57.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:48:57.886: INFO: namespace containers-4994 deletion completed in 6.283660519s + +• [SLOW TEST:10.492 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-api-machinery] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Secrets + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:48:57.886: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating secret secrets-5556/secret-test-fa68879f-887a-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume secrets +Jun 6 16:48:58.032: INFO: Waiting up to 5m0s for pod "pod-configmaps-fa6a2375-887a-11e9-b3bf-0e7bbe1a64f6" in namespace "secrets-5556" to be "success or failure" +Jun 6 16:48:58.046: INFO: Pod 
"pod-configmaps-fa6a2375-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.782579ms +Jun 6 16:49:00.053: INFO: Pod "pod-configmaps-fa6a2375-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021240751s +Jun 6 16:49:02.352: INFO: Pod "pod-configmaps-fa6a2375-887a-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.319798774s +STEP: Saw pod success +Jun 6 16:49:02.352: INFO: Pod "pod-configmaps-fa6a2375-887a-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:49:02.359: INFO: Trying to get logs from node cncf-1 pod pod-configmaps-fa6a2375-887a-11e9-b3bf-0e7bbe1a64f6 container env-test: +STEP: delete the pod +Jun 6 16:49:02.402: INFO: Waiting for pod pod-configmaps-fa6a2375-887a-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:49:02.408: INFO: Pod pod-configmaps-fa6a2375-887a-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-api-machinery] Secrets + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:49:02.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5556" for this suite. 
+Jun 6 16:49:08.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:49:08.681: INFO: namespace secrets-5556 deletion completed in 6.264056015s + +• [SLOW TEST:10.795 seconds] +[sig-api-machinery] Secrets +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] ReplicationController + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:49:08.682: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Given a Pod with a 'name' label pod-adoption is created +STEP: When a replication controller with a matching selector is created +STEP: Then the orphan pod is adopted +[AfterEach] [sig-apps] ReplicationController + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:49:13.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-8184" for this suite. +Jun 6 16:49:35.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:49:36.126: INFO: namespace replication-controller-8184 deletion completed in 22.266544221s + +• [SLOW TEST:27.444 seconds] +[sig-apps] ReplicationController +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should adopt matching pods on creation [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:49:36.127: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] should be restarted with a /healthz http liveness probe 
[NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod liveness-http in namespace container-probe-5877 +Jun 6 16:49:40.411: INFO: Started pod liveness-http in namespace container-probe-5877 +STEP: checking the pod's current state and verifying that restartCount is present +Jun 6 16:49:40.419: INFO: Initial restart count of pod liveness-http is 0 +Jun 6 16:50:02.531: INFO: Restart count of pod container-probe-5877/liveness-http is now 1 (22.111771885s elapsed) +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:50:02.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-5877" for this suite. +Jun 6 16:50:08.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:50:09.129: INFO: namespace container-probe-5877 deletion completed in 6.553488281s + +• [SLOW TEST:33.002 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 
+[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:50:09.129: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating 50 configmaps +STEP: Creating RC which spawns configmap-volume pods +Jun 6 16:50:09.835: INFO: Pod name wrapped-volume-race-25339114-887b-11e9-b3bf-0e7bbe1a64f6: Found 0 pods out of 5 +Jun 6 16:50:14.851: INFO: Pod name wrapped-volume-race-25339114-887b-11e9-b3bf-0e7bbe1a64f6: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-25339114-887b-11e9-b3bf-0e7bbe1a64f6 in namespace emptydir-wrapper-4762, will wait for the garbage collector to delete the pods +Jun 6 16:50:33.013: INFO: Deleting ReplicationController wrapped-volume-race-25339114-887b-11e9-b3bf-0e7bbe1a64f6 took: 13.701502ms +Jun 6 16:50:33.413: INFO: Terminating ReplicationController wrapped-volume-race-25339114-887b-11e9-b3bf-0e7bbe1a64f6 pods took: 400.293156ms +STEP: Creating RC which spawns configmap-volume pods +Jun 6 16:51:20.954: INFO: Pod name wrapped-volume-race-4f96b882-887b-11e9-b3bf-0e7bbe1a64f6: Found 0 pods out of 5 +Jun 6 16:51:25.967: INFO: Pod name wrapped-volume-race-4f96b882-887b-11e9-b3bf-0e7bbe1a64f6: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-4f96b882-887b-11e9-b3bf-0e7bbe1a64f6 in namespace emptydir-wrapper-4762, will wait for the garbage collector to 
delete the pods +Jun 6 16:51:42.140: INFO: Deleting ReplicationController wrapped-volume-race-4f96b882-887b-11e9-b3bf-0e7bbe1a64f6 took: 18.785424ms +Jun 6 16:51:42.542: INFO: Terminating ReplicationController wrapped-volume-race-4f96b882-887b-11e9-b3bf-0e7bbe1a64f6 pods took: 401.271416ms +STEP: Creating RC which spawns configmap-volume pods +Jun 6 16:52:17.683: INFO: Pod name wrapped-volume-race-7166bad0-887b-11e9-b3bf-0e7bbe1a64f6: Found 0 pods out of 5 +Jun 6 16:52:22.695: INFO: Pod name wrapped-volume-race-7166bad0-887b-11e9-b3bf-0e7bbe1a64f6: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-7166bad0-887b-11e9-b3bf-0e7bbe1a64f6 in namespace emptydir-wrapper-4762, will wait for the garbage collector to delete the pods +Jun 6 16:52:40.854: INFO: Deleting ReplicationController wrapped-volume-race-7166bad0-887b-11e9-b3bf-0e7bbe1a64f6 took: 17.746566ms +Jun 6 16:52:41.256: INFO: Terminating ReplicationController wrapped-volume-race-7166bad0-887b-11e9-b3bf-0e7bbe1a64f6 pods took: 401.654301ms +STEP: Cleaning up the configMaps +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:53:22.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-4762" for this suite. 
+Jun 6 16:53:30.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:53:31.039: INFO: namespace emptydir-wrapper-4762 deletion completed in 8.238373394s + +• [SLOW TEST:201.910 seconds] +[sig-storage] EmptyDir wrapper volumes +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:53:31.039: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name cm-test-opt-del-9d38e8ae-887b-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating configMap with name cm-test-opt-upd-9d38e931-887b-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating the pod +STEP: Deleting configmap cm-test-opt-del-9d38e8ae-887b-11e9-b3bf-0e7bbe1a64f6 +STEP: Updating 
configmap cm-test-opt-upd-9d38e931-887b-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating configMap with name cm-test-opt-create-9d38ea6d-887b-11e9-b3bf-0e7bbe1a64f6 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:54:50.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1758" for this suite. +Jun 6 16:55:12.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:55:12.628: INFO: namespace projected-1758 deletion completed in 22.249370225s + +• [SLOW TEST:101.589 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Update Demo + should scale a replication controller [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:55:12.629: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] 
[sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Update Demo + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:265 +[It] should scale a replication controller [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating a replication controller +Jun 6 16:55:12.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 create -f - --namespace=kubectl-9549' +Jun 6 16:55:12.980: INFO: stderr: "" +Jun 6 16:55:12.981: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Jun 6 16:55:12.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9549' +Jun 6 16:55:13.127: INFO: stderr: "" +Jun 6 16:55:13.128: INFO: stdout: "update-demo-nautilus-5b2n8 update-demo-nautilus-msj6j " +Jun 6 16:55:13.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-5b2n8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9549' +Jun 6 16:55:13.253: INFO: stderr: "" +Jun 6 16:55:13.253: INFO: stdout: "" +Jun 6 16:55:13.253: INFO: update-demo-nautilus-5b2n8 is created but not running +Jun 6 16:55:18.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9549' +Jun 6 16:55:18.397: INFO: stderr: "" +Jun 6 16:55:18.397: INFO: stdout: "update-demo-nautilus-5b2n8 update-demo-nautilus-msj6j " +Jun 6 16:55:18.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-5b2n8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9549' +Jun 6 16:55:18.508: INFO: stderr: "" +Jun 6 16:55:18.508: INFO: stdout: "true" +Jun 6 16:55:18.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-5b2n8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9549' +Jun 6 16:55:18.628: INFO: stderr: "" +Jun 6 16:55:18.628: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 6 16:55:18.628: INFO: validating pod update-demo-nautilus-5b2n8 +Jun 6 16:55:18.639: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 6 16:55:18.639: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 6 16:55:18.639: INFO: update-demo-nautilus-5b2n8 is verified up and running +Jun 6 16:55:18.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-msj6j -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9549' +Jun 6 16:55:18.759: INFO: stderr: "" +Jun 6 16:55:18.759: INFO: stdout: "true" +Jun 6 16:55:18.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-msj6j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9549' +Jun 6 16:55:18.883: INFO: stderr: "" +Jun 6 16:55:18.883: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 6 16:55:18.883: INFO: validating pod update-demo-nautilus-msj6j +Jun 6 16:55:18.893: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 6 16:55:18.893: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 6 16:55:18.893: INFO: update-demo-nautilus-msj6j is verified up and running +STEP: scaling down the replication controller +Jun 6 16:55:18.897: INFO: scanned /root for discovery docs: +Jun 6 16:55:18.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9549' +Jun 6 16:55:20.656: INFO: stderr: "" +Jun 6 16:55:20.656: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. 
+Jun 6 16:55:20.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9549' +Jun 6 16:55:20.775: INFO: stderr: "" +Jun 6 16:55:20.775: INFO: stdout: "update-demo-nautilus-5b2n8 update-demo-nautilus-msj6j " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Jun 6 16:55:25.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9549' +Jun 6 16:55:25.913: INFO: stderr: "" +Jun 6 16:55:25.913: INFO: stdout: "update-demo-nautilus-msj6j " +Jun 6 16:55:25.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-msj6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9549' +Jun 6 16:55:26.036: INFO: stderr: "" +Jun 6 16:55:26.036: INFO: stdout: "true" +Jun 6 16:55:26.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-msj6j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9549' +Jun 6 16:55:26.165: INFO: stderr: "" +Jun 6 16:55:26.165: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 6 16:55:26.166: INFO: validating pod update-demo-nautilus-msj6j +Jun 6 16:55:26.176: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 6 16:55:26.176: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Jun 6 16:55:26.176: INFO: update-demo-nautilus-msj6j is verified up and running +STEP: scaling up the replication controller +Jun 6 16:55:26.179: INFO: scanned /root for discovery docs: +Jun 6 16:55:26.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9549' +Jun 6 16:55:27.369: INFO: stderr: "" +Jun 6 16:55:27.369: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Jun 6 16:55:27.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9549' +Jun 6 16:55:27.501: INFO: stderr: "" +Jun 6 16:55:27.501: INFO: stdout: "update-demo-nautilus-msj6j update-demo-nautilus-vzvbx " +Jun 6 16:55:27.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-msj6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9549' +Jun 6 16:55:27.650: INFO: stderr: "" +Jun 6 16:55:27.650: INFO: stdout: "true" +Jun 6 16:55:27.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-msj6j -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9549' +Jun 6 16:55:27.817: INFO: stderr: "" +Jun 6 16:55:27.817: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 6 16:55:27.817: INFO: validating pod update-demo-nautilus-msj6j +Jun 6 16:55:27.830: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 6 16:55:27.830: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 6 16:55:27.830: INFO: update-demo-nautilus-msj6j is verified up and running +Jun 6 16:55:27.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-vzvbx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9549' +Jun 6 16:55:27.961: INFO: stderr: "" +Jun 6 16:55:27.961: INFO: stdout: "" +Jun 6 16:55:27.961: INFO: update-demo-nautilus-vzvbx is created but not running +Jun 6 16:55:32.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9549' +Jun 6 16:55:33.127: INFO: stderr: "" +Jun 6 16:55:33.127: INFO: stdout: "update-demo-nautilus-msj6j update-demo-nautilus-vzvbx " +Jun 6 16:55:33.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-msj6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9549' +Jun 6 16:55:33.253: INFO: stderr: "" +Jun 6 16:55:33.253: INFO: stdout: "true" +Jun 6 16:55:33.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-msj6j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9549' +Jun 6 16:55:33.389: INFO: stderr: "" +Jun 6 16:55:33.389: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 6 16:55:33.389: INFO: validating pod update-demo-nautilus-msj6j +Jun 6 16:55:33.396: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 6 16:55:33.396: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 6 16:55:33.396: INFO: update-demo-nautilus-msj6j is verified up and running +Jun 6 16:55:33.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-vzvbx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9549' +Jun 6 16:55:33.516: INFO: stderr: "" +Jun 6 16:55:33.516: INFO: stdout: "true" +Jun 6 16:55:33.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-vzvbx -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9549' +Jun 6 16:55:33.648: INFO: stderr: "" +Jun 6 16:55:33.648: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 6 16:55:33.648: INFO: validating pod update-demo-nautilus-vzvbx +Jun 6 16:55:33.660: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 6 16:55:33.660: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 6 16:55:33.660: INFO: update-demo-nautilus-vzvbx is verified up and running +STEP: using delete to clean up resources +Jun 6 16:55:33.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete --grace-period=0 --force -f - --namespace=kubectl-9549' +Jun 6 16:55:33.799: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 6 16:55:33.799: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Jun 6 16:55:33.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9549' +Jun 6 16:55:33.943: INFO: stderr: "No resources found.\n" +Jun 6 16:55:33.943: INFO: stdout: "" +Jun 6 16:55:33.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -l name=update-demo --namespace=kubectl-9549 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jun 6 16:55:34.090: INFO: stderr: "" +Jun 6 16:55:34.090: INFO: stdout: "update-demo-nautilus-msj6j\nupdate-demo-nautilus-vzvbx\n" +Jun 6 16:55:34.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9549' +Jun 6 16:55:34.754: INFO: stderr: "No resources found.\n" +Jun 6 
16:55:34.754: INFO: stdout: "" +Jun 6 16:55:34.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -l name=update-demo --namespace=kubectl-9549 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jun 6 16:55:34.897: INFO: stderr: "" +Jun 6 16:55:34.897: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:55:34.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9549" for this suite. +Jun 6 16:55:56.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:55:57.145: INFO: namespace kubectl-9549 deletion completed in 22.237766605s + +• [SLOW TEST:44.517 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Update Demo + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should scale a replication controller [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-network] DNS + should provide DNS for the cluster [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] DNS + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: 
Creating a kubernetes client +Jun 6 16:55:57.146: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for the cluster [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-642.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-642.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Jun 6 16:56:03.399: INFO: DNS probes using dns-642/dns-test-f44bb6f3-887b-11e9-b3bf-0e7bbe1a64f6 succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:56:03.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-642" for this suite. +Jun 6 16:56:09.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:56:09.665: INFO: namespace dns-642 deletion completed in 6.241139034s + +• [SLOW TEST:12.519 seconds] +[sig-network] DNS +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should provide DNS for the cluster [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[k8s.io] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:56:09.667: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Jun 6 16:56:14.329: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fbc0ca73-887b-11e9-b3bf-0e7bbe1a64f6" +Jun 6 16:56:14.329: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fbc0ca73-887b-11e9-b3bf-0e7bbe1a64f6" in namespace "pods-556" to be "terminated due to deadline exceeded" +Jun 6 16:56:14.336: INFO: Pod "pod-update-activedeadlineseconds-fbc0ca73-887b-11e9-b3bf-0e7bbe1a64f6": Phase="Running", Reason="", readiness=true. Elapsed: 7.350885ms +Jun 6 16:56:16.346: INFO: Pod "pod-update-activedeadlineseconds-fbc0ca73-887b-11e9-b3bf-0e7bbe1a64f6": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.016738288s +Jun 6 16:56:16.346: INFO: Pod "pod-update-activedeadlineseconds-fbc0ca73-887b-11e9-b3bf-0e7bbe1a64f6" satisfied condition "terminated due to deadline exceeded" +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:56:16.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-556" for this suite. +Jun 6 16:56:24.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:56:24.647: INFO: namespace pods-556 deletion completed in 8.29182534s + +• [SLOW TEST:14.980 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[k8s.io] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:56:24.647: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 6 16:56:24.755: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: creating the pod +STEP: submitting the pod to kubernetes +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:56:29.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-6917" for this suite. +Jun 6 16:57:13.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:57:13.305: INFO: namespace pods-6917 deletion completed in 44.282432969s + +• [SLOW TEST:48.658 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Proxy server + should support proxy with --port 0 [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:57:13.306: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should support proxy with --port 0 [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: starting the proxy server +Jun 6 16:57:13.417: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-489975799 proxy -p 0 --disable-filter' +STEP: curling proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:57:13.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7438" for this suite. 
+Jun 6 16:57:19.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:57:19.809: INFO: namespace kubectl-7438 deletion completed in 6.266489134s + +• [SLOW TEST:6.503 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Proxy server + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should support proxy with --port 0 [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:57:19.812: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name 
secret-test-map-2590f1aa-887c-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume secrets +Jun 6 16:57:19.935: INFO: Waiting up to 5m0s for pod "pod-secrets-25925dd0-887c-11e9-b3bf-0e7bbe1a64f6" in namespace "secrets-2871" to be "success or failure" +Jun 6 16:57:19.945: INFO: Pod "pod-secrets-25925dd0-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.742612ms +Jun 6 16:57:21.952: INFO: Pod "pod-secrets-25925dd0-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016429216s +Jun 6 16:57:23.961: INFO: Pod "pod-secrets-25925dd0-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025634513s +STEP: Saw pod success +Jun 6 16:57:23.961: INFO: Pod "pod-secrets-25925dd0-887c-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:57:23.969: INFO: Trying to get logs from node cncf-1 pod pod-secrets-25925dd0-887c-11e9-b3bf-0e7bbe1a64f6 container secret-volume-test: +STEP: delete the pod +Jun 6 16:57:24.012: INFO: Waiting for pod pod-secrets-25925dd0-887c-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:57:24.017: INFO: Pod pod-secrets-25925dd0-887c-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:57:24.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2871" for this suite. 
+Jun 6 16:57:30.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:57:30.310: INFO: namespace secrets-2871 deletion completed in 6.287188837s + +• [SLOW TEST:10.498 seconds] +[sig-storage] Secrets +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:57:30.311: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 6 
16:57:30.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2bd2d98f-887c-11e9-b3bf-0e7bbe1a64f6" in namespace "downward-api-1061" to be "success or failure" +Jun 6 16:57:30.433: INFO: Pod "downwardapi-volume-2bd2d98f-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029727ms +Jun 6 16:57:32.441: INFO: Pod "downwardapi-volume-2bd2d98f-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015478214s +Jun 6 16:57:34.450: INFO: Pod "downwardapi-volume-2bd2d98f-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024658405s +STEP: Saw pod success +Jun 6 16:57:34.450: INFO: Pod "downwardapi-volume-2bd2d98f-887c-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:57:34.458: INFO: Trying to get logs from node cncf-1 pod downwardapi-volume-2bd2d98f-887c-11e9-b3bf-0e7bbe1a64f6 container client-container: +STEP: delete the pod +Jun 6 16:57:34.503: INFO: Waiting for pod downwardapi-volume-2bd2d98f-887c-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:57:34.514: INFO: Pod downwardapi-volume-2bd2d98f-887c-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:57:34.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1061" for this suite. 
+Jun 6 16:57:40.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:57:40.813: INFO: namespace downward-api-1061 deletion completed in 6.286924608s + +• [SLOW TEST:10.501 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:57:40.814: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 6 16:57:40.977: INFO: Creating simple daemon set daemon-set +STEP: 
Check that daemon pods launch on every node of the cluster. +Jun 6 16:57:41.003: INFO: Number of nodes with available pods: 0 +Jun 6 16:57:41.003: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 16:57:42.021: INFO: Number of nodes with available pods: 0 +Jun 6 16:57:42.021: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 16:57:43.241: INFO: Number of nodes with available pods: 0 +Jun 6 16:57:43.241: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 16:57:44.023: INFO: Number of nodes with available pods: 2 +Jun 6 16:57:44.023: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Update daemon pods image. +STEP: Check that daemon pods images are updated. +Jun 6 16:57:44.100: INFO: Wrong image for pod: daemon-set-k46d5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:44.100: INFO: Wrong image for pod: daemon-set-p8bx4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:45.115: INFO: Wrong image for pod: daemon-set-k46d5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:45.115: INFO: Wrong image for pod: daemon-set-p8bx4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:46.116: INFO: Wrong image for pod: daemon-set-k46d5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:46.116: INFO: Wrong image for pod: daemon-set-p8bx4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:47.118: INFO: Wrong image for pod: daemon-set-k46d5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:47.118: INFO: Pod daemon-set-k46d5 is not available +Jun 6 16:57:47.118: INFO: Wrong image for pod: daemon-set-p8bx4. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:48.121: INFO: Wrong image for pod: daemon-set-k46d5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:48.121: INFO: Pod daemon-set-k46d5 is not available +Jun 6 16:57:48.121: INFO: Wrong image for pod: daemon-set-p8bx4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:49.117: INFO: Wrong image for pod: daemon-set-k46d5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:49.117: INFO: Pod daemon-set-k46d5 is not available +Jun 6 16:57:49.117: INFO: Wrong image for pod: daemon-set-p8bx4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:50.151: INFO: Wrong image for pod: daemon-set-k46d5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:50.151: INFO: Pod daemon-set-k46d5 is not available +Jun 6 16:57:50.151: INFO: Wrong image for pod: daemon-set-p8bx4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:51.115: INFO: Wrong image for pod: daemon-set-p8bx4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:51.115: INFO: Pod daemon-set-qzq5x is not available +Jun 6 16:57:52.116: INFO: Wrong image for pod: daemon-set-p8bx4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:52.116: INFO: Pod daemon-set-qzq5x is not available +Jun 6 16:57:53.115: INFO: Wrong image for pod: daemon-set-p8bx4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
+Jun 6 16:57:53.115: INFO: Pod daemon-set-qzq5x is not available +Jun 6 16:57:54.114: INFO: Wrong image for pod: daemon-set-p8bx4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:55.118: INFO: Wrong image for pod: daemon-set-p8bx4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:56.116: INFO: Wrong image for pod: daemon-set-p8bx4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:56.116: INFO: Pod daemon-set-p8bx4 is not available +Jun 6 16:57:57.116: INFO: Wrong image for pod: daemon-set-p8bx4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:57.116: INFO: Pod daemon-set-p8bx4 is not available +Jun 6 16:57:58.114: INFO: Wrong image for pod: daemon-set-p8bx4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:58.114: INFO: Pod daemon-set-p8bx4 is not available +Jun 6 16:57:59.115: INFO: Wrong image for pod: daemon-set-p8bx4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:57:59.115: INFO: Pod daemon-set-p8bx4 is not available +Jun 6 16:58:00.115: INFO: Wrong image for pod: daemon-set-p8bx4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 6 16:58:00.115: INFO: Pod daemon-set-p8bx4 is not available +Jun 6 16:58:01.123: INFO: Pod daemon-set-hwn4p is not available +STEP: Check that daemon pods are still running on every node of the cluster. 
+Jun 6 16:58:01.147: INFO: Number of nodes with available pods: 1 +Jun 6 16:58:01.147: INFO: Node cncf-2 is running more than one daemon pod +Jun 6 16:58:02.165: INFO: Number of nodes with available pods: 1 +Jun 6 16:58:02.165: INFO: Node cncf-2 is running more than one daemon pod +Jun 6 16:58:03.171: INFO: Number of nodes with available pods: 1 +Jun 6 16:58:03.171: INFO: Node cncf-2 is running more than one daemon pod +Jun 6 16:58:04.170: INFO: Number of nodes with available pods: 2 +Jun 6 16:58:04.170: INFO: Number of running nodes: 2, number of available pods: 2 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6940, will wait for the garbage collector to delete the pods +Jun 6 16:58:04.284: INFO: Deleting DaemonSet.extensions daemon-set took: 18.204399ms +Jun 6 16:58:04.684: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.297579ms +Jun 6 16:58:10.591: INFO: Number of nodes with available pods: 0 +Jun 6 16:58:10.591: INFO: Number of running nodes: 0, number of available pods: 0 +Jun 6 16:58:10.598: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6940/daemonsets","resourceVersion":"3959769931"},"items":null} + +Jun 6 16:58:10.605: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6940/pods","resourceVersion":"3959769931"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:58:10.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-6940" for this suite. 
+Jun 6 16:58:16.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:58:16.890: INFO: namespace daemonsets-6940 deletion completed in 6.253335001s + +• [SLOW TEST:36.076 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:58:16.891: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward api env vars +Jun 6 16:58:17.016: INFO: Waiting up to 5m0s for pod "downward-api-4797d0bb-887c-11e9-b3bf-0e7bbe1a64f6" in namespace "downward-api-6705" to be "success or failure" +Jun 6 16:58:17.023: INFO: Pod "downward-api-4797d0bb-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.307898ms +Jun 6 16:58:19.032: INFO: Pod "downward-api-4797d0bb-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016603341s +Jun 6 16:58:21.042: INFO: Pod "downward-api-4797d0bb-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026328413s +STEP: Saw pod success +Jun 6 16:58:21.042: INFO: Pod "downward-api-4797d0bb-887c-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:58:21.049: INFO: Trying to get logs from node cncf-2 pod downward-api-4797d0bb-887c-11e9-b3bf-0e7bbe1a64f6 container dapi-container: +STEP: delete the pod +Jun 6 16:58:21.085: INFO: Waiting for pod downward-api-4797d0bb-887c-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:58:21.090: INFO: Pod downward-api-4797d0bb-887c-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:58:21.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6705" for this suite. 
+Jun 6 16:58:27.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:58:27.391: INFO: namespace downward-api-6705 deletion completed in 6.292346608s + +• [SLOW TEST:10.501 seconds] +[sig-node] Downward API +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:58:27.392: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-map-4ddb41e8-887c-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume configMaps +Jun 6 16:58:27.534: INFO: Waiting up to 5m0s for pod "pod-configmaps-4ddcee29-887c-11e9-b3bf-0e7bbe1a64f6" in namespace "configmap-9704" 
to be "success or failure" +Jun 6 16:58:27.544: INFO: Pod "pod-configmaps-4ddcee29-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.680743ms +Jun 6 16:58:29.552: INFO: Pod "pod-configmaps-4ddcee29-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016927816s +Jun 6 16:58:31.561: INFO: Pod "pod-configmaps-4ddcee29-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02636516s +STEP: Saw pod success +Jun 6 16:58:31.561: INFO: Pod "pod-configmaps-4ddcee29-887c-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:58:31.569: INFO: Trying to get logs from node cncf-2 pod pod-configmaps-4ddcee29-887c-11e9-b3bf-0e7bbe1a64f6 container configmap-volume-test: +STEP: delete the pod +Jun 6 16:58:31.614: INFO: Waiting for pod pod-configmaps-4ddcee29-887c-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:58:31.619: INFO: Pod pod-configmaps-4ddcee29-887c-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:58:31.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9704" for this suite. 
+Jun 6 16:58:37.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:58:37.883: INFO: namespace configmap-9704 deletion completed in 6.248300178s + +• [SLOW TEST:10.490 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Docker Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:58:37.886: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test use defaults +Jun 6 16:58:38.003: INFO: Waiting up to 5m0s for pod "client-containers-541a6ecf-887c-11e9-b3bf-0e7bbe1a64f6" in namespace "containers-4822" to be "success or failure" +Jun 6 16:58:38.010: INFO: Pod 
"client-containers-541a6ecf-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.972914ms +Jun 6 16:58:40.017: INFO: Pod "client-containers-541a6ecf-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013961756s +Jun 6 16:58:42.030: INFO: Pod "client-containers-541a6ecf-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027000928s +STEP: Saw pod success +Jun 6 16:58:42.030: INFO: Pod "client-containers-541a6ecf-887c-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:58:42.037: INFO: Trying to get logs from node cncf-1 pod client-containers-541a6ecf-887c-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 16:58:42.085: INFO: Waiting for pod client-containers-541a6ecf-887c-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:58:42.089: INFO: Pod client-containers-541a6ecf-887c-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:58:42.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-4822" for this suite. 
+Jun 6 16:58:48.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:58:48.378: INFO: namespace containers-4822 deletion completed in 6.282086889s + +• [SLOW TEST:10.493 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl label + should update the label on a resource [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:58:48.380: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl label + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1108 +STEP: creating the pod +Jun 6 16:58:48.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 create -f - --namespace=kubectl-3778' +Jun 6 16:58:48.914: INFO: stderr: "" 
+Jun 6 16:58:48.914: INFO: stdout: "pod/pause created\n" +Jun 6 16:58:48.914: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Jun 6 16:58:48.914: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3778" to be "running and ready" +Jun 6 16:58:48.924: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.323051ms +Jun 6 16:58:50.941: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027149581s +Jun 6 16:58:52.951: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.036733378s +Jun 6 16:58:52.951: INFO: Pod "pause" satisfied condition "running and ready" +Jun 6 16:58:52.951: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] +[It] should update the label on a resource [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: adding the label testing-label with value testing-label-value to a pod +Jun 6 16:58:52.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 label pods pause testing-label=testing-label-value --namespace=kubectl-3778' +Jun 6 16:58:53.119: INFO: stderr: "" +Jun 6 16:58:53.119: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value +Jun 6 16:58:53.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pod pause -L testing-label --namespace=kubectl-3778' +Jun 6 16:58:53.242: INFO: stderr: "" +Jun 6 16:58:53.242: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" +STEP: removing the label testing-label of a pod +Jun 6 16:58:53.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 label pods pause testing-label- --namespace=kubectl-3778' +Jun 6 16:58:53.406: INFO: stderr: "" +Jun 6 16:58:53.406: INFO: 
stdout: "pod/pause labeled\n" +STEP: verifying the pod doesn't have the label testing-label +Jun 6 16:58:53.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pod pause -L testing-label --namespace=kubectl-3778' +Jun 6 16:58:53.520: INFO: stderr: "" +Jun 6 16:58:53.520: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" +[AfterEach] [k8s.io] Kubectl label + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1115 +STEP: using delete to clean up resources +Jun 6 16:58:53.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete --grace-period=0 --force -f - --namespace=kubectl-3778' +Jun 6 16:58:53.699: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 6 16:58:53.699: INFO: stdout: "pod \"pause\" force deleted\n" +Jun 6 16:58:53.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get rc,svc -l name=pause --no-headers --namespace=kubectl-3778' +Jun 6 16:58:53.848: INFO: stderr: "No resources found.\n" +Jun 6 16:58:53.848: INFO: stdout: "" +Jun 6 16:58:53.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -l name=pause --namespace=kubectl-3778 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jun 6 16:58:53.965: INFO: stderr: "" +Jun 6 16:58:53.965: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:58:53.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3778" for this suite. 
+Jun 6 16:59:00.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:59:00.259: INFO: namespace kubectl-3778 deletion completed in 6.286809087s + +• [SLOW TEST:11.880 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl label + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should update the label on a resource [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] ReplicationController + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:59:00.260: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating replication controller my-hostname-basic-6170aa38-887c-11e9-b3bf-0e7bbe1a64f6 +Jun 6 16:59:00.380: INFO: Pod name 
my-hostname-basic-6170aa38-887c-11e9-b3bf-0e7bbe1a64f6: Found 0 pods out of 1 +Jun 6 16:59:05.388: INFO: Pod name my-hostname-basic-6170aa38-887c-11e9-b3bf-0e7bbe1a64f6: Found 1 pods out of 1 +Jun 6 16:59:05.389: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-6170aa38-887c-11e9-b3bf-0e7bbe1a64f6" are running +Jun 6 16:59:05.396: INFO: Pod "my-hostname-basic-6170aa38-887c-11e9-b3bf-0e7bbe1a64f6-x9s6f" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-06 16:59:00 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-06 16:59:02 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-06 16:59:02 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-06 16:59:00 +0000 UTC Reason: Message:}]) +Jun 6 16:59:05.396: INFO: Trying to dial the pod +Jun 6 16:59:10.423: INFO: Controller my-hostname-basic-6170aa38-887c-11e9-b3bf-0e7bbe1a64f6: Got expected result from replica 1 [my-hostname-basic-6170aa38-887c-11e9-b3bf-0e7bbe1a64f6-x9s6f]: "my-hostname-basic-6170aa38-887c-11e9-b3bf-0e7bbe1a64f6-x9s6f", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:59:10.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-7810" for this suite. 
+Jun 6 16:59:16.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:59:16.688: INFO: namespace replication-controller-7810 deletion completed in 6.255321671s + +• [SLOW TEST:16.428 seconds] +[sig-apps] ReplicationController +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class + should be submitted and removed [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] [sig-node] Pods Extended + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:59:16.689: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods Set QOS Class + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:177 +[It] should be submitted and removed [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying QOS class is set on the pod +[AfterEach] [k8s.io] [sig-node] Pods Extended + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:59:16.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5086" for this suite. +Jun 6 16:59:38.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:59:39.087: INFO: namespace pods-5086 deletion completed in 22.268216883s + +• [SLOW TEST:22.398 seconds] +[k8s.io] [sig-node] Pods Extended +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + [k8s.io] Pods Set QOS Class + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be submitted and removed [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:59:39.092: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap configmap-6630/configmap-test-7894aebc-887c-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume configMaps +Jun 6 16:59:39.214: INFO: Waiting up to 5m0s for pod "pod-configmaps-789639f7-887c-11e9-b3bf-0e7bbe1a64f6" in namespace "configmap-6630" to be "success or failure" +Jun 6 16:59:39.222: INFO: Pod "pod-configmaps-789639f7-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.426446ms +Jun 6 16:59:41.946: INFO: Pod "pod-configmaps-789639f7-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.731249096s +Jun 6 16:59:43.956: INFO: Pod "pod-configmaps-789639f7-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.741735483s +STEP: Saw pod success +Jun 6 16:59:43.956: INFO: Pod "pod-configmaps-789639f7-887c-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:59:43.966: INFO: Trying to get logs from node cncf-1 pod pod-configmaps-789639f7-887c-11e9-b3bf-0e7bbe1a64f6 container env-test: +STEP: delete the pod +Jun 6 16:59:44.031: INFO: Waiting for pod pod-configmaps-789639f7-887c-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:59:44.037: INFO: Pod pod-configmaps-789639f7-887c-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:59:44.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6630" for this suite. 
+Jun 6 16:59:50.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 16:59:51.116: INFO: namespace configmap-6630 deletion completed in 7.072325294s + +• [SLOW TEST:12.024 seconds] +[sig-node] ConfigMap +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32 + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 16:59:51.119: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating projection with secret that has name projected-secret-test-map-7fc421d9-887c-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume secrets +Jun 6 16:59:51.272: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7fc61ae0-887c-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-5903" to 
be "success or failure" +Jun 6 16:59:51.278: INFO: Pod "pod-projected-secrets-7fc61ae0-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061215ms +Jun 6 16:59:54.019: INFO: Pod "pod-projected-secrets-7fc61ae0-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.746428038s +Jun 6 16:59:56.027: INFO: Pod "pod-projected-secrets-7fc61ae0-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.755151486s +STEP: Saw pod success +Jun 6 16:59:56.028: INFO: Pod "pod-projected-secrets-7fc61ae0-887c-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 16:59:56.036: INFO: Trying to get logs from node cncf-2 pod pod-projected-secrets-7fc61ae0-887c-11e9-b3bf-0e7bbe1a64f6 container projected-secret-volume-test: +STEP: delete the pod +Jun 6 16:59:56.081: INFO: Waiting for pod pod-projected-secrets-7fc61ae0-887c-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 16:59:56.087: INFO: Pod pod-projected-secrets-7fc61ae0-887c-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 16:59:56.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5903" for this suite. 
+Jun 6 17:00:02.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:00:02.360: INFO: namespace projected-5903 deletion completed in 6.256115062s + +• [SLOW TEST:11.241 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[k8s.io] [sig-node] PreStop + should call prestop when killing a pod [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] [sig-node] PreStop + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:00:02.361: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename prestop +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] [sig-node] PreStop + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 +[It] should call prestop when killing a pod [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating server pod server in namespace prestop-8049 +STEP: Waiting for pods to come up. 
+STEP: Creating tester pod tester in namespace prestop-8049 +STEP: Deleting pre-stop pod +Jun 6 17:00:15.546: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true +} +STEP: Deleting the server pod +[AfterEach] [k8s.io] [sig-node] PreStop + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:00:15.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "prestop-8049" for this suite. +Jun 6 17:00:55.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:00:55.847: INFO: namespace prestop-8049 deletion completed in 40.279084062s + +• [SLOW TEST:53.486 seconds] +[k8s.io] [sig-node] PreStop +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should call prestop when killing a pod [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] 
Kubelet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:00:55.848: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[It] should print the output to logs [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:01:00.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-5947" for this suite. 
+Jun 6 17:01:40.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:01:40.340: INFO: namespace kubelet-test-5947 deletion completed in 40.317567345s + +• [SLOW TEST:44.492 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when scheduling a busybox command in a pod + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 + should print the output to logs [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:01:40.341: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name 
secret-test-c0dca320-887c-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume secrets +Jun 6 17:01:40.483: INFO: Waiting up to 5m0s for pod "pod-secrets-c0de6588-887c-11e9-b3bf-0e7bbe1a64f6" in namespace "secrets-5380" to be "success or failure" +Jun 6 17:01:40.494: INFO: Pod "pod-secrets-c0de6588-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.34465ms +Jun 6 17:01:42.505: INFO: Pod "pod-secrets-c0de6588-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021555508s +Jun 6 17:01:44.514: INFO: Pod "pod-secrets-c0de6588-887c-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030883073s +STEP: Saw pod success +Jun 6 17:01:44.514: INFO: Pod "pod-secrets-c0de6588-887c-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 17:01:44.522: INFO: Trying to get logs from node cncf-2 pod pod-secrets-c0de6588-887c-11e9-b3bf-0e7bbe1a64f6 container secret-volume-test: +STEP: delete the pod +Jun 6 17:01:44.566: INFO: Waiting for pod pod-secrets-c0de6588-887c-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 17:01:44.572: INFO: Pod pod-secrets-c0de6588-887c-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:01:44.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5380" for this suite. 
+Jun 6 17:01:50.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:01:50.862: INFO: namespace secrets-5380 deletion completed in 6.280964806s + +• [SLOW TEST:10.521 seconds] +[sig-storage] Secrets +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:01:50.864: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace statefulset-4992 
+[It] should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a new StatefulSet +Jun 6 17:01:51.019: INFO: Found 0 stateful pods, waiting for 3 +Jun 6 17:02:01.028: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Jun 6 17:02:01.029: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Jun 6 17:02:01.029: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false +Jun 6 17:02:11.029: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Jun 6 17:02:11.029: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Jun 6 17:02:11.029: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Jun 6 17:02:11.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-4992 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 6 17:02:11.431: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 6 17:02:11.431: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 6 17:02:11.431: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine +Jun 6 17:02:21.500: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Updating Pods in reverse ordinal order +Jun 6 17:02:31.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-4992 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' +Jun 6 17:02:31.902: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" +Jun 6 17:02:31.902: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 6 17:02:31.902: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 6 17:02:41.947: INFO: Waiting for StatefulSet statefulset-4992/ss2 to complete update +Jun 6 17:02:41.947: INFO: Waiting for Pod statefulset-4992/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666 +Jun 6 17:02:41.947: INFO: Waiting for Pod statefulset-4992/ss2-1 to have revision ss2-c79899b9 update revision ss2-787997d666 +Jun 6 17:02:51.973: INFO: Waiting for StatefulSet statefulset-4992/ss2 to complete update +Jun 6 17:02:51.974: INFO: Waiting for Pod statefulset-4992/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666 +Jun 6 17:03:01.966: INFO: Waiting for StatefulSet statefulset-4992/ss2 to complete update +STEP: Rolling back to a previous revision +Jun 6 17:03:11.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-4992 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 6 17:03:12.347: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 6 17:03:12.347: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 6 17:03:12.347: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 6 17:03:22.407: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order +Jun 6 17:03:32.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 exec --namespace=statefulset-4992 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 6 17:03:32.816: INFO: stderr: "+ mv -v /tmp/index.html 
/usr/share/nginx/html/\n" +Jun 6 17:03:32.816: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 6 17:03:32.816: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 6 17:03:42.868: INFO: Waiting for StatefulSet statefulset-4992/ss2 to complete update +Jun 6 17:03:42.868: INFO: Waiting for Pod statefulset-4992/ss2-0 to have revision ss2-787997d666 update revision ss2-c79899b9 +Jun 6 17:03:42.868: INFO: Waiting for Pod statefulset-4992/ss2-1 to have revision ss2-787997d666 update revision ss2-c79899b9 +Jun 6 17:03:52.879: INFO: Waiting for StatefulSet statefulset-4992/ss2 to complete update +Jun 6 17:03:52.879: INFO: Waiting for Pod statefulset-4992/ss2-0 to have revision ss2-787997d666 update revision ss2-c79899b9 +Jun 6 17:04:02.884: INFO: Waiting for StatefulSet statefulset-4992/ss2 to complete update +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Jun 6 17:04:12.886: INFO: Deleting all statefulset in ns statefulset-4992 +Jun 6 17:04:12.895: INFO: Scaling statefulset ss2 to 0 +Jun 6 17:04:22.940: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 6 17:04:22.950: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:04:22.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-4992" for this suite. 
+Jun 6 17:04:29.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:04:29.197: INFO: namespace statefulset-4992 deletion completed in 6.208660112s + +• [SLOW TEST:158.333 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Secrets + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:04:29.199: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name secret-test-2580a967-887d-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test 
consume secrets +Jun 6 17:04:29.325: INFO: Waiting up to 5m0s for pod "pod-secrets-25822e73-887d-11e9-b3bf-0e7bbe1a64f6" in namespace "secrets-7933" to be "success or failure" +Jun 6 17:04:29.334: INFO: Pod "pod-secrets-25822e73-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.177836ms +Jun 6 17:04:31.343: INFO: Pod "pod-secrets-25822e73-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017565399s +Jun 6 17:04:33.350: INFO: Pod "pod-secrets-25822e73-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024671419s +STEP: Saw pod success +Jun 6 17:04:33.350: INFO: Pod "pod-secrets-25822e73-887d-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 17:04:33.359: INFO: Trying to get logs from node cncf-2 pod pod-secrets-25822e73-887d-11e9-b3bf-0e7bbe1a64f6 container secret-env-test: +STEP: delete the pod +Jun 6 17:04:33.397: INFO: Waiting for pod pod-secrets-25822e73-887d-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 17:04:33.403: INFO: Pod pod-secrets-25822e73-887d-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-api-machinery] Secrets + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:04:33.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-7933" for this suite. 
+Jun 6 17:04:39.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:04:39.687: INFO: namespace secrets-7933 deletion completed in 6.275557648s + +• [SLOW TEST:10.488 seconds] +[sig-api-machinery] Secrets +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 + should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info + should check if Kubernetes master services is included in cluster-info [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:04:39.687: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should check if Kubernetes master services is included in cluster-info [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: validating cluster-info +Jun 6 17:04:39.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 
cluster-info' +Jun 6 17:04:39.922: INFO: stderr: "" +Jun 6 17:04:39.922: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://10.3.0.1:443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://10.3.0.1:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\x1b[0;32mMetrics-server\x1b[0m is running at \x1b[0;33mhttps://10.3.0.1:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:04:39.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5231" for this suite. +Jun 6 17:04:45.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:04:46.250: INFO: namespace kubectl-5231 deletion completed in 6.317411169s + +• [SLOW TEST:6.563 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl cluster-info + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should check if Kubernetes master services is included in cluster-info [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:04:46.250: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the rc1 +STEP: create the rc2 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well +STEP: delete the rc simpletest-rc-to-be-deleted +STEP: wait for the rc to be deleted +STEP: Gathering metrics +W0606 17:04:56.554307 15 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
+Jun 6 17:04:56.554: INFO: For apiserver_request_total: +For apiserver_request_latencies_summary: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:04:56.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-3819" for this suite. 
+Jun 6 17:05:02.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:05:02.799: INFO: namespace gc-3819 deletion completed in 6.236858358s + +• [SLOW TEST:16.548 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:05:02.799: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow opting out of API token automount [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: getting the auto-created API token +Jun 6 17:05:03.471: INFO: created pod pod-service-account-defaultsa +Jun 6 17:05:03.471: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Jun 6 17:05:03.482: INFO: created pod pod-service-account-mountsa +Jun 6 17:05:03.483: INFO: pod 
pod-service-account-mountsa service account token volume mount: true +Jun 6 17:05:03.499: INFO: created pod pod-service-account-nomountsa +Jun 6 17:05:03.499: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Jun 6 17:05:03.509: INFO: created pod pod-service-account-defaultsa-mountspec +Jun 6 17:05:03.509: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Jun 6 17:05:03.521: INFO: created pod pod-service-account-mountsa-mountspec +Jun 6 17:05:03.521: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Jun 6 17:05:03.533: INFO: created pod pod-service-account-nomountsa-mountspec +Jun 6 17:05:03.533: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Jun 6 17:05:03.553: INFO: created pod pod-service-account-defaultsa-nomountspec +Jun 6 17:05:03.554: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Jun 6 17:05:03.565: INFO: created pod pod-service-account-mountsa-nomountspec +Jun 6 17:05:03.565: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Jun 6 17:05:03.573: INFO: created pod pod-service-account-nomountsa-nomountspec +Jun 6 17:05:03.573: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:05:03.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-1504" for this suite. 
+Jun 6 17:05:25.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:05:25.863: INFO: namespace svcaccounts-1504 deletion completed in 22.284091797s + +• [SLOW TEST:23.064 seconds] +[sig-auth] ServiceAccounts +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 + should allow opting out of API token automount [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:05:25.864: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 6 17:05:25.990: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-47482790-887d-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-7115" to be "success or failure" +Jun 6 17:05:25.999: INFO: Pod "downwardapi-volume-47482790-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.308078ms +Jun 6 17:05:28.009: INFO: Pod "downwardapi-volume-47482790-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018254189s +Jun 6 17:05:30.018: INFO: Pod "downwardapi-volume-47482790-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027485549s +STEP: Saw pod success +Jun 6 17:05:30.018: INFO: Pod "downwardapi-volume-47482790-887d-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 17:05:30.026: INFO: Trying to get logs from node cncf-2 pod downwardapi-volume-47482790-887d-11e9-b3bf-0e7bbe1a64f6 container client-container: +STEP: delete the pod +Jun 6 17:05:30.062: INFO: Waiting for pod downwardapi-volume-47482790-887d-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 17:05:30.068: INFO: Pod downwardapi-volume-47482790-887d-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:05:30.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7115" for this suite. 
+Jun 6 17:05:36.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:05:36.335: INFO: namespace projected-7115 deletion completed in 6.258783069s + +• [SLOW TEST:10.471 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] ReplicationController + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:05:36.336: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[It] should release no longer matching pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Given a ReplicationController is created +STEP: When the matched label of one of its pods change +Jun 6 17:05:36.458: INFO: Pod name pod-release: Found 0 pods out of 1 +Jun 6 17:05:41.469: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicationController + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:05:41.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-6683" for this suite. +Jun 6 17:05:47.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:05:47.815: INFO: namespace replication-controller-6683 deletion completed in 6.293074346s + +• [SLOW TEST:11.479 seconds] +[sig-apps] ReplicationController +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should release no longer matching pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:05:47.815: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to start watching from a specific resource version [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating a new configmap +STEP: 
modifying the configmap once +STEP: modifying the configmap a second time +STEP: deleting the configmap +STEP: creating a watch on configmaps from the resource version returned by the first update +STEP: Expecting to observe notifications for all changes to the configmap after the first update +Jun 6 17:05:47.987: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8949,SelfLink:/api/v1/namespaces/watch-8949/configmaps/e2e-watch-test-resource-version,UID:545d8a5a-887d-11e9-9995-4ad9032ea524,ResourceVersion:3959852534,Generation:0,CreationTimestamp:2019-06-06 17:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Jun 6 17:05:47.987: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8949,SelfLink:/api/v1/namespaces/watch-8949/configmaps/e2e-watch-test-resource-version,UID:545d8a5a-887d-11e9-9995-4ad9032ea524,ResourceVersion:3959852535,Generation:0,CreationTimestamp:2019-06-06 17:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:05:47.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace 
"watch-8949" for this suite. +Jun 6 17:05:54.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:05:54.267: INFO: namespace watch-8949 deletion completed in 6.270939682s + +• [SLOW TEST:6.452 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should be able to start watching from a specific resource version [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[k8s.io] Pods + should be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:05:54.268: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Jun 6 17:05:59.631: INFO: Successfully updated pod 
"pod-update-583597c8-887d-11e9-b3bf-0e7bbe1a64f6" +STEP: verifying the updated pod is in kubernetes +Jun 6 17:05:59.645: INFO: Pod update OK +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:05:59.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-8428" for this suite. +Jun 6 17:06:21.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:06:21.936: INFO: namespace pods-8428 deletion completed in 22.285656222s + +• [SLOW TEST:27.669 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] DNS + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:06:21.937: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for services [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a 
test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1078.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1078.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1078.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1078.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1078.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1078.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1078.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1078.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1078.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1078.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1078.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1078.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1078.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 78.40.3.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.3.40.78_udp@PTR;check="$$(dig +tcp +noall +answer +search 78.40.3.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.3.40.78_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1078.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1078.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1078.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1078.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1078.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1078.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1078.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1078.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1078.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1078.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1078.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1078.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1078.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 78.40.3.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.3.40.78_udp@PTR;check="$$(dig +tcp +noall +answer +search 78.40.3.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.3.40.78_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Jun 6 17:06:28.351: INFO: DNS probes using dns-1078/dns-test-68bd9347-887d-11e9-b3bf-0e7bbe1a64f6 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:06:28.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-1078" for this suite. +Jun 6 17:06:34.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:06:34.653: INFO: namespace dns-1078 deletion completed in 6.23205147s + +• [SLOW TEST:12.717 seconds] +[sig-network] DNS +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should provide DNS for services [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client 
+Jun 6 17:06:34.654: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 6 17:06:34.767: INFO: Waiting up to 5m0s for pod "downwardapi-volume-70471189-887d-11e9-b3bf-0e7bbe1a64f6" in namespace "downward-api-2185" to be "success or failure" +Jun 6 17:06:34.773: INFO: Pod "downwardapi-volume-70471189-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.89795ms +Jun 6 17:06:36.783: INFO: Pod "downwardapi-volume-70471189-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015842383s +Jun 6 17:06:39.159: INFO: Pod "downwardapi-volume-70471189-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.391926001s +STEP: Saw pod success +Jun 6 17:06:39.159: INFO: Pod "downwardapi-volume-70471189-887d-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 17:06:39.175: INFO: Trying to get logs from node cncf-1 pod downwardapi-volume-70471189-887d-11e9-b3bf-0e7bbe1a64f6 container client-container: +STEP: delete the pod +Jun 6 17:06:39.240: INFO: Waiting for pod downwardapi-volume-70471189-887d-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 17:06:39.247: INFO: Pod downwardapi-volume-70471189-887d-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:06:39.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-2185" for this suite. +Jun 6 17:06:45.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:06:45.509: INFO: namespace downward-api-2185 deletion completed in 6.255129576s + +• [SLOW TEST:10.856 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run rc + should create an rc from an image [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 
+[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:06:45.510: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl run rc + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354 +[It] should create an rc from an image [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 6 17:06:45.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7907' +Jun 6 17:06:45.760: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Jun 6 17:06:45.760: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" +STEP: verifying the rc e2e-test-nginx-rc was created +STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created +STEP: confirm that you can get logs from an rc +Jun 6 17:06:45.775: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-v6hqf] +Jun 6 17:06:45.775: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-v6hqf" in namespace "kubectl-7907" to be "running and ready" +Jun 6 17:06:45.782: INFO: Pod "e2e-test-nginx-rc-v6hqf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.806919ms +Jun 6 17:06:47.791: INFO: Pod "e2e-test-nginx-rc-v6hqf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015739628s +Jun 6 17:06:49.798: INFO: Pod "e2e-test-nginx-rc-v6hqf": Phase="Running", Reason="", readiness=true. Elapsed: 4.022787804s +Jun 6 17:06:49.798: INFO: Pod "e2e-test-nginx-rc-v6hqf" satisfied condition "running and ready" +Jun 6 17:06:49.798: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-v6hqf] +Jun 6 17:06:49.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 logs rc/e2e-test-nginx-rc --namespace=kubectl-7907' +Jun 6 17:06:50.012: INFO: stderr: "" +Jun 6 17:06:50.012: INFO: stdout: "" +[AfterEach] [k8s.io] Kubectl run rc + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359 +Jun 6 17:06:50.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete rc e2e-test-nginx-rc --namespace=kubectl-7907' +Jun 6 17:06:50.147: INFO: stderr: "" +Jun 6 17:06:50.147: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:06:50.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7907" for this suite. 
+Jun 6 17:06:56.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:06:56.427: INFO: namespace kubectl-7907 deletion completed in 6.269537894s + +• [SLOW TEST:10.917 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl run rc + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create an rc from an image [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:06:56.430: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should retry creating failed daemon pods [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Jun 6 17:06:56.620: INFO: Number of nodes with available pods: 0 +Jun 6 17:06:56.621: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 17:06:57.644: INFO: Number of nodes with available pods: 0 +Jun 6 17:06:57.644: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 17:06:58.919: INFO: Number of nodes with available pods: 0 +Jun 6 17:06:58.919: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 17:06:59.639: INFO: Number of nodes with available pods: 0 +Jun 6 17:06:59.639: INFO: Node cncf-1 is running more than one daemon pod +Jun 6 17:07:00.639: INFO: Number of nodes with available pods: 2 +Jun 6 17:07:00.640: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. +Jun 6 17:07:00.696: INFO: Number of nodes with available pods: 1 +Jun 6 17:07:00.696: INFO: Node cncf-2 is running more than one daemon pod +Jun 6 17:07:01.714: INFO: Number of nodes with available pods: 1 +Jun 6 17:07:01.714: INFO: Node cncf-2 is running more than one daemon pod +Jun 6 17:07:02.711: INFO: Number of nodes with available pods: 1 +Jun 6 17:07:02.711: INFO: Node cncf-2 is running more than one daemon pod +Jun 6 17:07:03.713: INFO: Number of nodes with available pods: 1 +Jun 6 17:07:03.713: INFO: Node cncf-2 is running more than one daemon pod +Jun 6 17:07:04.712: INFO: Number of nodes with available pods: 2 +Jun 6 17:07:04.712: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Wait for the failed daemon pod to be completely deleted. 
+[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2000, will wait for the garbage collector to delete the pods +Jun 6 17:07:04.796: INFO: Deleting DaemonSet.extensions daemon-set took: 12.847779ms +Jun 6 17:07:04.897: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.01069ms +Jun 6 17:07:10.805: INFO: Number of nodes with available pods: 0 +Jun 6 17:07:10.805: INFO: Number of running nodes: 0, number of available pods: 0 +Jun 6 17:07:10.811: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2000/daemonsets","resourceVersion":"3959867384"},"items":null} + +Jun 6 17:07:10.817: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2000/pods","resourceVersion":"3959867384"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:07:10.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-2000" for this suite. 
+Jun 6 17:07:16.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:07:17.078: INFO: namespace daemonsets-2000 deletion completed in 6.230778572s + +• [SLOW TEST:20.649 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should retry creating failed daemon pods [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:07:17.081: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 6 17:07:17.192: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-89908aac-887d-11e9-b3bf-0e7bbe1a64f6" in namespace "downward-api-1610" to be "success or failure" +Jun 6 17:07:17.199: INFO: Pod "downwardapi-volume-89908aac-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.526508ms +Jun 6 17:07:19.207: INFO: Pod "downwardapi-volume-89908aac-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015036694s +Jun 6 17:07:22.141: INFO: Pod "downwardapi-volume-89908aac-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.949203306s +STEP: Saw pod success +Jun 6 17:07:22.141: INFO: Pod "downwardapi-volume-89908aac-887d-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 17:07:22.154: INFO: Trying to get logs from node cncf-2 pod downwardapi-volume-89908aac-887d-11e9-b3bf-0e7bbe1a64f6 container client-container: +STEP: delete the pod +Jun 6 17:07:22.192: INFO: Waiting for pod downwardapi-volume-89908aac-887d-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 17:07:22.198: INFO: Pod downwardapi-volume-89908aac-887d-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:07:22.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1610" for this suite. 
+Jun 6 17:07:28.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:07:28.467: INFO: namespace downward-api-1610 deletion completed in 6.258885844s + +• [SLOW TEST:11.387 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:07:28.468: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 6 17:07:28.597: INFO: Waiting up to 5m0s for pod "downwardapi-volume-905c684e-887d-11e9-b3bf-0e7bbe1a64f6" in namespace 
"downward-api-103" to be "success or failure" +Jun 6 17:07:28.607: INFO: Pod "downwardapi-volume-905c684e-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.64276ms +Jun 6 17:07:30.615: INFO: Pod "downwardapi-volume-905c684e-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017758603s +Jun 6 17:07:32.626: INFO: Pod "downwardapi-volume-905c684e-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028935531s +STEP: Saw pod success +Jun 6 17:07:32.626: INFO: Pod "downwardapi-volume-905c684e-887d-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 17:07:32.635: INFO: Trying to get logs from node cncf-2 pod downwardapi-volume-905c684e-887d-11e9-b3bf-0e7bbe1a64f6 container client-container: +STEP: delete the pod +Jun 6 17:07:32.673: INFO: Waiting for pod downwardapi-volume-905c684e-887d-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 17:07:32.679: INFO: Pod downwardapi-volume-905c684e-887d-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:07:32.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-103" for this suite. 
+Jun 6 17:07:38.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:07:39.197: INFO: namespace downward-api-103 deletion completed in 6.509345167s + +• [SLOW TEST:10.729 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:07:39.197: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0666 on node default medium +Jun 6 17:07:39.323: INFO: Waiting up to 5m0s for pod "pod-96c13b39-887d-11e9-b3bf-0e7bbe1a64f6" in namespace "emptydir-8449" to be "success or failure" +Jun 6 17:07:39.332: INFO: Pod "pod-96c13b39-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", 
readiness=false. Elapsed: 8.903556ms +Jun 6 17:07:41.340: INFO: Pod "pod-96c13b39-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016528936s +Jun 6 17:07:43.352: INFO: Pod "pod-96c13b39-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02808279s +STEP: Saw pod success +Jun 6 17:07:43.352: INFO: Pod "pod-96c13b39-887d-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 17:07:43.361: INFO: Trying to get logs from node cncf-1 pod pod-96c13b39-887d-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 17:07:43.408: INFO: Waiting for pod pod-96c13b39-887d-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 17:07:43.415: INFO: Pod pod-96c13b39-887d-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:07:43.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8449" for this suite. 
+Jun 6 17:07:49.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:07:49.688: INFO: namespace emptydir-8449 deletion completed in 6.265920052s + +• [SLOW TEST:10.491 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:07:49.688: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 6 17:07:49.873: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9d072a29-887d-11e9-9995-4ad9032ea524", Controller:(*bool)(0xc0033a20c2), BlockOwnerDeletion:(*bool)(0xc0033a20c3)}} +Jun 6 17:07:49.885: INFO: 
pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"9d027fdd-887d-11e9-9995-4ad9032ea524", Controller:(*bool)(0xc0032ece26), BlockOwnerDeletion:(*bool)(0xc0032ece27)}} +Jun 6 17:07:49.901: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9d0491da-887d-11e9-9995-4ad9032ea524", Controller:(*bool)(0xc0033a2292), BlockOwnerDeletion:(*bool)(0xc0033a2293)}} +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:07:54.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-3159" for this suite. +Jun 6 17:08:00.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:08:01.166: INFO: namespace gc-3159 deletion completed in 6.237904736s + +• [SLOW TEST:11.478 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[k8s.io] Docker Containers + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Docker Containers + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:08:01.166: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test override arguments +Jun 6 17:08:01.274: INFO: Waiting up to 5m0s for pod "client-containers-a3d78dee-887d-11e9-b3bf-0e7bbe1a64f6" in namespace "containers-6981" to be "success or failure" +Jun 6 17:08:01.283: INFO: Pod "client-containers-a3d78dee-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.486945ms +Jun 6 17:08:03.291: INFO: Pod "client-containers-a3d78dee-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017343562s +Jun 6 17:08:05.300: INFO: Pod "client-containers-a3d78dee-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026342486s +STEP: Saw pod success +Jun 6 17:08:05.301: INFO: Pod "client-containers-a3d78dee-887d-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 17:08:05.307: INFO: Trying to get logs from node cncf-1 pod client-containers-a3d78dee-887d-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 17:08:05.348: INFO: Waiting for pod client-containers-a3d78dee-887d-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 17:08:05.354: INFO: Pod client-containers-a3d78dee-887d-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:08:05.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-6981" for this suite. +Jun 6 17:08:11.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:08:11.619: INFO: namespace containers-6981 deletion completed in 6.257848406s + +• [SLOW TEST:10.454 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API 
volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:08:11.620: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 6 17:08:11.751: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aa157d77-887d-11e9-b3bf-0e7bbe1a64f6" in namespace "downward-api-7882" to be "success or failure" +Jun 6 17:08:11.758: INFO: Pod "downwardapi-volume-aa157d77-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.034818ms +Jun 6 17:08:13.766: INFO: Pod "downwardapi-volume-aa157d77-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015556865s +Jun 6 17:08:15.775: INFO: Pod "downwardapi-volume-aa157d77-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024220641s +STEP: Saw pod success +Jun 6 17:08:15.775: INFO: Pod "downwardapi-volume-aa157d77-887d-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 17:08:15.783: INFO: Trying to get logs from node cncf-2 pod downwardapi-volume-aa157d77-887d-11e9-b3bf-0e7bbe1a64f6 container client-container: +STEP: delete the pod +Jun 6 17:08:15.819: INFO: Waiting for pod downwardapi-volume-aa157d77-887d-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 17:08:15.825: INFO: Pod downwardapi-volume-aa157d77-887d-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:08:15.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7882" for this suite. +Jun 6 17:08:21.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:08:22.121: INFO: namespace downward-api-7882 deletion completed in 6.286969774s + +• [SLOW TEST:10.501 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir 
volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:08:22.121: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Jun 6 17:08:22.299: INFO: Waiting up to 5m0s for pod "pod-b05f19b8-887d-11e9-b3bf-0e7bbe1a64f6" in namespace "emptydir-4321" to be "success or failure" +Jun 6 17:08:22.305: INFO: Pod "pod-b05f19b8-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.484054ms +Jun 6 17:08:24.318: INFO: Pod "pod-b05f19b8-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018546438s +Jun 6 17:08:26.328: INFO: Pod "pod-b05f19b8-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028365977s +STEP: Saw pod success +Jun 6 17:08:26.328: INFO: Pod "pod-b05f19b8-887d-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 17:08:26.335: INFO: Trying to get logs from node cncf-1 pod pod-b05f19b8-887d-11e9-b3bf-0e7bbe1a64f6 container test-container: +STEP: delete the pod +Jun 6 17:08:26.373: INFO: Waiting for pod pod-b05f19b8-887d-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 17:08:26.379: INFO: Pod pod-b05f19b8-887d-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:08:26.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4321" for this suite. +Jun 6 17:08:32.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:08:33.078: INFO: namespace emptydir-4321 deletion completed in 6.684223979s + +• [SLOW TEST:10.956 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Subpath + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:08:33.078: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod pod-subpath-test-configmap-7nfp +STEP: Creating a pod to test atomic-volume-subpath +Jun 6 17:08:33.216: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7nfp" in namespace "subpath-6297" to be "success or failure" +Jun 6 17:08:33.224: INFO: Pod "pod-subpath-test-configmap-7nfp": Phase="Pending", Reason="", readiness=false. Elapsed: 7.458558ms +Jun 6 17:08:35.231: INFO: Pod "pod-subpath-test-configmap-7nfp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014817457s +Jun 6 17:08:37.243: INFO: Pod "pod-subpath-test-configmap-7nfp": Phase="Running", Reason="", readiness=true. Elapsed: 4.02623967s +Jun 6 17:08:39.251: INFO: Pod "pod-subpath-test-configmap-7nfp": Phase="Running", Reason="", readiness=true. Elapsed: 6.034964631s +Jun 6 17:08:41.262: INFO: Pod "pod-subpath-test-configmap-7nfp": Phase="Running", Reason="", readiness=true. Elapsed: 8.045873299s +Jun 6 17:08:43.271: INFO: Pod "pod-subpath-test-configmap-7nfp": Phase="Running", Reason="", readiness=true. Elapsed: 10.054527256s +Jun 6 17:08:45.280: INFO: Pod "pod-subpath-test-configmap-7nfp": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.063392636s +Jun 6 17:08:47.288: INFO: Pod "pod-subpath-test-configmap-7nfp": Phase="Running", Reason="", readiness=true. Elapsed: 14.07172689s +Jun 6 17:08:49.297: INFO: Pod "pod-subpath-test-configmap-7nfp": Phase="Running", Reason="", readiness=true. Elapsed: 16.080634429s +Jun 6 17:08:51.307: INFO: Pod "pod-subpath-test-configmap-7nfp": Phase="Running", Reason="", readiness=true. Elapsed: 18.090318837s +Jun 6 17:08:53.317: INFO: Pod "pod-subpath-test-configmap-7nfp": Phase="Running", Reason="", readiness=true. Elapsed: 20.100325941s +Jun 6 17:08:55.325: INFO: Pod "pod-subpath-test-configmap-7nfp": Phase="Running", Reason="", readiness=true. Elapsed: 22.108781071s +Jun 6 17:08:57.334: INFO: Pod "pod-subpath-test-configmap-7nfp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.117762242s +STEP: Saw pod success +Jun 6 17:08:57.334: INFO: Pod "pod-subpath-test-configmap-7nfp" satisfied condition "success or failure" +Jun 6 17:08:57.342: INFO: Trying to get logs from node cncf-2 pod pod-subpath-test-configmap-7nfp container test-container-subpath-configmap-7nfp: +STEP: delete the pod +Jun 6 17:08:57.382: INFO: Waiting for pod pod-subpath-test-configmap-7nfp to disappear +Jun 6 17:08:57.388: INFO: Pod pod-subpath-test-configmap-7nfp no longer exists +STEP: Deleting pod pod-subpath-test-configmap-7nfp +Jun 6 17:08:57.388: INFO: Deleting pod "pod-subpath-test-configmap-7nfp" in namespace "subpath-6297" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:08:57.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-6297" for this suite. 
+Jun 6 17:09:03.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:09:03.687: INFO: namespace subpath-6297 deletion completed in 6.28632202s + +• [SLOW TEST:30.610 seconds] +[sig-storage] Subpath +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:09:03.690: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should update labels on modification [NodeConformance] [Conformance] + 
/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating the pod +Jun 6 17:09:08.403: INFO: Successfully updated pod "labelsupdatec91d3040-887d-11e9-b3bf-0e7bbe1a64f6" +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:09:10.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8556" for this suite. +Jun 6 17:09:32.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:09:32.759: INFO: namespace downward-api-8556 deletion completed in 22.294732979s + +• [SLOW TEST:29.070 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:09:32.762: INFO: >>> kubeConfig: 
/tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward api env vars +Jun 6 17:09:32.861: INFO: Waiting up to 5m0s for pod "downward-api-da6e3908-887d-11e9-b3bf-0e7bbe1a64f6" in namespace "downward-api-606" to be "success or failure" +Jun 6 17:09:32.867: INFO: Pod "downward-api-da6e3908-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.510133ms +Jun 6 17:09:34.877: INFO: Pod "downward-api-da6e3908-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015663907s +Jun 6 17:09:36.888: INFO: Pod "downward-api-da6e3908-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026657256s +STEP: Saw pod success +Jun 6 17:09:36.888: INFO: Pod "downward-api-da6e3908-887d-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 17:09:36.895: INFO: Trying to get logs from node cncf-2 pod downward-api-da6e3908-887d-11e9-b3bf-0e7bbe1a64f6 container dapi-container: +STEP: delete the pod +Jun 6 17:09:36.934: INFO: Waiting for pod downward-api-da6e3908-887d-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 17:09:36.940: INFO: Pod downward-api-da6e3908-887d-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:09:36.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-606" for this suite. 
+Jun 6 17:09:42.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:09:43.257: INFO: namespace downward-api-606 deletion completed in 6.308755402s + +• [SLOW TEST:10.495 seconds] +[sig-node] Downward API +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:09:43.258: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name projected-secret-test-e0b2867a-887d-11e9-b3bf-0e7bbe1a64f6 +STEP: Creating a pod to test consume secrets +Jun 6 17:09:43.389: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e0b448c9-887d-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-4844" to be "success or 
failure" +Jun 6 17:09:43.403: INFO: Pod "pod-projected-secrets-e0b448c9-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.492045ms +Jun 6 17:09:45.421: INFO: Pod "pod-projected-secrets-e0b448c9-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031330085s +Jun 6 17:09:47.430: INFO: Pod "pod-projected-secrets-e0b448c9-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040635427s +STEP: Saw pod success +Jun 6 17:09:47.430: INFO: Pod "pod-projected-secrets-e0b448c9-887d-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure" +Jun 6 17:09:47.438: INFO: Trying to get logs from node cncf-1 pod pod-projected-secrets-e0b448c9-887d-11e9-b3bf-0e7bbe1a64f6 container secret-volume-test: +STEP: delete the pod +Jun 6 17:09:47.475: INFO: Waiting for pod pod-projected-secrets-e0b448c9-887d-11e9-b3bf-0e7bbe1a64f6 to disappear +Jun 6 17:09:47.479: INFO: Pod pod-projected-secrets-e0b448c9-887d-11e9-b3bf-0e7bbe1a64f6 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 6 17:09:47.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4844" for this suite. 
+Jun 6 17:09:53.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 6 17:09:53.742: INFO: namespace projected-4844 deletion completed in 6.255817359s + +• [SLOW TEST:10.484 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-network] Proxy version v1 + should proxy logs on node using proxy subresource [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] version v1 + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 6 17:09:53.742: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy logs on node using proxy subresource [Conformance] + /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 6 17:09:53.861: INFO: (0) /api/v1/nodes/cncf-1/proxy/logs/:
+btmp
+containers/
+faillog... (200; 18.066439ms)
+Jun  6 17:09:53.873: INFO: (1) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 11.979742ms)
+Jun  6 17:09:53.888: INFO: (2) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 14.057728ms)
+Jun  6 17:09:53.898: INFO: (3) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 10.338476ms)
+Jun  6 17:09:53.907: INFO: (4) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 9.219399ms)
+Jun  6 17:09:53.919: INFO: (5) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 11.851368ms)
+Jun  6 17:09:53.930: INFO: (6) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 10.631613ms)
+Jun  6 17:09:53.941: INFO: (7) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 10.664055ms)
+Jun  6 17:09:53.952: INFO: (8) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 10.757013ms)
+Jun  6 17:09:53.963: INFO: (9) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 10.545819ms)
+Jun  6 17:09:53.977: INFO: (10) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 13.711677ms)
+Jun  6 17:09:54.775: INFO: (11) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 798.452387ms)
+Jun  6 17:09:54.789: INFO: (12) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 13.341519ms)
+Jun  6 17:09:54.800: INFO: (13) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 11.210061ms)
+Jun  6 17:09:54.811: INFO: (14) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 10.637066ms)
+Jun  6 17:09:54.822: INFO: (15) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 10.931036ms)
+Jun  6 17:09:54.839: INFO: (16) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 16.532859ms)
+Jun  6 17:09:54.850: INFO: (17) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 11.328399ms)
+Jun  6 17:09:54.873: INFO: (18) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 23.027571ms)
+Jun  6 17:09:54.898: INFO: (19) /api/v1/nodes/cncf-1/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 24.826547ms)
+[AfterEach] version v1
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:09:54.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "proxy-6117" for this suite.
+Jun  6 17:10:00.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:10:01.163: INFO: namespace proxy-6117 deletion completed in 6.242189123s
+
+• [SLOW TEST:7.421 seconds]
+[sig-network] Proxy
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
+  version v1
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
+    should proxy logs on node using proxy subresource  [Conformance]
+    /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSS
+------------------------------
+[sig-node] Downward API 
+  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-node] Downward API
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:10:01.164: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test downward api env vars
+Jun  6 17:10:01.284: INFO: Waiting up to 5m0s for pod "downward-api-eb5f24d1-887d-11e9-b3bf-0e7bbe1a64f6" in namespace "downward-api-6212" to be "success or failure"
+Jun  6 17:10:01.289: INFO: Pod "downward-api-eb5f24d1-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.497741ms
+Jun  6 17:10:03.302: INFO: Pod "downward-api-eb5f24d1-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017406575s
+Jun  6 17:10:05.310: INFO: Pod "downward-api-eb5f24d1-887d-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025467653s
+STEP: Saw pod success
+Jun  6 17:10:05.310: INFO: Pod "downward-api-eb5f24d1-887d-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure"
+Jun  6 17:10:05.317: INFO: Trying to get logs from node cncf-2 pod downward-api-eb5f24d1-887d-11e9-b3bf-0e7bbe1a64f6 container dapi-container: 
+STEP: delete the pod
+Jun  6 17:10:05.366: INFO: Waiting for pod downward-api-eb5f24d1-887d-11e9-b3bf-0e7bbe1a64f6 to disappear
+Jun  6 17:10:05.373: INFO: Pod downward-api-eb5f24d1-887d-11e9-b3bf-0e7bbe1a64f6 no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:10:05.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-6212" for this suite.
+Jun  6 17:10:12.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:10:12.560: INFO: namespace downward-api-6212 deletion completed in 7.177522915s
+
+• [SLOW TEST:11.396 seconds]
+[sig-node] Downward API
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
+  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSS
+------------------------------
+[k8s.io] InitContainer [NodeConformance] 
+  should invoke init containers on a RestartNever pod [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:10:12.560: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename init-container
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
+[It] should invoke init containers on a RestartNever pod [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: creating the pod
+Jun  6 17:10:12.672: INFO: PodSpec: initContainers in spec.initContainers
+[AfterEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:10:20.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "init-container-1551" for this suite.
+Jun  6 17:10:28.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:10:28.545: INFO: namespace init-container-1551 deletion completed in 8.281462528s
+
+• [SLOW TEST:15.985 seconds]
+[k8s.io] InitContainer [NodeConformance]
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+  should invoke init containers on a RestartNever pod [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SS
+------------------------------
+[k8s.io] Probing container 
+  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:10:28.546: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
+[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+Jun  6 17:10:50.440: INFO: Container started at 2019-06-06 17:10:30 +0000 UTC, pod became ready at 2019-06-06 17:10:47 +0000 UTC
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:10:50.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-299" for this suite.
+Jun  6 17:11:12.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:11:12.732: INFO: namespace container-probe-299 deletion completed in 22.277615898s
+
+• [SLOW TEST:44.186 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
+  should support rolling-update to same image  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:11:12.732: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213
+[BeforeEach] [k8s.io] Kubectl rolling-update
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414
+[It] should support rolling-update to same image  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: running the image docker.io/library/nginx:1.14-alpine
+Jun  6 17:11:12.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5097'
+Jun  6 17:11:13.153: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Jun  6 17:11:13.153: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
+STEP: verifying the rc e2e-test-nginx-rc was created
+Jun  6 17:11:13.170: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
+STEP: rolling-update to same image controller
+Jun  6 17:11:13.182: INFO: scanned /root for discovery docs: 
+Jun  6 17:11:13.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5097'
+Jun  6 17:11:29.122: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
+Jun  6 17:11:29.122: INFO: stdout: "Created e2e-test-nginx-rc-e92a919864b4be72368e607f77589ce3\nScaling up e2e-test-nginx-rc-e92a919864b4be72368e607f77589ce3 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e92a919864b4be72368e607f77589ce3 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e92a919864b4be72368e607f77589ce3 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
+Jun  6 17:11:29.122: INFO: stdout: "Created e2e-test-nginx-rc-e92a919864b4be72368e607f77589ce3\nScaling up e2e-test-nginx-rc-e92a919864b4be72368e607f77589ce3 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e92a919864b4be72368e607f77589ce3 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e92a919864b4be72368e607f77589ce3 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
+STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
+Jun  6 17:11:29.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5097'
+Jun  6 17:11:29.263: INFO: stderr: ""
+Jun  6 17:11:29.263: INFO: stdout: "e2e-test-nginx-rc-e92a919864b4be72368e607f77589ce3-5g24c e2e-test-nginx-rc-v72xx "
+STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
+Jun  6 17:11:34.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5097'
+Jun  6 17:11:34.410: INFO: stderr: ""
+Jun  6 17:11:34.410: INFO: stdout: "e2e-test-nginx-rc-e92a919864b4be72368e607f77589ce3-5g24c "
+Jun  6 17:11:34.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods e2e-test-nginx-rc-e92a919864b4be72368e607f77589ce3-5g24c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5097'
+Jun  6 17:11:34.535: INFO: stderr: ""
+Jun  6 17:11:34.535: INFO: stdout: "true"
+Jun  6 17:11:34.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods e2e-test-nginx-rc-e92a919864b4be72368e607f77589ce3-5g24c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5097'
+Jun  6 17:11:34.673: INFO: stderr: ""
+Jun  6 17:11:34.673: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
+Jun  6 17:11:34.673: INFO: e2e-test-nginx-rc-e92a919864b4be72368e607f77589ce3-5g24c is verified up and running
+[AfterEach] [k8s.io] Kubectl rolling-update
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
+Jun  6 17:11:34.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete rc e2e-test-nginx-rc --namespace=kubectl-5097'
+Jun  6 17:11:34.826: INFO: stderr: ""
+Jun  6 17:11:34.826: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:11:34.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-5097" for this suite.
+Jun  6 17:11:40.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:11:41.140: INFO: namespace kubectl-5097 deletion completed in 6.303450371s
+
+• [SLOW TEST:28.408 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl rolling-update
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    should support rolling-update to same image  [Conformance]
+    /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] InitContainer [NodeConformance] 
+  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:11:41.142: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename init-container
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
+[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: creating the pod
+Jun  6 17:11:41.262: INFO: PodSpec: initContainers in spec.initContainers
+Jun  6 17:12:33.327: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-26f922f0-887e-11e9-b3bf-0e7bbe1a64f6", GenerateName:"", Namespace:"init-container-5171", SelfLink:"/api/v1/namespaces/init-container-5171/pods/pod-init-26f922f0-887e-11e9-b3bf-0e7bbe1a64f6", UID:"26fa1bb7-887e-11e9-9995-4ad9032ea524", ResourceVersion:"3959925107", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63695437901, loc:(*time.Location)(0x8a140e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"262935281"}, Annotations:map[string]string{"cni.projectcalico.org/podIP":"10.2.0.242/32"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tvwxp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002142380), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tvwxp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tvwxp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tvwxp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00296c2b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"cncf-2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00048c0c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00296c340)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00296c360)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00296c368), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00296c36c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695437901, loc:(*time.Location)(0x8a140e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695437901, loc:(*time.Location)(0x8a140e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695437901, loc:(*time.Location)(0x8a140e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695437901, loc:(*time.Location)(0x8a140e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"51.68.41.114", PodIP:"10.2.0.242", StartTime:(*v1.Time)(0xc0011de200), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00085cfc0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000cba000)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://d4c72b57edea0b838f424110035fbf5d00d01b3e6c08e210310875be5f0825eb"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0011de320), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0011de2a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
+[AfterEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:12:33.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "init-container-5171" for this suite.
+Jun  6 17:12:55.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:12:55.603: INFO: namespace init-container-5171 deletion completed in 22.264829463s
+
+• [SLOW TEST:74.461 seconds]
+[k8s.io] InitContainer [NodeConformance]
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Variable Expansion 
+  should allow substituting values in a container's command [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:12:55.606: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename var-expansion
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test substitution in container's command
+Jun  6 17:12:56.768: INFO: Waiting up to 5m0s for pod "var-expansion-53f6ff8d-887e-11e9-b3bf-0e7bbe1a64f6" in namespace "var-expansion-1571" to be "success or failure"
+Jun  6 17:12:56.774: INFO: Pod "var-expansion-53f6ff8d-887e-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.587108ms
+Jun  6 17:12:58.784: INFO: Pod "var-expansion-53f6ff8d-887e-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01650439s
+Jun  6 17:13:00.797: INFO: Pod "var-expansion-53f6ff8d-887e-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02974772s
+STEP: Saw pod success
+Jun  6 17:13:00.798: INFO: Pod "var-expansion-53f6ff8d-887e-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure"
+Jun  6 17:13:00.812: INFO: Trying to get logs from node cncf-2 pod var-expansion-53f6ff8d-887e-11e9-b3bf-0e7bbe1a64f6 container dapi-container: 
+STEP: delete the pod
+Jun  6 17:13:00.884: INFO: Waiting for pod var-expansion-53f6ff8d-887e-11e9-b3bf-0e7bbe1a64f6 to disappear
+Jun  6 17:13:00.900: INFO: Pod var-expansion-53f6ff8d-887e-11e9-b3bf-0e7bbe1a64f6 no longer exists
+[AfterEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:13:00.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "var-expansion-1571" for this suite.
+Jun  6 17:13:06.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:13:07.176: INFO: namespace var-expansion-1571 deletion completed in 6.260139152s
+
+• [SLOW TEST:11.571 seconds]
+[k8s.io] Variable Expansion
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+  should allow substituting values in a container's command [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl run job 
+  should create a job from an image when restart is OnFailure  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:13:07.177: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213
+[BeforeEach] [k8s.io] Kubectl run job
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1510
+[It] should create a job from an image when restart is OnFailure  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: running the image docker.io/library/nginx:1.14-alpine
+Jun  6 17:13:07.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9467'
+Jun  6 17:13:07.441: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Jun  6 17:13:07.442: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
+STEP: verifying the job e2e-test-nginx-job was created
+[AfterEach] [k8s.io] Kubectl run job
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1515
+Jun  6 17:13:07.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 delete jobs e2e-test-nginx-job --namespace=kubectl-9467'
+Jun  6 17:13:07.591: INFO: stderr: ""
+Jun  6 17:13:07.591: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:13:07.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-9467" for this suite.
+Jun  6 17:13:13.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:13:13.901: INFO: namespace kubectl-9467 deletion completed in 6.29567224s
+
+• [SLOW TEST:6.724 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl run job
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    should create a job from an image when restart is OnFailure  [Conformance]
+    /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:13:13.902: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating secret with name secret-test-5e421dfc-887e-11e9-b3bf-0e7bbe1a64f6
+STEP: Creating a pod to test consume secrets
+Jun  6 17:13:14.048: INFO: Waiting up to 5m0s for pod "pod-secrets-5e440beb-887e-11e9-b3bf-0e7bbe1a64f6" in namespace "secrets-7433" to be "success or failure"
+Jun  6 17:13:14.057: INFO: Pod "pod-secrets-5e440beb-887e-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109831ms
+Jun  6 17:13:16.065: INFO: Pod "pod-secrets-5e440beb-887e-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01638379s
+Jun  6 17:13:18.286: INFO: Pod "pod-secrets-5e440beb-887e-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.237158945s
+STEP: Saw pod success
+Jun  6 17:13:18.286: INFO: Pod "pod-secrets-5e440beb-887e-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure"
+Jun  6 17:13:18.305: INFO: Trying to get logs from node cncf-2 pod pod-secrets-5e440beb-887e-11e9-b3bf-0e7bbe1a64f6 container secret-volume-test: 
+STEP: delete the pod
+Jun  6 17:13:18.345: INFO: Waiting for pod pod-secrets-5e440beb-887e-11e9-b3bf-0e7bbe1a64f6 to disappear
+Jun  6 17:13:18.350: INFO: Pod pod-secrets-5e440beb-887e-11e9-b3bf-0e7bbe1a64f6 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:13:18.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-7433" for this suite.
+Jun  6 17:13:24.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:13:24.619: INFO: namespace secrets-7433 deletion completed in 6.2597961s
+
+• [SLOW TEST:10.717 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Services 
+  should serve multiport endpoints from pods  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:13:24.620: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:86
+[It] should serve multiport endpoints from pods  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: creating service multi-endpoint-test in namespace services-7944
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7944 to expose endpoints map[]
+Jun  6 17:13:24.743: INFO: Get endpoints failed (6.717338ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
+Jun  6 17:13:25.751: INFO: successfully validated that service multi-endpoint-test in namespace services-7944 exposes endpoints map[] (1.014377987s elapsed)
+STEP: Creating pod pod1 in namespace services-7944
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7944 to expose endpoints map[pod1:[100]]
+Jun  6 17:13:27.814: INFO: successfully validated that service multi-endpoint-test in namespace services-7944 exposes endpoints map[pod1:[100]] (2.044025192s elapsed)
+STEP: Creating pod pod2 in namespace services-7944
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7944 to expose endpoints map[pod1:[100] pod2:[101]]
+Jun  6 17:13:29.892: INFO: successfully validated that service multi-endpoint-test in namespace services-7944 exposes endpoints map[pod1:[100] pod2:[101]] (2.05823155s elapsed)
+STEP: Deleting pod pod1 in namespace services-7944
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7944 to expose endpoints map[pod2:[101]]
+Jun  6 17:13:30.940: INFO: successfully validated that service multi-endpoint-test in namespace services-7944 exposes endpoints map[pod2:[101]] (1.035740856s elapsed)
+STEP: Deleting pod pod2 in namespace services-7944
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7944 to expose endpoints map[]
+Jun  6 17:13:32.081: INFO: successfully validated that service multi-endpoint-test in namespace services-7944 exposes endpoints map[] (1.127062452s elapsed)
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:13:32.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-7944" for this suite.
+Jun  6 17:13:54.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:13:54.399: INFO: namespace services-7944 deletion completed in 22.26682129s
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
+
+• [SLOW TEST:29.780 seconds]
+[sig-network] Services
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
+  should serve multiport endpoints from pods  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:13:54.400: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating the pod
+Jun  6 17:13:59.118: INFO: Successfully updated pod "annotationupdate7665a0c4-887e-11e9-b3bf-0e7bbe1a64f6"
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:14:01.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-2698" for this suite.
+Jun  6 17:14:23.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:14:23.469: INFO: namespace projected-2698 deletion completed in 22.297534416s
+
+• [SLOW TEST:29.069 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:14:23.471: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test downward API volume plugin
+Jun  6 17:14:23.598: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87b89670-887e-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-17" to be "success or failure"
+Jun  6 17:14:23.605: INFO: Pod "downwardapi-volume-87b89670-887e-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.83843ms
+Jun  6 17:14:25.615: INFO: Pod "downwardapi-volume-87b89670-887e-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016429563s
+Jun  6 17:14:27.623: INFO: Pod "downwardapi-volume-87b89670-887e-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025211858s
+STEP: Saw pod success
+Jun  6 17:14:27.624: INFO: Pod "downwardapi-volume-87b89670-887e-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure"
+Jun  6 17:14:27.632: INFO: Trying to get logs from node cncf-2 pod downwardapi-volume-87b89670-887e-11e9-b3bf-0e7bbe1a64f6 container client-container: 
+STEP: delete the pod
+Jun  6 17:14:27.672: INFO: Waiting for pod downwardapi-volume-87b89670-887e-11e9-b3bf-0e7bbe1a64f6 to disappear
+Jun  6 17:14:27.678: INFO: Pod downwardapi-volume-87b89670-887e-11e9-b3bf-0e7bbe1a64f6 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:14:27.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-17" for this suite.
+Jun  6 17:14:33.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:14:33.977: INFO: namespace projected-17 deletion completed in 6.290958312s
+
+• [SLOW TEST:10.507 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Update Demo 
+  should do a rolling update of a replication controller  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:14:33.978: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213
+[BeforeEach] [k8s.io] Update Demo
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:265
+[It] should do a rolling update of a replication controller  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: creating the initial replication controller
+Jun  6 17:14:34.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 create -f - --namespace=kubectl-5012'
+Jun  6 17:14:34.865: INFO: stderr: ""
+Jun  6 17:14:34.865: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Jun  6 17:14:34.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5012'
+Jun  6 17:14:35.006: INFO: stderr: ""
+Jun  6 17:14:35.006: INFO: stdout: "update-demo-nautilus-5vkqb update-demo-nautilus-kr72p "
+Jun  6 17:14:35.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-5vkqb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5012'
+Jun  6 17:14:35.177: INFO: stderr: ""
+Jun  6 17:14:35.177: INFO: stdout: ""
+Jun  6 17:14:35.177: INFO: update-demo-nautilus-5vkqb is created but not running
+Jun  6 17:14:40.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5012'
+Jun  6 17:14:40.306: INFO: stderr: ""
+Jun  6 17:14:40.306: INFO: stdout: "update-demo-nautilus-5vkqb update-demo-nautilus-kr72p "
+Jun  6 17:14:40.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-5vkqb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5012'
+Jun  6 17:14:40.419: INFO: stderr: ""
+Jun  6 17:14:40.419: INFO: stdout: "true"
+Jun  6 17:14:40.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-5vkqb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5012'
+Jun  6 17:14:40.544: INFO: stderr: ""
+Jun  6 17:14:40.544: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun  6 17:14:40.544: INFO: validating pod update-demo-nautilus-5vkqb
+Jun  6 17:14:40.553: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun  6 17:14:40.554: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun  6 17:14:40.554: INFO: update-demo-nautilus-5vkqb is verified up and running
+Jun  6 17:14:40.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-kr72p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5012'
+Jun  6 17:14:40.684: INFO: stderr: ""
+Jun  6 17:14:40.684: INFO: stdout: "true"
+Jun  6 17:14:40.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-nautilus-kr72p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5012'
+Jun  6 17:14:40.787: INFO: stderr: ""
+Jun  6 17:14:40.787: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun  6 17:14:40.787: INFO: validating pod update-demo-nautilus-kr72p
+Jun  6 17:14:40.797: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun  6 17:14:40.797: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun  6 17:14:40.797: INFO: update-demo-nautilus-kr72p is verified up and running
+STEP: rolling-update to new replication controller
+Jun  6 17:14:40.801: INFO: scanned /root for discovery docs: 
+Jun  6 17:14:40.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5012'
+Jun  6 17:15:04.014: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
+Jun  6 17:15:04.014: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Jun  6 17:15:04.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5012'
+Jun  6 17:15:04.198: INFO: stderr: ""
+Jun  6 17:15:04.198: INFO: stdout: "update-demo-kitten-nlf4p update-demo-kitten-zm7rz "
+Jun  6 17:15:04.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-kitten-nlf4p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5012'
+Jun  6 17:15:04.322: INFO: stderr: ""
+Jun  6 17:15:04.322: INFO: stdout: "true"
+Jun  6 17:15:04.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-kitten-nlf4p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5012'
+Jun  6 17:15:04.461: INFO: stderr: ""
+Jun  6 17:15:04.461: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
+Jun  6 17:15:04.461: INFO: validating pod update-demo-kitten-nlf4p
+Jun  6 17:15:04.472: INFO: got data: {
+  "image": "kitten.jpg"
+}
+
+Jun  6 17:15:04.472: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
+Jun  6 17:15:04.472: INFO: update-demo-kitten-nlf4p is verified up and running
+Jun  6 17:15:04.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-kitten-zm7rz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5012'
+Jun  6 17:15:04.609: INFO: stderr: ""
+Jun  6 17:15:04.609: INFO: stdout: "true"
+Jun  6 17:15:04.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 get pods update-demo-kitten-zm7rz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5012'
+Jun  6 17:15:04.754: INFO: stderr: ""
+Jun  6 17:15:04.754: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
+Jun  6 17:15:04.754: INFO: validating pod update-demo-kitten-zm7rz
+Jun  6 17:15:04.768: INFO: got data: {
+  "image": "kitten.jpg"
+}
+
+Jun  6 17:15:04.769: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
+Jun  6 17:15:04.769: INFO: update-demo-kitten-zm7rz is verified up and running
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:15:04.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-5012" for this suite.
+Jun  6 17:15:28.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:15:29.014: INFO: namespace kubectl-5012 deletion completed in 24.235467628s
+
+• [SLOW TEST:55.036 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Update Demo
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    should do a rolling update of a replication controller  [Conformance]
+    /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSS
+------------------------------
+[k8s.io] Probing container 
+  should have monotonically increasing restart count [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:15:29.014: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
+[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating pod liveness-http in namespace container-probe-1426
+Jun  6 17:15:33.142: INFO: Started pod liveness-http in namespace container-probe-1426
+STEP: checking the pod's current state and verifying that restartCount is present
+Jun  6 17:15:33.149: INFO: Initial restart count of pod liveness-http is 0
+Jun  6 17:15:47.218: INFO: Restart count of pod container-probe-1426/liveness-http is now 1 (14.068542469s elapsed)
+Jun  6 17:16:05.311: INFO: Restart count of pod container-probe-1426/liveness-http is now 2 (32.162037146s elapsed)
+Jun  6 17:16:24.908: INFO: Restart count of pod container-probe-1426/liveness-http is now 3 (51.758361024s elapsed)
+Jun  6 17:16:47.037: INFO: Restart count of pod container-probe-1426/liveness-http is now 4 (1m13.887907637s elapsed)
+Jun  6 17:17:47.072: INFO: Restart count of pod container-probe-1426/liveness-http is now 5 (2m13.922428243s elapsed)
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:17:47.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-1426" for this suite.
+Jun  6 17:17:54.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:17:54.301: INFO: namespace container-probe-1426 deletion completed in 7.186957697s
+
+• [SLOW TEST:145.286 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+  should have monotonically increasing restart count [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should run and stop simple daemon [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:17:54.301: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename daemonsets
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
+[It] should run and stop simple daemon [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating simple DaemonSet "daemon-set"
+STEP: Check that daemon pods launch on every node of the cluster.
+Jun  6 17:17:54.498: INFO: Number of nodes with available pods: 0
+Jun  6 17:17:54.498: INFO: Node cncf-1 is running more than one daemon pod
+Jun  6 17:17:55.515: INFO: Number of nodes with available pods: 0
+Jun  6 17:17:55.515: INFO: Node cncf-1 is running more than one daemon pod
+Jun  6 17:17:56.517: INFO: Number of nodes with available pods: 0
+Jun  6 17:17:56.517: INFO: Node cncf-1 is running more than one daemon pod
+Jun  6 17:17:57.525: INFO: Number of nodes with available pods: 2
+Jun  6 17:17:57.525: INFO: Number of running nodes: 2, number of available pods: 2
+STEP: Stop a daemon pod, check that the daemon pod is revived.
+Jun  6 17:17:57.590: INFO: Number of nodes with available pods: 1
+Jun  6 17:17:57.590: INFO: Node cncf-1 is running more than one daemon pod
+Jun  6 17:17:58.612: INFO: Number of nodes with available pods: 1
+Jun  6 17:17:58.612: INFO: Node cncf-1 is running more than one daemon pod
+Jun  6 17:17:59.607: INFO: Number of nodes with available pods: 1
+Jun  6 17:17:59.607: INFO: Node cncf-1 is running more than one daemon pod
+Jun  6 17:18:00.610: INFO: Number of nodes with available pods: 1
+Jun  6 17:18:00.610: INFO: Node cncf-1 is running more than one daemon pod
+Jun  6 17:18:01.608: INFO: Number of nodes with available pods: 1
+Jun  6 17:18:01.608: INFO: Node cncf-1 is running more than one daemon pod
+Jun  6 17:18:02.611: INFO: Number of nodes with available pods: 1
+Jun  6 17:18:02.611: INFO: Node cncf-1 is running more than one daemon pod
+Jun  6 17:18:03.607: INFO: Number of nodes with available pods: 1
+Jun  6 17:18:03.607: INFO: Node cncf-1 is running more than one daemon pod
+Jun  6 17:18:04.602: INFO: Number of nodes with available pods: 1
+Jun  6 17:18:04.602: INFO: Node cncf-1 is running more than one daemon pod
+Jun  6 17:18:05.609: INFO: Number of nodes with available pods: 2
+Jun  6 17:18:05.609: INFO: Number of running nodes: 2, number of available pods: 2
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3758, will wait for the garbage collector to delete the pods
+Jun  6 17:18:05.696: INFO: Deleting DaemonSet.extensions daemon-set took: 14.688264ms
+Jun  6 17:18:07.197: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.500697204s
+Jun  6 17:18:20.714: INFO: Number of nodes with available pods: 0
+Jun  6 17:18:20.714: INFO: Number of running nodes: 0, number of available pods: 0
+Jun  6 17:18:20.725: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3758/daemonsets","resourceVersion":"3959985939"},"items":null}
+
+Jun  6 17:18:20.734: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3758/pods","resourceVersion":"3959985940"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:18:20.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-3758" for this suite.
+Jun  6 17:18:26.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:18:27.049: INFO: namespace daemonsets-3758 deletion completed in 6.280026675s
+
+• [SLOW TEST:32.749 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  should run and stop simple daemon [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Proxy version v1 
+  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] version v1
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:18:27.054: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename proxy
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+Jun  6 17:18:27.166: INFO: (0) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 16.231932ms)
+Jun  6 17:18:27.176: INFO: (1) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 9.656883ms)
+Jun  6 17:18:27.194: INFO: (2) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 17.956161ms)
+Jun  6 17:18:27.204: INFO: (3) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 9.671767ms)
+Jun  6 17:18:27.214: INFO: (4) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 9.561323ms)
+Jun  6 17:18:27.224: INFO: (5) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 10.483127ms)
+Jun  6 17:18:27.236: INFO: (6) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 11.904856ms)
+Jun  6 17:18:27.246: INFO: (7) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 9.891921ms)
+Jun  6 17:18:27.257: INFO: (8) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 10.971777ms)
+Jun  6 17:18:27.267: INFO: (9) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 9.111041ms)
+Jun  6 17:18:27.277: INFO: (10) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 10.786253ms)
+Jun  6 17:18:27.288: INFO: (11) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 9.971726ms)
+Jun  6 17:18:27.297: INFO: (12) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 9.467023ms)
+Jun  6 17:18:27.308: INFO: (13) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 11.15313ms)
+Jun  6 17:18:27.327: INFO: (14) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 18.393211ms)
+Jun  6 17:18:27.337: INFO: (15) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 9.831009ms)
+Jun  6 17:18:27.349: INFO: (16) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 12.116346ms)
+Jun  6 17:18:27.363: INFO: (17) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 13.859418ms)
+Jun  6 17:18:27.376: INFO: (18) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 12.352983ms)
+Jun  6 17:18:27.387: INFO: (19) /api/v1/nodes/cncf-1:10250/proxy/logs/: 
+btmp
+containers/
+faillog... (200; 11.577303ms)
+[AfterEach] version v1
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:18:27.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "proxy-8030" for this suite.
+Jun  6 17:18:33.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:18:33.677: INFO: namespace proxy-8030 deletion completed in 6.277627173s
+
+• [SLOW TEST:6.623 seconds]
+[sig-network] Proxy
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
+  version v1
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
+    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
+    /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
+  should execute prestop http hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:18:33.678: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename container-lifecycle-hook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] when create a pod with lifecycle hook
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
+STEP: create the container to handle the HTTPGet hook request.
+[It] should execute prestop http hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: create the pod with lifecycle hook
+STEP: delete the pod with lifecycle hook
+Jun  6 17:18:43.503: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jun  6 17:18:43.513: INFO: Pod pod-with-prestop-http-hook still exists
+Jun  6 17:18:45.514: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jun  6 17:18:45.519: INFO: Pod pod-with-prestop-http-hook no longer exists
+STEP: check prestop hook
+[AfterEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:18:45.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-lifecycle-hook-5443" for this suite.
+Jun  6 17:19:07.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:19:07.806: INFO: namespace container-lifecycle-hook-5443 deletion completed in 22.257219151s
+
+• [SLOW TEST:34.129 seconds]
+[k8s.io] Container Lifecycle Hook
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+  when create a pod with lifecycle hook
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
+    should execute prestop http hook properly [NodeConformance] [Conformance]
+    /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
+  should create a job from an image, then delete the job  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:19:07.807: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213
+[It] should create a job from an image, then delete the job  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: executing a command with run --rm and attach with stdin
+Jun  6 17:19:07.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-489975799 --namespace=kubectl-4304 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
+Jun  6 17:19:11.893: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
+Jun  6 17:19:11.893: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
+STEP: verifying the job e2e-test-rm-busybox-job was deleted
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:19:13.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-4304" for this suite.
+Jun  6 17:19:22.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:19:22.994: INFO: namespace kubectl-4304 deletion completed in 9.040402767s
+
+• [SLOW TEST:15.188 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl run --rm job
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    should create a job from an image, then delete the job  [Conformance]
+    /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:19:22.996: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating configMap with name projected-configmap-test-volume-map-3a41cf8f-887f-11e9-b3bf-0e7bbe1a64f6
+STEP: Creating a pod to test consume configMaps
+Jun  6 17:19:23.148: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3a43a660-887f-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-9562" to be "success or failure"
+Jun  6 17:19:23.165: INFO: Pod "pod-projected-configmaps-3a43a660-887f-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.898716ms
+Jun  6 17:19:25.174: INFO: Pod "pod-projected-configmaps-3a43a660-887f-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02599438s
+Jun  6 17:19:27.183: INFO: Pod "pod-projected-configmaps-3a43a660-887f-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034460434s
+STEP: Saw pod success
+Jun  6 17:19:27.183: INFO: Pod "pod-projected-configmaps-3a43a660-887f-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure"
+Jun  6 17:19:27.189: INFO: Trying to get logs from node cncf-1 pod pod-projected-configmaps-3a43a660-887f-11e9-b3bf-0e7bbe1a64f6 container projected-configmap-volume-test: 
+STEP: delete the pod
+Jun  6 17:19:27.238: INFO: Waiting for pod pod-projected-configmaps-3a43a660-887f-11e9-b3bf-0e7bbe1a64f6 to disappear
+Jun  6 17:19:27.243: INFO: Pod pod-projected-configmaps-3a43a660-887f-11e9-b3bf-0e7bbe1a64f6 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:19:27.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-9562" for this suite.
+Jun  6 17:19:33.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:19:33.498: INFO: namespace projected-9562 deletion completed in 6.248824269s
+
+• [SLOW TEST:10.502 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:19:33.498: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test emptydir 0777 on tmpfs
+Jun  6 17:19:33.618: INFO: Waiting up to 5m0s for pod "pod-4082555c-887f-11e9-b3bf-0e7bbe1a64f6" in namespace "emptydir-4211" to be "success or failure"
+Jun  6 17:19:33.626: INFO: Pod "pod-4082555c-887f-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157788ms
+Jun  6 17:19:35.633: INFO: Pod "pod-4082555c-887f-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014705901s
+Jun  6 17:19:37.642: INFO: Pod "pod-4082555c-887f-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023829706s
+STEP: Saw pod success
+Jun  6 17:19:37.642: INFO: Pod "pod-4082555c-887f-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure"
+Jun  6 17:19:37.651: INFO: Trying to get logs from node cncf-2 pod pod-4082555c-887f-11e9-b3bf-0e7bbe1a64f6 container test-container: 
+STEP: delete the pod
+Jun  6 17:19:37.691: INFO: Waiting for pod pod-4082555c-887f-11e9-b3bf-0e7bbe1a64f6 to disappear
+Jun  6 17:19:37.697: INFO: Pod pod-4082555c-887f-11e9-b3bf-0e7bbe1a64f6 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:19:37.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-4211" for this suite.
+Jun  6 17:19:43.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:19:43.980: INFO: namespace emptydir-4211 deletion completed in 6.275040113s
+
+• [SLOW TEST:10.483 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+[sig-network] Services 
+  should provide secure master service  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:19:43.981: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:86
+[It] should provide secure master service  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:19:44.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-9547" for this suite.
+Jun  6 17:19:50.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:19:50.345: INFO: namespace services-9547 deletion completed in 6.23996053s
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
+
+• [SLOW TEST:6.364 seconds]
+[sig-network] Services
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
+  should provide secure master service  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:19:50.347: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating configMap with name projected-configmap-test-volume-4a8c9283-887f-11e9-b3bf-0e7bbe1a64f6
+STEP: Creating a pod to test consume configMaps
+Jun  6 17:19:50.486: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4a8eb465-887f-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-8829" to be "success or failure"
+Jun  6 17:19:50.495: INFO: Pod "pod-projected-configmaps-4a8eb465-887f-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.863861ms
+Jun  6 17:19:52.503: INFO: Pod "pod-projected-configmaps-4a8eb465-887f-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017306855s
+Jun  6 17:19:54.512: INFO: Pod "pod-projected-configmaps-4a8eb465-887f-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02620307s
+STEP: Saw pod success
+Jun  6 17:19:54.512: INFO: Pod "pod-projected-configmaps-4a8eb465-887f-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure"
+Jun  6 17:19:54.520: INFO: Trying to get logs from node cncf-1 pod pod-projected-configmaps-4a8eb465-887f-11e9-b3bf-0e7bbe1a64f6 container projected-configmap-volume-test: 
+STEP: delete the pod
+Jun  6 17:19:54.566: INFO: Waiting for pod pod-projected-configmaps-4a8eb465-887f-11e9-b3bf-0e7bbe1a64f6 to disappear
+Jun  6 17:19:54.572: INFO: Pod pod-projected-configmaps-4a8eb465-887f-11e9-b3bf-0e7bbe1a64f6 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:19:54.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-8829" for this suite.
+Jun  6 17:20:00.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:20:00.880: INFO: namespace projected-8829 deletion completed in 6.299962149s
+
+• [SLOW TEST:10.532 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
+  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
+  creating/deleting custom resource definition objects works  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:20:00.881: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename custom-resource-definition
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] creating/deleting custom resource definition objects works  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+Jun  6 17:20:00.987: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:20:02.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "custom-resource-definition-3993" for this suite.
+Jun  6 17:20:08.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:20:09.210: INFO: namespace custom-resource-definition-3993 deletion completed in 6.27680077s
+
+• [SLOW TEST:8.329 seconds]
+[sig-api-machinery] CustomResourceDefinition resources
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  Simple CustomResourceDefinition
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
+    creating/deleting custom resource definition objects works  [Conformance]
+    /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should update labels on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:20:09.211: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should update labels on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating the pod
+Jun  6 17:20:13.886: INFO: Successfully updated pod "labelsupdate55ca653a-887f-11e9-b3bf-0e7bbe1a64f6"
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:20:15.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-8524" for this suite.
+Jun  6 17:20:37.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:20:38.259: INFO: namespace projected-8524 deletion completed in 22.293885108s
+
+• [SLOW TEST:29.048 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should update labels on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] ReplicaSet 
+  should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-apps] ReplicaSet
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:20:38.260: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename replicaset
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+Jun  6 17:20:38.379: INFO: Creating ReplicaSet my-hostname-basic-671ea8fa-887f-11e9-b3bf-0e7bbe1a64f6
+Jun  6 17:20:38.397: INFO: Pod name my-hostname-basic-671ea8fa-887f-11e9-b3bf-0e7bbe1a64f6: Found 0 pods out of 1
+Jun  6 17:20:43.406: INFO: Pod name my-hostname-basic-671ea8fa-887f-11e9-b3bf-0e7bbe1a64f6: Found 1 pods out of 1
+Jun  6 17:20:43.406: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-671ea8fa-887f-11e9-b3bf-0e7bbe1a64f6" is running
+Jun  6 17:20:43.414: INFO: Pod "my-hostname-basic-671ea8fa-887f-11e9-b3bf-0e7bbe1a64f6-8chh9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-06 17:20:38 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-06 17:20:40 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-06 17:20:40 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-06 17:20:38 +0000 UTC Reason: Message:}])
+Jun  6 17:20:43.414: INFO: Trying to dial the pod
+Jun  6 17:20:48.475: INFO: Controller my-hostname-basic-671ea8fa-887f-11e9-b3bf-0e7bbe1a64f6: Got expected result from replica 1 [my-hostname-basic-671ea8fa-887f-11e9-b3bf-0e7bbe1a64f6-8chh9]: "my-hostname-basic-671ea8fa-887f-11e9-b3bf-0e7bbe1a64f6-8chh9", 1 of 1 required successes so far
+[AfterEach] [sig-apps] ReplicaSet
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:20:48.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replicaset-4559" for this suite.
+Jun  6 17:20:55.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:20:56.341: INFO: namespace replicaset-4559 deletion completed in 7.858715799s
+
+• [SLOW TEST:18.081 seconds]
+[sig-apps] ReplicaSet
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SS
+------------------------------
+[sig-storage] Secrets 
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:20:56.342: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating secret with name s-test-opt-del-71e5f5f9-887f-11e9-b3bf-0e7bbe1a64f6
+STEP: Creating secret with name s-test-opt-upd-71e5f64b-887f-11e9-b3bf-0e7bbe1a64f6
+STEP: Creating the pod
+STEP: Deleting secret s-test-opt-del-71e5f5f9-887f-11e9-b3bf-0e7bbe1a64f6
+STEP: Updating secret s-test-opt-upd-71e5f64b-887f-11e9-b3bf-0e7bbe1a64f6
+STEP: Creating secret with name s-test-opt-create-71e5f660-887f-11e9-b3bf-0e7bbe1a64f6
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:21:04.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-5324" for this suite.
+Jun  6 17:21:28.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:21:28.940: INFO: namespace secrets-5324 deletion completed in 24.236175353s
+
+• [SLOW TEST:32.599 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:21:28.940: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: create the deployment
+STEP: Wait for the Deployment to create new ReplicaSet
+STEP: delete the deployment
+STEP: wait for all rs to be garbage collected
+STEP: expected 0 rs, got 1 rs
+STEP: expected 0 pods, got 2 pods
+STEP: Gathering metrics
+W0606 17:21:30.129905      15 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
+Jun  6 17:21:30.130: INFO: For apiserver_request_total:
+For apiserver_request_latencies_summary:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:21:30.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-6018" for this suite.
+Jun  6 17:21:36.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:21:36.431: INFO: namespace gc-6018 deletion completed in 6.291631915s
+
+• [SLOW TEST:7.491 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun  6 17:21:36.432: INFO: >>> kubeConfig: /tmp/kubeconfig-489975799
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test downward API volume plugin
+Jun  6 17:21:36.570: INFO: Waiting up to 5m0s for pod "downwardapi-volume-89ca97b4-887f-11e9-b3bf-0e7bbe1a64f6" in namespace "projected-3978" to be "success or failure"
+Jun  6 17:21:36.579: INFO: Pod "downwardapi-volume-89ca97b4-887f-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.036725ms
+Jun  6 17:21:38.887: INFO: Pod "downwardapi-volume-89ca97b4-887f-11e9-b3bf-0e7bbe1a64f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317306832s
+Jun  6 17:21:40.896: INFO: Pod "downwardapi-volume-89ca97b4-887f-11e9-b3bf-0e7bbe1a64f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.326789916s
+STEP: Saw pod success
+Jun  6 17:21:40.897: INFO: Pod "downwardapi-volume-89ca97b4-887f-11e9-b3bf-0e7bbe1a64f6" satisfied condition "success or failure"
+Jun  6 17:21:40.905: INFO: Trying to get logs from node cncf-1 pod downwardapi-volume-89ca97b4-887f-11e9-b3bf-0e7bbe1a64f6 container client-container: 
+STEP: delete the pod
+Jun  6 17:21:40.966: INFO: Waiting for pod downwardapi-volume-89ca97b4-887f-11e9-b3bf-0e7bbe1a64f6 to disappear
+Jun  6 17:21:40.978: INFO: Pod downwardapi-volume-89ca97b4-887f-11e9-b3bf-0e7bbe1a64f6 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun  6 17:21:40.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-3978" for this suite.
+Jun  6 17:21:47.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  6 17:21:47.329: INFO: namespace projected-3978 deletion completed in 6.343052283s
+
+• [SLOW TEST:10.897 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.2-beta.0.85+66049e3b21efe1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSJun  6 17:21:47.331: INFO: Running AfterSuite actions on all nodes
+Jun  6 17:21:47.331: INFO: Running AfterSuite actions on node 1
+Jun  6 17:21:47.331: INFO: Skipping dumping logs from cluster
+
+Ran 204 of 3585 Specs in 6135.413 seconds
+SUCCESS! -- 204 Passed | 0 Failed | 0 Pending | 3381 Skipped PASS
+
+Ginkgo ran 1 suite in 1h42m16.744248603s
+Test Suite Passed
diff --git a/v1.14/ovh/junit_01.xml b/v1.14/ovh/junit_01.xml
new file mode 100644
index 0000000000..8a497c8048
--- /dev/null
+++ b/v1.14/ovh/junit_01.xml
@@ -0,0 +1,10350 @@
+<!-- junit_01.xml body: 10,350 lines of JUnit XML test-case results; the markup was stripped in extraction and only blank indentation remained, so the content is elided here. -->
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+  
\ No newline at end of file